Novelty

Novelty = the inherently unknown

From a negative spiral of failure to the useful term reverting?

While I consider myself an AI and systems & cybernetics researcher, I can't shake the impression that both domains have failed. Genius insights have arisen from them, but their impact on society is marginal. I consider it a failure when we compare the expected impact on society with the current impact.

Today I read a story that again gives me the feeling of failure. There is a nice article in Wired about technological evolution. The interview is just a small summary, but I get the impression that much of it is reinventing the wheel. Except for the mention of Stuart Kauffman, I didn't see any of the other great authors in the summary, nor did I read them between the lines.

We fail as a science if people reinvent the wheel; it means we did something wrong, that we didn't make our insights become autonomous entities in our society (e.g. strong memes). I think the article actually gives us an idea of why we failed with respect to the economics of our science. Let me quote a part of the article:

…That was the difference between Tim Berners-Lee’s successful HTML code and Ted Nelson’s abortive Xanadu project. Both tried to jump into the same general space—a networked hypertext—but Tim’s approach did it with a dumb half-step, while Ted’s earlier, more elegant design required that everyone take five steps all at once.

Did we try to do it too well and therefore fail? Do we need dumber half-steps? Notice how in this article the whole idea of variety by natural selection is actually expressed in a phrase with a more negative connotation: "To create something great, you need the means to make a lot of really bad crap."

Autonomy, Mastery, Purpose

Make AI stand for Artificial-or-Aggregative Intelligence

Several different thoughts led me to the idea of using AI as a way to blur the boundary between what is artificial and what is aggregative. Lately I've come to the impression that the artificial is the condition for aggregating intelligence, but that it has no direct relation to intelligence itself. Thus AI, as used for strong AI, should stand for Aggregative Intelligence. Let me explain the thoughts that got me to this point.

In one of my working papers on the global brain I ran into trouble finding a good abbreviation. The idea is to update IT alignment with a focus on crowdsourcing. The idea is as old as my research on novelty regulation: we don't need to build intelligence, we can query it. So my first idea was Intelligence Querying, which would become IQ-alignment … OK, that would be a good way to create confusion. If I'm going to create confusion, I'd better have a good reason for it. So I decided to go for AI-alignment, where AI stands for Artificial-or-Aggregative Intelligence.

Crowdsourcing works by aggregating amateur intelligence into expert intelligence. One of my former advisor's experiments created an aggregation from kindergarten intelligence to adult intelligence. My own novelty-regulation model gets from non-intelligence to some intelligence by aggregation. Indeed, there is only aggregation of intelligence. Yet that doesn't seem a satisfying answer.

In another working paper, on bootstrapping, I'm transforming Herbert Simon's notion of the artificial into a more general term. He defines artificial as human-made as opposed to spontaneous, but if you're investigating the emergence of intelligence, it should become system-made as opposed to spontaneous. A system doesn't need to be intelligent to make other parts, not even for self-replication. In this sense, what value does "artificial" actually add?

We can understand the artificial by looking at how we create laboratories to investigate a specific hidden feature: by creating an artificial condition so that the feature becomes observable. In our attempt to understand the hidden feature, we mechanize the artificiality to make the feature more robust and autonomous (for more research on this, see Bruno Latour). Indeed, artificial intelligence is exactly what we got with expert systems: they create the artificial conditions to mechanize some intelligent processes. Ironically, it seems very improbable that such research will lead to an understanding of intelligence itself, as intelligence is aggregated.

It may seem as if the artificial is quite a trivial feature, but we don't see it that way. The artificial is what makes novelties emerge, not just intelligence. Could we say that the artificial is not important? How would the world look if we only knew how things work and had not created technology with that knowledge? That wouldn't be satisfying either. We need both, hence I suggest creating a blur by making AI stand for Artificial-or-Aggregative Intelligence. We may say that our brain is the artifact that creates the conditions to aggregate intelligence. I like to make a metaphorical gag with it:

Our minds are like whirlpools, while our brains are like toilets. A day is nothing more than a very long flush. When we wake up, we flush our brain, and it doesn't stop running until we fall asleep.

Investigating innovation processes AS-IS, the methodology

After giving several presentations in September, we have begun visiting companies to investigate how innovation happens there. The idea is to find the AS-IS situation and compare it to some of the literature. I won't be writing about the specific cases; the companies and the people we work with should be able to evaluate those first. I can talk about the methodology.

Disruption of nations by globalization

In Friedman's book "The World Is Flat" you find the expression "creative destruction on steroids". Recently I've been hearing this term more frequently; probably not everyone has read the book, but the expression does seem to appeal to many. In a recent debate on service science we got from the economic crisis to that expression. You could understand this as "flattening is a disruptive innovation". It seems to be a disruption of nations, by globalization and the rise of a new order. Let me outline what I mean by that.

Open Innovation as false-negatives

Yesterday I had an interesting talk with Wim Vanhaverbeke, who is an expert on open innovation. For example, he is a coauthor of Open Innovation: Researching a New Paradigm. To give some idea of the talk, I would like to tell about open innovation as false negatives, one of the many things I learned yesterday.

First Draft on Enterprise Innovation Planning (EIP)

When you stand at a crossroads of two disciplines, how do you make one aware of the potential leverage from the other? It may be easier to use the jargon of the discipline you are trying to reach, in this case management. The goal is to manage radical change by creating something similar to Enterprise Resource Planning (ERP) for innovation (hence EIP). The intention is to make it a project/experiment for my research on novelty during my PhD.

Working papers taking up more time

Last month has been full of stress. Luckily for me, a deadline got postponed several times, and I worked on a submission for the open IT innovation call until the 16th. The week after, I was in Vienna at the ECCS conference, but with no wifi in my room. This last week I've been updating all my other stuff. Many nice blog topics are too late now, but let me write something about the papers.

From Networked Business to radical service

Recently I've been getting more into IT management, thanks to the mentoring of the professors at MOSI (E. Torfs and E. Vandijck). While reading the book "Corporate Information Strategy and Management" I came across the concept and examples of B2B, which stands for "business to business". The concept of C2C (customer to customer), like eBay, was of course known to me. The B2B example was Covisint. Using the web for B2C may be the most straightforward e-commerce example.

The autonomy of useful knowledge

It is not easy to change one's view of the world, especially when it is related to the concept of free will. In the past years I have been busy reviewing the concepts of intelligence. It is one of those concepts still full of romance and mystery. I've found the use of a metaphor helpful for opening up. Try to imagine how old societies behaved toward the things they could not understand (the moon, the sun, the seasons): in most cases a human-like projection and behavior with supreme powers was given to them.
