
30 March 2008

Comments

Antisophie

Fascinating, the things that you doodle on the back of envelopes!

Granted, given the nature of the entire enterprise, a lot is assumed.

1. One curiosity I have is this: doesn't Moore's law apply specifically to computation in the medium of silicon?

In other words, if we were to find another medium for these processes, such as some polymer, water, or carbon, would our estimates of the physical limits on processing capacity change with the medium?

I am thinking here of media that would be superior to silicon (carbon, for instance).

2. Let's not forget (as you definitely touched upon) the software required. Those everyday processes of ours, such as perception, are very intricate: when we look at a sphere, we understand the same object by virtue of many senses. I can know my friend by how she smells, how she looks, how she sounds, and how she feels; if one of those senses were ever to fail, I would rely on another to recognise her. Perhaps I'm just being naive, but software simulating the brain sounds like a very interesting project. We may, for instance, come up with algorithms that replicate behaviours like learning or recognition, but do so in a way entirely different from our own. Does that count as the same 'behaviour'? The process arrives at the same result but is realised in a different way; a toy sketch of what I mean follows below.
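Here is that toy sketch (the function names and algorithms are purely illustrative): two procedures with entirely different internal mechanisms, yet identical observable behaviour.

# Toy illustration of "same behaviour, different realisation":
# two procedures whose internals differ completely but whose
# input/output behaviour is identical.

def sort_by_insertion(xs):
    # builds the result by slotting each element into place
    out = []
    for x in xs:
        i = 0
        while i < len(out) and out[i] < x:
            i += 1
        out.insert(i, x)
    return out

def sort_by_counting(xs):
    # entirely different mechanism: tally each distinct value
    return [x for x in sorted(set(xs)) for _ in range(xs.count(x))]

data = [3, 1, 2, 3, 0]
assert sort_by_insertion(data) == sort_by_counting(data) == sorted(data)

Judged purely by behaviour, the two are indistinguishable, even though nothing about how they work is shared.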

All the same, very interesting stuff!

Antisophie x

Harry

Hi, just a personal view on the theory of human-equivalent AI. I know nothing of this subject beyond what I have absorbed over my past 58 years.
I can see AI achieving human capabilities as far as raw computing capacity goes, e.g. choosing the right move in a chess game. The problem, as I see it, is that a machine will never have feelings or emotional choice, never have good or bad taste; it will lack the human soul as humans see it, with feelings such as jealousy, pride, guilt, and so on. As has been said, maybe you wouldn't want AI with these qualities (or flaws, maybe).

But to get to the main point of this reply: I think that AI with all these human emotions will never happen, not in 20 years, not in a thousand years. Computational power, yes; that is just a matter of how many chips you can string together. But giving this silicon or polymer feelings is another ball game entirely. There is something other than small electrical signals going on in the human brain; does anyone know what it is? Maybe when a system is wired up with enough contacts and wires it will automatically attain these qualities? Who knows?... H..

Barnaby Dawson

To Antisophie:

1) Moore's law has held across several crucial changes in technology. The real question is: can it survive a change away from 2D silicon wafers? (One medium-independent physical bound is sketched below.)

2) I suspect that truly intelligent computers would behave very differently from humans, so many direct comparisons might be dubious. However, I do expect that AIs would be good enough at a variety of tasks to be economically useful.
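On the physical-limits side of the question, one bound that does not depend on the medium at all is Landauer's limit: erasing a bit must dissipate at least kT ln 2 of energy, whatever the substrate. A back-of-envelope sketch (the temperature and power budget below are assumptions for illustration):

import math

# Landauer's limit: a medium-independent lower bound on the energy
# dissipated per irreversible bit operation (E = k*T*ln 2).
# The temperature and power budget are illustrative assumptions.
k_B = 1.380649e-23      # Boltzmann constant, J/K
T = 300.0               # assumed room temperature, K

e_per_bit = k_B * T * math.log(2)
print(f"Landauer limit at {T:.0f} K: {e_per_bit:.2e} J per bit")

power_budget = 100.0    # assumed power budget, W
print(f"Bit operations per second at {power_budget:.0f} W: {power_budget / e_per_bit:.2e}")

So a change of medium can move the engineering limits a long way, but not this thermodynamic floor.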

To Harry:

We base our belief that other people have minds on our observations of and interactions with them. Now, if the laws of physics are probabilistically Turing computable (as we currently believe), then it is possible in principle to build AIs that would behave in every way like people. To believe that such AIs didn't have emotions would seem arbitrary and unjustified to me. The alternative is to believe in new laws of physics that are not Turing computable, which would be a wildly speculative idea.

It may well be hard to create AIs with emotions but I am thoroughly convinced that it is possible in principle.

Stephen

One weakness in the notion that computational power equivalent to the brain's will lead to intelligent machines is the inference that once we can simulate the brain, the simulation will be able to think. This assumes that we understand how to feed input to the brain and read back its outputs. If one accepts the premise of Jeff Hawkins's book On Intelligence, which holds that the brain is a powerful prediction engine, then experience is required to make predictions. Developing that experience will require the ability to feed input into, and get output from, the artificial intelligence.

It seems like a very long stretch to go from simulating the individual neurons of the brain to actually having intelligence. In addition, means of providing meaningful stimulus to the brain, as well as means for the brain to manipulate its environment, will be required. Whether this manipulation happens purely in the information domain or via a robotic body likely doesn't matter much in the beginning. Developing the brain/world interface is an important step towards a human-level AI based on the brain.

I am not trying to argue that human-level intelligence cannot be achieved, just that reaching the computational equivalence of the brain is not by itself sufficient. Means of encoding and interpreting signals to and from the "brain" will be required to develop any form of real intelligence.

Barnaby Dawson

Stephen: You are quite right to point out the difficulties of simulating embryology and early learning, and those inherent in robotic body design.

I think we will be able to simulate embryological development and early learning well enough to create intelligence, because:

1) The brain is incredibly plastic and is able to adapt to many types of damage to its sensory apparatus (given a good neurological model, the AI should inherit this plasticity). I expect this would extend to the imperfect senses we would likely be capable of grafting onto an AI.

2) Many children are born lacking in one or several of the major senses. However, they are still capable of developing intelligence.

3) Neurological understanding seems to be developing at a fast rate, and the ability to simulate portions of nervous tissue should allow us to concentrate our efforts on the aspects of the brain most relevant to thought.

I suspect that this is not actually how AI will be developed. My mention of it is instead intended to provide an upper bound on the complexity of the design process for a functioning AI. I'm saying we could do it by understanding enough about how the brain functions.

Stephen

Barnaby: I do agree with your points.

1) People who have lost their sight have been able to regain a form of vision by "seeing" with their tongue and other body parts.

2) As long as there is at least one avenue of input to the brain, learning can happen. There is definitely no need for all the senses to be active.

3) Examples such as a monkey controlling a robot with its brain activity, and a rat's movements being controlled remotely via electrodes in its brain, show that we can both read from and write to the brain.

There is also no doubt that Moore's law is driving increases in computational power. That increase, matched against Hans Moravec's graphs, suggests we will likely reach computational equivalence to the human brain in the not-so-distant future. I also agree that once computational equivalence for the brain itself is reached, the computational requirements for the various brain inputs and outputs will be met almost immediately afterwards.
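To make that extrapolation concrete, here is a back-of-envelope sketch. Every figure in it is an assumption: Moravec's oft-cited rough estimate of ~10^14 instructions per second for the brain, an illustrative figure for a current desktop, and an assumed doubling period.

import math

# Back-of-envelope: years until hardware matches Moravec's rough
# estimate of the brain's processing rate. All figures are assumptions.
brain_ops = 1e14        # Moravec's estimate: ~100 million MIPS
desktop_ops = 1e10      # assumed ops/sec for a typical 2008 desktop
doubling_years = 1.5    # assumed Moore's-law doubling period

years = doubling_years * math.log2(brain_ops / desktop_ops)
print(f"Roughly {years:.0f} years, i.e. around {2008 + round(years)}")

The exponential makes the date fairly insensitive to the starting figure: even a tenfold error in the desktop estimate only shifts the crossover by about five years.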

So, technologically speaking, a platform from which a thought engine could be created is feasible. But the engineer inside me is trying to imagine how to debug such a system in order to bring it up. From an engineering perspective, being able to analyze the workings and functionality is desirable. But the ability to analyze the thought process really isn't necessary for a thought process to occur. Since I can't accurately debug my own thought processes, and I haven't met anyone who can do better, this is clearly a nice-to-have rather than a must-have requirement.

The part I still cannot decide is whether such a system would be intelligent or would merely seem intelligent. If the intelligence is built up from biological models, then one would have to conclude it is intelligent. If it is built up via gradual engineering improvements, then 'seems intelligent' makes more sense. At some point the difference will be too fine to tell.
