In this blog post I shall outline my reasons for believing that at some point during the 21st century AI of human-level intelligence (and probably significantly above) will be created.
I should start by acknowledging the history of over-optimistic predictions regarding AI. In the 1950s and 1960s, when the computer was still new, many people studying AI predicted that AIs would reach human parity within a generation. Similar predictions were made again in the 1980s. So I shall first try to explain why I think we have not yet achieved this.
Today's computers are general purpose machines. Given a description of another type of computer, your laptop could (with sufficient memory and time) simulate it. Brains, although largely digital, have many analog properties too. A brain is capable of many types of computation but is wired to be most effective at certain relevant tasks. Nevertheless, we have no reason to believe that brains could not in principle be simulated by a sufficiently powerful computer.
AI researchers in the late 20th century effectively assumed that by using sufficient ingenuity, and by leveraging certain perceived advantages of the computer, we could dramatically reduce the amount of computation necessary to make an AI. There may be some advantages to the silicon medium, but in this blog post I shall assume that any such advantages cannot be used to speed up the development of AI. This can only make the prediction of this post more sound (because it is a pessimistic assumption).
If a computer could simulate a human brain then it could clearly also run a human-level AI (simulating a modified human brain would be one way to achieve this). So the question effectively becomes "How fast a computer do you need to simulate a human brain in all its essentials?" I am assuming here that there is no new physics in the human brain relevant to the efficiency with which it functions. This is a well-justified assumption on the basis of the observed capabilities of people and our understanding of complexity theory.
Estimating how much computation the human brain achieves is a difficult task. We may ignore brain structures that do important computation, or we may include neuronal behaviour that is not relevant to thinking. Estimates of the number of synapses in the human brain vary, but 10^14 is a ballpark figure. However, it is far from clear that all of these synapses are engaged in useful computation all of the time.
The only way I see out of this quandary of estimation is to compare the capabilities of efficiently programmed computers directly with sections of tissue in the human brain. I haven't done this myself, but others have. In particular, Hans Moravec of Carnegie Mellon University has compared advanced visual perception systems with a portion of the human retina that performs roughly the same computations.
On this basis Hans calculates that our current personal computers (in 2008) are roughly equivalent to the 0.1 gram brain of a guppy.
Hans may be talking about general purpose computers here, or he might be talking about custom-made machines. The latter can be 10 or even 100 times faster on particular tasks. Pessimistically, I shall assume that Hans is already accounting for the amazing speed-ups one can get by designing computer chips for specific tasks.
This calculation puts a good 2008 desktop (with carefully designed chips) at roughly 1/10000th the power of the human brain. Hans then goes on to comment that, at the current rate of Moore's law, in 20 to 30 years we should expect desktop computers (defined as those costing around $1000) to supersede the capabilities of the human brain.
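The 20-to-30-year figure is easy to sanity-check. Here is the arithmetic as a quick Python sketch (the 18-to-24-month doubling periods are my assumptions for illustration, not Moravec's stated numbers):

```python
import math

# A 2008 desktop is taken to be ~1/10000th of a human brain (the estimate
# above); compute per dollar is assumed to double every 18-24 months.
shortfall = 10_000
doublings = math.log2(shortfall)  # ~13.3 doublings needed

for months in (18, 24):
    years = doublings * months / 12
    print(f"doubling every {months} months -> parity in ~{years:.0f} years")
```

This gives roughly 20 and 27 years respectively, squarely within the 20-to-30-year window.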
When human level AI becomes economically viable it will be created (I shall assume pessimistically that it won't be created earlier).
I shall look at capital and maintenance costs. When looking at the cost of human labour I assume Western world economics, because if AIs are economic anywhere they'll be created for use there:
Capital costs: The cost of creating an AI lies in creating the hardware and training the AI to the requisite standard. The second cost (though potentially very large) is insignificant because AIs can be cloned. Spending £100 million on educating an AI amounts to only £100 per AI if you copy the resulting intelligence 1 million times. Humans require education (not to mention food and shelter) for the first 18 years of their lives. A conservative estimate of the costs involved (borne by the state and by parents) would be £300,000 (Western world here). Currently the capital cost of computer hardware with brain equivalence would be roughly £5,000,000. This is roughly 15 times higher than the 'capital cost' of a human (or 10ish years assuming Moore's law holds).
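As a sanity check on these capital figures, here is the same arithmetic in Python (the ~2.5-year price-performance doubling period is my assumption; the other figures are the estimates above, and the ratio comes out nearer 17x than 15x, which is fine at back-of-envelope precision):

```python
import math

training_cost = 100_000_000   # £, one-off cost of educating the first AI
copies = 1_000_000
print(f"training cost per cloned AI: £{training_cost / copies:.0f}")

human_capital = 300_000       # £, raising and educating a person to 18
ai_hardware = 5_000_000       # £, brain-equivalent hardware in 2008
ratio = ai_hardware / human_capital
years = math.log2(ratio) * 2.5   # assumed doubling every ~2.5 years
print(f"hardware is ~{ratio:.0f}x the human figure; parity in ~{years:.0f} years")
```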
Maintenance costs: The cost of feeding, sheltering etc. a person is roughly £10,000 per year. Current life expectancies of computers are roughly 7 years, so the cost of replacing the hardware periodically is a maintenance cost. The cost in energy of running a desktop PC is roughly £280 per year. Assuming (pessimistically again) that hardware is replaced every 5 years and costs the full original whack each time gives an additional cost per year of 1/5 of the original capital expenditure. This calculation indicates that an AI of human-level intelligence would become economically viable (in maintenance terms) in roughly 15 years' time (Moore's law).
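The maintenance figure can be checked the same way. A rough Python sketch (the 2-year doubling period is my assumption; it lands at 13-15 years depending on what you assume):

```python
import math

human_yearly = 10_000     # £ per year to feed and shelter a person
energy = 280              # £ per year of electricity for the machine
hardware = 5_000_000      # £, brain-equivalent hardware in 2008
replace_every = 5         # years (pessimistic), full price each time

ai_yearly = hardware / replace_every + energy
# Hardware cost must shrink until the AI's yearly cost matches a human's:
factor = (hardware / replace_every) / (human_yearly - energy)
years = math.log2(factor) * 2   # assumed doubling every ~2 years
print(f"AI costs £{ai_yearly:,.0f}/yr now; viable in ~{years:.0f} years")
```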
This gives an estimate of roughly 15 years, leaving a substantial margin of error for achieving human-level AI by the end of the century. Note, though, that people take a couple of decades to mature mentally. The same may well be true of AIs.
I shall address a selection of plausible objections that one might make to my line of argument:
Computers would need bodies to really learn: Many people have limited mobility or lack various of the senses and still manage to develop intellectually. This level of limited mobility and sensory function is within the grasp of future robotics technology (one might even argue today's).
Moore's law won't hold up: There are various manifestations of Moore's law. The important one for our purposes is the trend in MIPS (millions of instructions per second). This version of the law is currently holding and is expected to hold until around 2016. After this, there are claims that the law will break down. Firstly, it must be noted that every five years or so someone claims that Moore's law will fail in 10 years' time. Since Moore's law was postulated, they have been wrong. Secondly, even if Moore's law failed today and dropped to a doubling every 6 years (a radical change), this would only prolong the realisation of human-level AI (on the desktop) into the 2090s.
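That slowed-down scenario is also easy to check. Starting from 1/10000th of a brain in 2008 and doubling only every 6 years (my reconstruction of the scenario, which lands in the late 2080s, i.e. consistent to within rounding with the figure above):

```python
import math

doublings = math.log2(10_000)   # ~13.3 doublings still needed in 2008
year = 2008 + doublings * 6     # one doubling every 6 years
print(f"desktop brain parity around {year:.0f}")
```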
People won't allow it: This is a really naive idea. Once the capital and maintenance costs of human-level AIs have decreased below those of people, they (the costs) will very quickly become insignificant. A decade after human equivalence is reached, AIs will be 4 times cheaper than people to employ (with a pessimistic Moore's law), or possibly even 32 times cheaper. Even with a global agreement against the use of AI, economic pressure will ensure that AIs are created. Rogue states, corporations and criminals would have too much to gain from flouting any international law.
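The 4x and 32x figures follow directly from compounding. Assuming (my reconstruction of the numbers) a pessimistic doubling every 5 years and the historical doubling every 2:

```python
# Cost advantage of an AI over a person, ten years after cost parity.
for years_per_doubling, label in ((5, "pessimistic"), (2, "historical")):
    factor = 2 ** (10 / years_per_doubling)
    print(f"{label} Moore's law: ~{factor:.0f}x cheaper after a decade")
```

This prints ~4x for the pessimistic case and ~32x for the historical one, matching the figures above.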
You need software too: Even if our ingenuity is not up to creating an AI from scratch, we could always copy the way our own brains do it. Although neuroscience still has a long way to go, it is not unreasonable to expect a working understanding of the mechanism of the human brain (if not how that mechanism translates into thought) by the end of the century.
Hans Moravec's article is entitled "Rise of the robots" and appeared in Scientific American's special edition on Robotics March 2008.
This blog post is a back of the envelope calculation. I may well come back and edit/correct it so be warned!