This blog post follows on from my last.
Today I shall give my reasons for predicting that computational capacity will continue growing for a while after 2020 (and why 3D chips will allow this). I shall also be explaining what that means for the development of AI, and what other barriers may emerge to increasing computational capacity. It will all be back-of-the-envelope calculations, but I hope they won't be too misleading.
Making chips 3D will allow us to get around the communication barrier to increasing performance, as the memory of a computer will start to move onto the processor itself. The number of transistors, the speed of the chip and the energy efficiency should go up too, because building in 3D means more space, very short wires, and correspondingly less energy wasted driving signals between components.
Of course, the current technology of stacking chips on top of each other isn't great, because it increases the cost of production, and once you're stacking 20-30 chips it's getting very difficult to do so accurately enough (not to mention the difficulty of keeping your chips cool). So what might be the next technology to take over after this?
At some point in the next 20-30 years I expect we shall start to produce 3D chips as solid (although probably porous) lumps of semiconductor. I have two suggestions as to how this might be achieved, together with some difficulties each faces:
X) A self-assembling nanoscale crystal. The idea here is to create nanoscale particles (for instance single-stranded DNA chemically attached to bits of semiconductor) which link up like a 3D tessellating pattern. The pattern is your 3D circuit.
Defects in the crystal's growth are inevitable, so care must be taken to ensure such errors are self-correcting and do not affect the final circuit. The nanoscale particles can be mass-produced using today's technology. However, predicting the 3D structure a selection of nanoparticles will create requires great computing power.
Firstly, you must be able to predict how the DNA strands will fold. The algorithms for this are still under-developed, but once they are worked out we can expect them to take at most 10,000 processor-hours (circa 2008) of CPU time (about £200 of energy) per folding. My source for that is Folding@home; the figure may be much better, as it's unclear when the page was written.
Secondly, you must be able to predict from that the crystalline structure grown and its electronic properties. That's probably very hard, although I'm guessing it's of about the same difficulty as simulating that amount of DNA folding.
In order to determine a 3D chip structure with millions of bits' worth of information to describe it, you'd need to tailor-design millions of these nanoparticles, determine their growing properties, and grow the resulting crystal at a very slow rate to reduce errors. If, say, you use a chip structure with a million-bit description, and each DNA strand needs 10,000 folding operations to design, then you'd be looking at a capital outlay of about £200 × 10,000 × 1,000,000, or £2 trillion. This is an upper bound on the cost of designing such a chip (once the simulation technologies for crystal growth and DNA folding are available). The cost could be reduced by advances in computational efficiency or computer hardware, or by reducing the complexity of the 3D chip.
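The £2 trillion figure above is just the product of the three numbers already quoted. A minimal sketch of that arithmetic, using the post's own (very rough) assumed figures:

```python
# Back-of-the-envelope design cost for the self-assembling crystal.
# All three inputs are the rough figures assumed in the post, not measured values.
COST_PER_FOLDING_GBP = 200        # energy cost of ~10,000 CPU-hours per folding simulation
FOLDINGS_PER_STRAND = 10_000      # folding simulations assumed per nanoparticle design
DESCRIPTION_BITS = 1_000_000      # bits in the chip's structural description

total_cost = COST_PER_FOLDING_GBP * FOLDINGS_PER_STRAND * DESCRIPTION_BITS
print(f"Design cost: £{total_cost:.2e}")  # → £2.00e+12, i.e. £2 trillion
```

Halving any one of the three inputs halves the total, which is why reducing the chip's descriptive complexity is the most direct lever on cost.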
Y) A 3D lithographic process. Cancer therapies routinely target precise volumes in a person's body to destroy tumours. Similar technology might allow us to burn a pattern of 'wires' and transistors into a 3D block of semiconductor.
There are at least three limitations here. Firstly, we need a substrate which will respond appropriately when heated. Secondly, we need the chip to be largely transparent to the energy beam (electromagnetic, or possibly an electron stream). Thirdly, we need a technique that can deliver highly complex patterns to be burnt into the 3D substrate.
Whichever way you cut it, it's difficult to get the information into the substrate. If you want to specify the structure of the 3D chip down to 12nm, then you'd need to get about 580 quadrillion bits, or roughly 72,000 terabytes, into the device somehow. It's probably best to opt for a chip design which is massively repetitive to avoid these fluxes of data.
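That bit count is just one bit per 12nm voxel in a 1cm cube. A quick sketch of the calculation:

```python
# Information needed to specify a 1 cm cube of substrate at 12 nm resolution,
# assuming one bit per voxel (a simplification; real features need more).
feature_size = 12e-9   # m, assumed beam resolution
cube_side = 1e-2       # m, a 1 cm cube

voxels_per_side = cube_side / feature_size   # ≈ 833,000
total_bits = voxels_per_side ** 3            # ≈ 5.8e17, i.e. ~580 quadrillion bits
terabytes = total_bits / 8 / 1e12            # ≈ 72,000 TB
print(f"{total_bits:.2e} bits ≈ {terabytes:,.0f} terabytes")
```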
Nevertheless, there would be a huge advantage to specifying the structure precisely at that scale. Chips designed for a specific purpose are often between 10 and 1,000 times better at their task than multipurpose chips (the chess-playing computer Deep Blue is an example). The amount of data moving into a 1cm cube of substrate can be approximated if we know how accurately we can focus the energy beam (probably around 12nm is feasible) and how quickly we can modulate it. In order to write the full pattern to the chip in under a second (remember, these things must be mass-produced), a single beam would have to be modulated about 580 quadrillion times per second, which is far beyond anything feasible; the writing would therefore have to be massively parallel, or exploit a repetitive design. Doing this while still accurately creating the chip amounts to a formidable challenge. But it is not obviously impossible.
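To see why parallelism is unavoidable, here is the same voxel count turned into a modulation-rate requirement. The 1 GHz per-beam rate is my own illustrative assumption, not a figure from any real lithography system:

```python
# Beam modulation rate needed to write a fully specified 1 cm cube in one second.
feature_size = 12e-9   # m, assumed beam resolution
cube_side = 1e-2       # m
write_time = 1.0       # s, target for mass production

total_voxels = (cube_side / feature_size) ** 3   # ≈ 5.8e17
single_beam_rate = total_voxels / write_time     # modulations/s for one serial beam

per_beam_rate = 1e9                              # hypothetical: each beam modulated at 1 GHz
beams_needed = single_beam_rate / per_beam_rate  # ≈ 580 million parallel beams
print(f"Serial beam: {single_beam_rate:.1e} Hz; at 1 GHz each, need {beams_needed:.1e} beams")
```

A massively repetitive design shrinks the data that must be streamed in, but a serial beam must still visit every voxel, so some form of parallel exposure seems required either way.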
Now I shall address how these technologies affect the prospects for AI. The human brain is very memory-demanding. If an AI we create is similar to the human brain (a big if), it will probably need as much memory. Even if it isn't, it will probably be much easier to design an AI which has access to a large amount of memory. So 3D chips are going in the right direction.
Landauer's principle implies that the human brain operates at about 3% efficiency (assuming that it does not use reversible computation). However, I am not sure whether to trust this 'law', as it has recently been challenged. Interestingly, it implies that the capacity of the human brain cannot be as high as some estimates put it; clearly something has to give. If the law does hold, it should be possible to match and exceed the human brain in energy efficiency. Actually, for AI to be economical it wouldn't be necessary to match the brain's energy costs, as humans consume much more energy than their brains need.
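For concreteness, here is what Landauer's principle says a 20W brain at body temperature could do at the theoretical limit. The 20W figure is the usual rough estimate of the brain's metabolic power, and the calculation only bounds irreversible bit erasures, not computation in general:

```python
import math

# Landauer limit: minimum energy to erase one bit is k_B * T * ln(2).
k_B = 1.380649e-23          # Boltzmann constant, J/K
T = 310.0                   # body temperature, K
landauer_j_per_bit = k_B * T * math.log(2)   # ≈ 3.0e-21 J per bit erased

brain_power = 20.0          # W, rough metabolic power of the human brain
max_erasures = brain_power / landauer_j_per_bit  # ≈ 6.7e21 bit-erasures/s ceiling
print(f"Limit: {landauer_j_per_bit:.2e} J/bit; ceiling: {max_erasures:.2e} erasures/s")
```

Any estimate of the brain's capacity that exceeds this ceiling must either be wrong, or require the brain to compute partly reversibly, which is the tension noted above.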
How good do we need to be at making 3D circuits to match the human brain's capacity for a reasonable outlay? Assuming Hans Moravec's estimate of the human brain's computational capacity is accurate, I calculate that even an accuracy of a few hundred nanometres would be enough. At 100nm we'd have both the memory and the processing capacity. As the human brain has relatively few types of neuron (<1,000,000), the complexity of the circuits won't have to be too high, so the considerations above about delivering design information when creating the circuits don't apply.
All in all, it seems feasible to make 3D circuits which have the same capacity as the human brain. This should make us considerably more confident that it will happen, and somewhat more confident that it will happen soon.