Neuromorphic hardware – a path towards human-level artificial intelligence

Recently we have seen a slew of popular films that deal with artificial intelligence – most notably The Imitation Game, Chappie, Ex Machina, and Her. However, despite over five decades of research into artificial intelligence, there remain many tasks that humans find simple but that computers cannot do. Given the slow progress of AI, for many the prospect of computers with human-level intelligence seems further away today than it did when Isaac Asimov’s classic I, Robot was published in 1950. The fact is, however, that today neuromorphic chips offer a plausible path to realizing human-level artificial intelligence within the next few decades.
Starting in the early 2000s, researchers realized that neural network models – based on how the human brain works – could solve many tasks that could not be solved by other methods. The buzzphrase ‘deep learning’ has become a catch-all term for neural network models and related techniques. Neuromorphic chips implement deep learning algorithms directly in hardware and thus are vastly faster and more efficient than conventional computer hardware running the same models. Neuromorphic chips are currently being developed by a variety of public and private entities, including DARPA, the EU, IBM and Qualcomm.

The representation problem

A key difficulty solved by neural networks is the problem of programming conceptual categories into a computer, also called the “representation problem”. Programming a conceptual category requires constructing a “representation” in the computer’s memory to which phenomena in the world can be mapped. For example, “Clifford” would be mapped to the category of “dog” and also to “animal” and “pet”, while a VW Beetle would be mapped to “car”. Constructing a representation is very difficult since the members of a category can vary greatly in their appearance – for instance, a “human” may be male or female, old or young, and tall or short. Even a simple object, like a cube, will appear different depending on the angle it is viewed from and how it is lit. Since such conceptual categories are constructs of the human mind, it makes sense to look at how the brain itself stores representations. Neural networks store representations in the connections between neurons (called synapses), each of which contains a value called a “weight”. Instead of being programmed, neural networks learn what weights to use through a process of training. After observing enough examples, neural networks can categorize new objects they have never seen before, or at least offer a best guess. Today neural networks have become a dominant methodology for solving classification tasks such as handwriting recognition, speech-to-text, and object recognition.
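
To make the idea of learned weights concrete, here is a minimal sketch of the training process in Python (my own illustration, not taken from any particular neuromorphic system). The toy dataset, layer sizes, and learning rate are arbitrary choices; the point is simply that the category ends up encoded in the weights rather than in hand-written rules.

```python
import numpy as np

# Toy "representation" problem: classify 2-D points as inside (1) or
# outside (0) a circle. The category is never programmed explicitly --
# it ends up encoded in the learned weights W1 and W2.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 2))
y = (np.linalg.norm(X, axis=1) < 0.7).astype(float).reshape(-1, 1)

# One hidden layer of 16 units; the weights play the role of synapses.
W1 = rng.normal(0, 0.5, size=(2, 16))
b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, size=(16, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(2000):                      # "training" = adjusting weights
    h = np.tanh(X @ W1 + b1)                  # hidden activations
    p = sigmoid(h @ W2 + b2)                  # predicted probability of "inside"
    grad_out = (p - y) / len(X)               # gradient of cross-entropy loss
    grad_h = (grad_out @ W2.T) * (1 - h**2)   # backpropagate to hidden layer
    W2 -= 0.5 * h.T @ grad_out
    b2 -= 0.5 * grad_out.sum(axis=0)
    W1 -= 0.5 * X.T @ grad_h
    b1 -= 0.5 * grad_h.sum(axis=0)

# After training, the network generalizes to points it has never seen.
test = np.array([[0.1, 0.2], [0.9, 0.9]])
print(sigmoid(np.tanh(test @ W1 + b1) @ W2 + b2))  # roughly [[1], [0]] if training succeeded
```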

Key differences

Neural networks are based on simplified mathematical models of how the brain’s neurons operate. While any mathematical model can be simulated on today’s computers, today’s hardware is very inefficient at simulating neural network models. This inefficiency can be traced to fundamental differences between how the brain operates and how digital computers operate. While computers store information as strings of 0s and 1s, the synaptic “weights” the brain uses to store information can fall anywhere in a continuous range of values – i.e., the brain is analog rather than digital. More importantly, in a computer the number of signals that can be processed at the same time is limited by the number of CPU cores – perhaps 8 to 12 on a typical desktop or 1,000 to 100,000 on a supercomputer. While 100,000 sounds like a lot, it is tiny compared to the brain, which simultaneously processes up to a trillion (1,000,000,000,000) signals in a massively parallel fashion.
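
As a rough illustration of why serial hardware struggles here, the following sketch (illustrative only; the layer sizes are arbitrary) compares updating a layer of simulated neurons one at a time, the way a single CPU core must, with updating the whole layer at once.

```python
import time
import numpy as np

# Illustrative only: one update step for a layer of simulated "neurons",
# each summing weighted inputs from 2,000 "synapses". Sizes are arbitrary.
rng = np.random.default_rng(1)
n_neurons, n_inputs = 2_000, 2_000
weights = rng.normal(size=(n_neurons, n_inputs))   # analog-valued weights
inputs = rng.normal(size=n_inputs)

# Serial style: one neuron at a time, the way a single CPU core works.
t0 = time.perf_counter()
out_serial = [float(weights[i] @ inputs) for i in range(n_neurons)]
t_serial = time.perf_counter() - t0

# Whole layer at once: closer in spirit to the brain's parallelism,
# though still ultimately executed on conventional serial hardware.
t0 = time.perf_counter()
out_vectorized = weights @ inputs
t_vectorized = time.perf_counter() - t0

print(f"one-at-a-time: {t_serial:.4f} s, whole layer at once: {t_vectorized:.4f} s")
```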

Low power consumption

The two main differences between brains and today’s computers (parallelism and analog storage) contribute to the brain’s amazing energy efficiency. Natural selection made the brain remarkably energy efficient, since hunting for food is hard work. The human brain consumes only about 20 watts of power, while a supercomputing complex capable of simulating a tiny fraction of the brain can consume millions of watts. The main reason for this is that computers operate at much higher frequencies than the brain, and power consumption typically grows with the cube of frequency. Additionally, as a general rule digital circuitry consumes more power than analog – for this reason, some parts of today’s cellphones are built with analog circuits to improve battery life. A final reason for the high power consumption of today’s chips is that they require all signals to be perfectly synchronized by a central clock, requiring a timing-distribution system that complicates circuit design and increases power consumption by up to 30%. Copying the brain’s energy-efficient features (low frequencies, massive parallelism, analog signals, asynchronicity) makes a lot of economic sense and is currently the main driving force behind the development of neuromorphic chips.
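
The cubic rule quoted above can be motivated by treating dynamic power as roughly proportional to C·V²·f and assuming the supply voltage V scales roughly with the frequency f, giving P ~ f³. A back-of-the-envelope sketch of what that implies (illustrative numbers only):

```python
# Back-of-the-envelope sketch of the "power grows with the cube of
# frequency" rule: dynamic power is roughly C * V^2 * f, and if supply
# voltage V scales roughly with frequency f, then P ~ f^3.
# (Assumption for illustration; real chips deviate from this.)
def relative_dynamic_power(f_new_ghz, f_old_ghz):
    return (f_new_ghz / f_old_ghz) ** 3

# Running a core at 1 GHz instead of 3 GHz cuts dynamic power to ~4%.
print(relative_dynamic_power(1.0, 3.0))   # ~0.037
```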

Fault tolerance

Another force behind the development of neuromorphic chips is the fact that, like the brain, they are fault-tolerant – if a few components fail, the chip continues functioning normally. Some neuromorphic chip designs can sustain defect rates as high as 25%! This is very different from today’s computer hardware, where the failure of a single component usually renders the entire chip unusable. The need for precise fabrication has driven up the cost of chip production exponentially as component sizes have become smaller. Neuromorphic chips require lower fabrication tolerances and thus are cheaper to make.
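
One intuition for why such designs tolerate defects is redundancy: information is spread across many components, so no single one is critical. A toy sketch of that idea (my own illustration, not a model of any particular chip):

```python
import numpy as np

# Toy sketch of graceful degradation in a distributed representation:
# a value is encoded redundantly across many noisy units and read out
# by averaging, so randomly knocking out 25% of the units barely
# changes the answer.
rng = np.random.default_rng(2)
true_value = 0.8
units = true_value + rng.normal(0.0, 0.1, size=10_000)  # redundant encoding

alive = rng.random(10_000) > 0.25        # ~25% of components fail at random
print("all units:     ", units.mean())         # ~0.8
print("75% surviving: ", units[alive].mean())  # still ~0.8
```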

The Crossnet approach

Many different design architectures are being pursued. Several of today’s designs are built around the crossbar latch, a grid of nanowires connected by ‘latching switches’. Here at Stony Brook University, professor Konstantin K. Likharev has developed his own design called the “Crossnet”.

One possible layout, showing two ‘somas’, or circuits that simulate the basic functions of a neuron. The green circles play the role of synapses. From a presentation by K. K. Likharev.

One possible layout is shown above. Electronic devices called ‘somas’ play the role of the neuron’s cell body, which is to add up the inputs and fire an output. Somas can mimic neurons with several different levels of sophistication, depending on what is required for the task at hand. Importantly, somas can communicate via spikes (short-lived electrical impulses), since there is growing evidence that spike-train timing in the brain carries important information and is important for certain types of learning. The red and blue lines represent axons and dendrites, the two types of neural wires. The green circles connect these wires and play the role of synapses. Each of these ‘latching switches’ must be able to hold a ‘weight’, which is encoded in either a variable capacitance or a variable resistance. In principle, memristors would be an ideal component here, if one could be developed that is cheap to produce and highly reliable. Crucially, all of the Crossnet architecture can be implemented in traditional silicon-based (“CMOS”-like) technology. Each crossnet (as shown in the figure) is designed so that it can be stacked, with additional wires connecting somas on different layers. In this way, neuromorphic crossnet technology can achieve component densities that rival the human brain.
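
To give a feel for how this kind of hardware is typically modeled in software, here is a minimal leaky integrate-and-fire sketch in which a dense matrix W stands in for the grid of synaptic weights and each row drives one ‘soma’. This is my own illustration of the general idea, not Likharev’s actual circuit; all parameters are arbitrary.

```python
import numpy as np

# Minimal leaky integrate-and-fire sketch: a crossbar of analog weights
# W drives spiking "somas". The green-circle "synapses" are modeled as
# entries of W; each row of W feeds one soma.
rng = np.random.default_rng(3)
n_in, n_out = 64, 8
W = rng.uniform(0.0, 0.1, size=(n_out, n_in))  # analog synaptic weights

v = np.zeros(n_out)          # soma "membrane potentials"
threshold, leak = 1.0, 0.9   # fire when v > threshold; leak each step

for t in range(100):
    in_spikes = (rng.random(n_in) < 0.05).astype(float)  # sparse input spikes
    v = leak * v + W @ in_spikes      # integrate weighted input
    out_spikes = v > threshold        # somas that fire this step
    v[out_spikes] = 0.0               # reset fired somas
    if out_spikes.any():
        print(f"t={t}: somas {np.flatnonzero(out_spikes)} fired")
```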

Near-future applications

What applications can we expect from neuromorphic chips? According to Dr. Likharev, a professor of physics at Stony Brook University, in the short term neuromorphic chips have a myriad of applications, including big-data mining, character recognition, surveillance, robotic control and driverless-car technology. Google already uses neural-network-like algorithms to assist in things like search and ad placement. Because neuromorphic chips have low power consumption, it is conceivable that someday in the near future all cell phones will contain a neuromorphic chip that performs tasks such as speech-to-text or translating road signs from foreign languages. Currently there are apps available that perform these tasks, but they require connecting to the cloud to perform the necessary computations.

Cognitive architectures

According to Prof. Likharev, neuromorphic chips are the only current technology that can conceivably “mimic the mammalian cortex with practical power consumption”. Prof. Likharev estimates that his own ‘crossnet’ technology could in principle implement the same number of neurons and connections as the brain on approximately 10 cm × 10 cm of silicon. Implementing the human brain with neuromorphic chips will require much more than just creating the requisite number of neurons and connections, however. The human brain consists of thousands of interacting components or subnetworks. A collection of components and their pattern of connection is known as a ‘cognitive architecture’. The cognitive architecture of the brain is largely unknown, but there are serious efforts underway to map it, most notably Obama’s BRAIN Initiative and the EU’s Human Brain Project, which has the ambitious (some say overambitious) goal of simulating the entire human brain within the next decade. Neuromorphic chips are perfectly suited to testing different hypothetical cognitive architectures and to simulating how cognitive architectures may change due to aging or disease.

The fact that there are so many near-term benefits to neuromorphic computing has led many tech giants and governments to start investing in neuromorphic chips (prominent examples include the EU’s BrainScaleS project, the UK’s SpiNNaker brain-simulation machine, IBM’s “synaptic chips”, DARPA’s SyNAPSE program, and Brain Corporation, a research company funded by Qualcomm). Looking at the breadth and scope of these projects, one can now see a clear path down which humanity will achieve human-level AI. The main unknown is how long it will take for the correct cognitive architectures to be developed, along with the techniques needed for training and programming them. Nonetheless, the fundamental physics of neuromorphic hardware is solid – namely, such chips can in principle mimic the brain in component density and power consumption (and with thousands of times the speed). Even if some governments seek to ban the development of strong AI, it will be realized by someone, somewhere. What happens then is a matter of speculation. If such an AI is capable of self-improvement (as it likely would be if able to connect to the internet), the results could be disastrous for humanity. As discussed by the philosopher Nick Bostrom and others, developing containment and ‘constrainment’ methods for AI is not as easy as merely ‘installing a kill switch’. Therefore, we had best start thinking hard about such issues now, before it is too late.


Further reading:
Monroe, Don. “Neuromorphic Computing Gets Ready for the (Really) Big Time.” Communications of the ACM, Vol. 57, No. 6, pp. 13–15.
http://www.artificialbrains.com/



5 Responses to Neuromorphic hardware – a path towards human-level artificial intelligence

  1. I don’t know where to begin… I figure there are (roughly) two approaches to AI: the algorithmic and the neural net. The algorithmic uses a mathematical formula to direct the computer to make sense of the world. The neural net uses a black box trained by a data set to make sense of the world. Instead, I see theorists approaching AI like it is a tabula rasa that ought to learn from scratch.

    Also, there are sensations and perceptions in human psychology and physiology. Raw sensory data is processed both neurologically and psychologically into perceived data sets, so as to make sense of the environment. This is the category I would put neuromorphic chips into.

    I see more promise in streaming ASI constructed through neural nets and algorithms than I do in running field computers with neuromorphic chips, although I think that each has its place. What I resist is the idea that AI ought to be viewed only as a tabula rasa, since we have tremendous computing power to produce optimized AI algorithms, and we have tremendous databases to produce optimized neural nets.

    This is all the time (and patience) I have. The Singularity is coming much faster than most experts predicted. Few can understand the implications of artificial superintelligence.

    • Hi Brad,

      Thank you for your reply. Your algorithmic vs. neural network dichotomy makes sense. To some extent this dichotomy maps onto the classic ‘top-down’ (algorithmic) vs. ‘bottom-up’ (neural network) dichotomy.

      Streaming AI makes sense given the current state of development. Intelligent agents such as IBM’s Watson (which primarily uses tools from the algorithmic approach, such as natural language processing) will be streamed to clients around the globe. The exciting thing about neuromorphic chips is that the hardware is compact enough, and has low enough power consumption, that AI systems can be run locally, at least in principle.

      Most likely (and this is complete speculation on my part) the first human-level AI systems will be hybrids of the two approaches. I understand the resistance to the ‘tabula rasa’ or ‘black box’ model. The human brain itself is not at all a tabula rasa: it has built-in systems for things like language learning and built-in physiological responses (instincts) for things like pain avoidance, avoiding falling off cliffs, and many other things, such as avoiding incest. There are many ‘universal’ aspects to things like language and the conceptual categories humans use to understand the world that exist in all cultures across time and space and appear to be due to inherent structures in the brain determined by genetics (one possible list is called the ‘human universals’). Some parts of the brain are tabula rasas, however; for instance, parts of the visual cortex start with fairly random wiring configurations which are then ‘programmed’ during the first few months (newborns see very poorly, but learn how to see fairly quickly). If someone is born such that they do not receive visual information from the retina, parts of the visual cortex will be repurposed for processing other sensory data, such as hearing or touch.

  2. While interesting, I don’t think radically different computational hardware is needed. The human brain only activates a small percentage of our 10^11 neurons at any one instant. Using distributed computing and message passing, this is very achievable on traditional CPUs.

    We also don’t need to model the human/mammalian neocortex to achieve strong AI. In nature, intelligence is seen in a variety of neural architectures; e.g., cephalopods have distributed ganglia of unipolar neurons, yet are still capable of processing information at high speed, learning, planning, innovating, etc. – all traits of intelligent agents.

    As you mentioned, we still lack a comprehensive cognitive architecture that explains WHY these mechanisms exist in the brain and how they produce intelligence and consciousness. In my humble opinion, trying to extract this architecture from a human brain seems logically backwards. Instead of focusing on the human brain, we should be attempting to derive and test this cognitive architecture from simple connectomes, such as those of C. elegans, Drosophila, or avian species.

    The ‘Singularity’ predictions are based in science fiction. May I ask, why do you think intelligent machines could be disastrous for humanity?

    • Hi Sam,

      Thanks for sharing your alternate way of approaching this topic. The brain does normally use only a small fraction of its neurons at a time, but my (very limited) understanding is that this is to save energy.

      Your reference to Cephalopods is very interesting and thought provoking. I did not know that and am interested in learning more.

      Obviously, understanding the cognitive architecture of the brain is an enormous challenge. For one thing, as I understand it, it is very difficult to measure the precise connections between neurons within the brain, apart from our ability to map small areas, such as parts of the visual cortex. I think most scientists agree that it makes more sense to study smaller systems first, especially those for which we can obtain a connectome. Just today I read on Wikipedia that several top neuroscientists (such as Peter Dayan) have been critical of the EU’s Human Brain Project as being too unrealistic, since it has the goal of simulating the entire human brain. Obama’s BRAIN Initiative (introduced after the HBP) is also focused on the human brain. I think this is partly a public relations thing, because obviously the public is most supportive of things like curing mental diseases and neurological disorders, since many people have relatives who suffer from such things. Saying we would ‘find the connectome of a bird’ probably would not garner the same amount of support and attention from the public.

      I should qualify my fear of AI – I am not afraid of human-level AI, and certainly not afraid of the numerous technologies being developed today. What scares me is super-human AI. I suppose it is just the ‘unknown’ factors that scare me. We literally cannot predict what superintelligent AI will do, because we lack the mental ability to do so. The reason a ‘kill switch’ doesn’t work is that it requires human intervention, and humans can be manipulated. I have not read Nick Bostrom’s book on this subject, but I found his interview on the EconTalk podcast very fascinating.

      The thought of a superintelligent AI that has access to the internet scares me a lot, because there is growing evidence that most computer systems, even the ones for infrastructure and defence, are insecure. An AI could essentially act as a super hacker, capable of hacking into virtually any computer system and disrupting an adversary’s communications, infrastructure and financial markets. The thought of a superintelligent AI being able to hack into and control military systems is particularly scary. (Just today I saw someone post this on Facebook: http://www.foxnews.com/politics/2015/06/01/congress-us-military-highly-vulnerable-to-cyber-attacks/)

      Thinking much more speculatively, superintelligent AI can be thought of as a very powerful weapon, because it could be used to hack into enemy systems and coordinate military strategy. Just as countries raced to develop nuclear weapons in the 20th century, often while neglecting safeguards, I fear countries might race to build superintelligent AI in the 21st in a similar fashion, without fully evaluating all of the risks. Right now I don’t see this as the goal of DARPA’s neuromorphic research programs (which appear to be more aimed at things like surveillance and robot control), but it is something to be concerned about.

      • Thanks for the robust response. I think the brain project may bear some value, possibly in terms of being able to simulate the effects of modulating neurotransmitters by means of endogenous compounds for psychopharmacology.

        I know this may be controversial, but I think we are drastically overcomplicating how cognition and consciousness work. I wrote a short essay describing some of the points I made in more detail if you’re interested https://medium.com/@samsachedina/approaching-convergence-d57ebc08f88f

        Regarding your fear of superhuman AI:

        Cybersecurity, or a lack thereof, is a separate issue. A machine would be faced with the same combinatorial explosion that encryption algorithms generate. While you could argue a machine would be able to try multiple strategies in parallel, sophisticated networks monitor and restrict anomalous/high-frequency access behavior. A machine would suffer from the same restrictions as hackers controlling a botnet of tens/hundreds/thousands of independent machines. Once those controls fail, humans would exploit them, just as a machine directed by a human would.

        The weaponization of intelligent machines by our government or any government is one of my biggest fears. Instead of using AI to benefit humanity, the powers that be focus on war, surveillance, and other applications which restrict our freedoms. That’s why it’s important to beat them to the punch 🙂
