Three Paths to Superintelligence

In Superintelligence: Paths, Dangers, Strategies (first published by Oxford University Press in 2014), Nick Bostrom focuses on three main paths to superintelligence:

1. The AI Path: In this path, current (and future) AI technologies, such as machine learning, Bayesian networks, artificial neural networks, and evolutionary programming, are applied to bring about a superintelligence.

2. The Whole Brain Emulation Path: Imagine that you are near death. You agree to have your brain frozen and then cut into millions of thin slices. Banks of computer-controlled lasers are then used to reconstruct your connectome (i.e., how each neuron is linked to other neurons, along with the microscopic structure of each neuron’s synapses). This data structure (of neural connectivity) is then downloaded onto a computer that controls a synthetic body. If your memories, thoughts and capabilities arise from the connectivity structure and patterns/timings of neural firings of your brain, then your consciousness should awaken in that synthetic body.

The unique strength of this approach is that humanity would not have to understand how the brain works; it would simply have to copy the structure of a given brain to a sufficient level of molecular fidelity and precision (a toy sketch of such a connectivity data structure follows this list).

3. The Neuromorphic Path: In this case, neural network modeling and brain emulation techniques would be combined with AI technologies to produce a hybrid form of artificial intelligence. For example, instead of copying a particular person’s brain with high fidelity, broad segments of humanity’s overall connectome structure might be copied and then combined with other AI technologies.
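The book itself contains no code, but a small sketch may help make the phrase "data structure (of neural connectivity)" from the whole brain emulation path more concrete. The Python below is purely illustrative: the class and field names (Connectome, Neuron, Synapse, weight, delay_ms) are my own assumptions about what a minimal representation of neurons, synaptic links, and signal timings might look like, not anything specified by Bostrom or by actual emulation projects.

```python
# Illustrative sketch only: a toy representation of a connectome as
# neurons (nodes) joined by synapses (weighted, timed edges).
# All names and fields are assumptions for the sake of the example.

from dataclasses import dataclass, field


@dataclass
class Synapse:
    target: int       # index of the post-synaptic neuron
    weight: float     # connection strength recovered from the scan
    delay_ms: float   # propagation delay, since firing timings matter


@dataclass
class Neuron:
    synapses: list[Synapse] = field(default_factory=list)


@dataclass
class Connectome:
    neurons: list[Neuron] = field(default_factory=list)

    def connect(self, pre: int, post: int, weight: float, delay_ms: float) -> None:
        """Record that neuron `pre` projects onto neuron `post`."""
        self.neurons[pre].synapses.append(Synapse(post, weight, delay_ms))


# Toy usage: a three-neuron "brain" with two connections.
brain = Connectome(neurons=[Neuron() for _ in range(3)])
brain.connect(0, 1, weight=0.8, delay_ms=1.5)
brain.connect(1, 2, weight=-0.3, delay_ms=2.0)
print(len(brain.neurons[0].synapses))  # -> 1
```

A real emulation would of course need vastly more detail (per-synapse microstructure, neuron models, and so on); the point here is only that "copying the brain" amounts to recovering and storing this kind of connectivity graph.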

Although Bostrom’s writing style is somewhat dense and dry at times, the book covers a wealth of issues concerning these three paths, with a primary focus on the control problem. The control problem is the following: how can a population of humans, each of whose intelligence is vastly inferior to that of the superintelligent entity, maintain control over that entity? Comparing human intelligence to that of a superintelligent entity, it would be (analogously) as though a group of, say, dung beetles were trying to maintain control over the human (or humans) they have just created.

* * *

Nick Bostrom is a professor at Oxford University, where he is the founding Director of the Future of Humanity Institute, a multidisciplinary research center that enables a set of exceptional mathematicians, philosophers, and scientists to think about global priorities and big questions for humanity. He also directs the Strategic Artificial Intelligence Research Center.

To learn more about him and his work, please click here.

I also highly recommend Marty Neumeier’s Metaskills: Five Talents for the Robotic Age (2012).

 
