Past developments and present capabilities
Small sample sizes, selection biases, and--above all--the inherent unreliability of the subjective opinions elicited mean that one should not read too much into these expert surveys and interviews. They support no strong conclusion, but they do hint at a weak one: at least in lieu of better data or analysis, it may be reasonable to believe that human-level machine intelligence has a fairly sizeable chance of being developed by mid-century, and a non-trivial chance of arriving considerably sooner or much later; that it might fairly soon thereafter result in superintelligence; and that a wide range of outcomes may have a significant chance of occurring, including extremely good outcomes and outcomes as bad as human extinction.
Paths to superintelligence
Artificial intelligence: "Recursive self-improvement." A successful seed AI would be able to iteratively enhance itself: an early version of the AI could design an improved version of itself, and the improved version--being smarter than the original--could design an even smarter version, and so forth. Under some conditions, such a process of recursive self-improvement might continue long enough to result in an intelligence explosion--an event in which, in a short period of time, a system's level of intelligence increases from a relatively modest endowment of cognitive capabilities (perhaps sub-human in most respects, but with a domain-specific talent for coding and AI research) to radical superintelligence.
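The dynamic can be made concrete with a toy model. The sketch below is purely illustrative and rests on assumptions not stated above: it treats the rate of improvement as optimization power divided by recalcitrance, holds recalcitrance constant, and lets part of the optimization power scale with the system's own intelligence.

```python
"""Toy growth model of recursive self-improvement (an illustrative
assumption, not a model given in this summary): growth shifts from
roughly linear, while outside research effort dominates, to exponential
once the system's own contribution to its improvement dominates."""

def simulate(steps=60, dt=0.1, outside_effort=1.0, self_weight=0.5,
             recalcitrance=1.0, intelligence=1.0):
    """Numerically integrate dI/dt = (outside_effort + self_weight * I) / recalcitrance."""
    trajectory = [intelligence]
    for _ in range(steps):
        # Optimization power = constant outside effort + the system's own share.
        optimization_power = outside_effort + self_weight * intelligence
        intelligence += dt * optimization_power / recalcitrance
        trajectory.append(intelligence)
    return trajectory

traj = simulate()
print(f"I(start)={traj[0]:.1f}  I(mid)={traj[len(traj)//2]:.1f}  I(end)={traj[-1]:.1f}")
```

The point is only the qualitative shape: under these assumptions the trajectory is exponential; if recalcitrance instead rose steeply with intelligence, growth would level off rather than explode.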
Whole brain emulation ("uploading"): Intelligent software would be produced by scanning and closely modeling the computational structure of a biological brain. The whole brain emulation path does not require that we figure out how human cognition works or how to program an artificial intelligence; it requires only that we understand the low-level functional characteristics of the basic computational elements of the brain. No fundamental conceptual or theoretical breakthrough is needed for whole brain emulation to succeed. Compared with the artificial intelligence path, it relies less on theoretical insight and more on technological capability.
Biological cognition: A third path to greater-than-current-human intelligence is to enhance the functioning of biological brains. Because germline interventions are subject to a generational lag, progress along this path could not be nearly as sudden as in scenarios involving machine intelligence.
Brain–computer interfaces
Networks and organizations: A further path is the gradual enhancement of the networks and organizations that link individual human minds with one another and with various artifacts and bots. The idea here is not that this would enhance the intellectual capacity of individuals enough to make them superintelligent, but rather that some system composed of individuals thus networked and organized might attain a form of superintelligence--what in the next chapter we will elaborate as "collective superintelligence."
The superintelligent will
The instrumental convergence thesis: Several instrumental values can be identified which are convergent in the sense that their attainment would increase the chances of the agent's goal being realized for a wide range of final goals and a wide range of situations, implying that these instrumental values are likely to be pursued by a broad spectrum of situated intelligent agents.
Self-preservation
Goal-content integrity
Cognitive enhancement
Technological perfection
Resource acquisition
Is the default outcome doom?
The first superintelligence may shape the future of Earth-originating life, could easily have non-anthropomorphic final goals, and would likely have instrumental reasons to pursue open-ended resource acquisition. If we now reflect that human beings consist of useful resources (such as conveniently located atoms) and that we depend for our survival and flourishing on many more local resources, we can see that the outcome could easily be one in which humanity quickly becomes extinct.
The treacherous turn: While weak, an AI behaves cooperatively (increasingly so, as it gets smarter). When the AI gets sufficiently strong--without warning or provocation--it strikes, forms a singleton, and begins directly to optimize the world according to the criteria implied by its final values.
The control problem
We can divide potential control methods into two broad classes: capability control methods, which aim to control what the superintelligence can do; and motivation selection methods, which aim to control what it wants to do.
It is important to realize that some control method (or combination of methods) must be implemented before the system becomes superintelligent. It cannot be done after the system has obtained a decisive strategic advantage.
Motivation selection can take several forms:
Direct specification: explicitly formulating a goal or set of rules to be followed.
Indirect normativity: setting up the system so that it can discover an appropriate set of values for itself by reference to some implicitly or indirectly formulated criterion.
Domesticity: building the system so that it would have modest, non-ambitious goals.
Augmentation: rather than creating a motivation system from scratch, selecting an agent that already has an acceptable motivation system and then augmenting that agent's cognitive powers to make it superintelligent, while ensuring that the motivation system does not get corrupted in the process.
Acquiring values
The principle of epistemic deference: A future superintelligence occupies an epistemically superior vantage point: its beliefs are (probably, on most topics) more likely than ours to be true. We should therefore defer to the superintelligence's opinion whenever feasible.
Indirect normativity applies this principle to the value-selection problem. Lacking confidence in our ability to specify a concrete normative standard, we would instead specify some more abstract condition that any normative standard should satisfy, in the hope that a superintelligence could find a concrete standard that satisfies the abstract condition. We could give a seed AI the final goal of continuously acting according to its best estimate of what this implicitly defined standard would have it do.
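As a schematic illustration of that last idea, here is a toy decision procedure (every standard, action, and number below is hypothetical, invented for this sketch): the agent holds a probability distribution over candidate concrete standards, and picks whichever action maximizes expected value under its current best estimate of the implicitly defined standard.

```python
"""Schematic toy of indirect normativity (all names and numbers are
hypothetical): the agent is given no concrete value function, only an
estimate of which concrete standard an abstract criterion picks out,
and it acts on that best estimate."""

# Hypothetical concrete standards the abstract criterion might pick out,
# each scoring the available actions differently.
CANDIDATE_STANDARDS = {
    "standard_a": {"act_1": 1.0, "act_2": 0.2},
    "standard_b": {"act_1": 0.3, "act_2": 0.9},
}

def normalize(weights):
    """Turn raw plausibility weights into a probability distribution."""
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

def choose_action(weights, actions=("act_1", "act_2")):
    """Pick the action with the highest expected value under the agent's
    current best estimate of the implicitly defined standard."""
    posterior = normalize(weights)
    def expected_value(action):
        return sum(p * CANDIDATE_STANDARDS[name][action]
                   for name, p in posterior.items())
    return max(actions, key=expected_value)

# The agent's behavior tracks its evolving estimate of the standard:
print(choose_action({"standard_a": 0.8, "standard_b": 0.2}))  # -> act_1
print(choose_action({"standard_a": 0.1, "standard_b": 0.9}))  # -> act_2
```

The design point of the sketch is that value learning happens at runtime: the behavior changes as the estimate changes, without anyone rewriting the goal itself.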
Will the best in human nature please stand up
Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb. Such is the mismatch between the power of our plaything and the immaturity of our conduct. Superintelligence is a challenge for which we are not ready now and will not be ready for a long time. We have little idea when the detonation will occur, though if we hold the device to our ear we can hear a faint ticking sound.