Blog: Superintelligence — 3 Takeaways

By Thom Soh

Superintelligence aims to imagine what the world will be like when machines are superintelligent — that is, when machines are able to surpass the combined human abilities in general intelligence. Bostrom puts forth that the achievement of superintelligence is a matter of when, not if, given the number of pathways available to that end. The world of the superintelligent is one marked by uncertainties, as humanity will lose control over its own fate.

1. Our Last Problem

This is quite possibly the most important and most daunting challenge humanity has ever faced. And — whether we succeed or fail — it is probably the last challenge we will ever face.

When we are (finally?) able to create that which is more intelligent than our combined abilities, the solutions to humanity’s problems will be out of our hands, for better or for worse. And, as the fate of the gorillas now depends more on us humans than on the gorillas themselves, so will the fate of humanity be in the hands of the superintelligent being.

Will we follow in the path of the gorillas, as mere afterthoughts in the wider ambitions of a greater intelligence, or will we be able to create a superintelligent being that has our best interests at heart, whatever those interests may be?

2. One Chance at Success

In practice, the control problem — the problem of how to control what the superintelligence would do — looks quite difficult. It also looks like we will only get one chance. Once unfriendly superintelligence exists, it would prevent us from replacing it or changing its preferences. Our fate would be sealed.

If humanity has an ace up its sleeve, it is that we would most likely be the architects of the superintelligent being, affording us significant influence as we seek to ensure that it does not destroy us (a benefit the gorilla was not blessed with).

However, given that any such being would be able to out-strategise us, our attempts at reining it in are largely limited to the pre-superintelligence stage, where we can take steps to imbue the ‘right’ values, create fail-safes and the like. This is a mammoth task, one at which humanity has but one shot. The chances of success are ostensibly low, given that (1) we don’t know what we don’t know, (2) problems with coordination and/or collective action are likely to emerge, and (3) human error can and must be expected given the complexities of the matter.

There is much to be pessimistic about, and the road ahead is one full of danger.

3. Philosophy with a Deadline

We could postpone work on some of the eternal questions for a little while, delegating that task to our hopefully more competent successors — in order to focus our own attention on a more pressing challenge: increasing the chance that we will actually have competent successors. This would be high-impact philosophy and high-impact mathematics.

The creation of a superintelligence will be where philosophy intersects with physics (if it hasn’t already), given that many of the problems that may arise with superintelligence, and many of the solutions to the control problem, will be rooted in philosophy. For instance…

(1) Would it be a mind-crime if the superintelligence, in its quest to achieve a desirable outcome for humans, were to run simulations of human-like conscious beings, killing them off mercilessly when done with the process?

(2) If we were to create a superintelligent being with a goal of maximising happiness, how would happiness be measured? Would the happiness of non-humans be weighted equally? Societal versus individual happiness? Short-term versus long-term happiness? How would we even code for happiness?

The philosophical ramifications of, and the opportunities presented by, a superintelligence are fascinating. An AI-complete world would be one that has (hopefully) solved many of the world’s toughest philosophical questions. If it has not, the point of reckoning looms.

Source: Artificial Intelligence on Medium