Blog: AI Warnings
I enjoyed listening to Sam Harris' latest podcast with George Dyson, Alison Gopnik, and Stuart Russell. In particular, the last of the interviews, with Stuart Russell, discussed some of the threats of AI and what we need to watch out for. I agree with many of the points raised in the show, and it's worth a listen. A few drawbacks of AI development stand out to me, and they differ from other existential threats we've faced before:
- We won't see it coming. New apps and memes spread much more quickly now. Think how long it took Facebook to reach a billion users versus how quickly TikTok did.
- It won’t look like anything we can clearly recognize. There won’t be some AI powered Godzilla emerging from the ocean to wreak havoc on cities.
- People will be the first to use it against each other, and they will likely lose control of it. Imagine a smart DoS attack aimed at an individual rather than a company, or a doxxing bot that puts up ads mimicking you. Then it turns against those who built it…
- We can’t unplug it. Nefarious AI might stop working on the surface, but we won’t know if it has ever really gone away.
- We might never know it's there, and just feel the effects as a run of seemingly bad luck.
- It will know how to play us, because it has far more experience with us than we have with it. GANs and billions of other test subjects helped it learn.
So what can we do? We first need to commit to not doing bad things with AI. We need to keep human-in-the-loop interaction, especially when we're trying to influence other people using AI tools. We can't let AI make the final decision on things that affect humans, and it should always have to explain itself to us. While these measures won't guarantee we avoid bad outcomes, at least they'd give us a way to fight against what some consider inevitable.