An organization that won the Nobel Peace Prize in 2017 for its work to eliminate nuclear weapons is sounding the alarm about the possibility of artificial intelligence leading to unintended wars.
Beatrice Fihn, executive director of the International Campaign to Abolish Nuclear Weapons, is worried that hackers could breach A.I. technologies that are used in nuclear programs or that they could use A.I. to dupe countries into launching attacks. For example, deepfakes, or realistic-looking computer-altered videos, may be used to “create a perceived threat that might not be there,” she warns, prompting governments to overreact.
Fihn told Fortune that she wants to convene a meeting in the fall with nuclear weapons experts and some of the leading companies in A.I. and cybersecurity. Participants in the off-the-record event, she said, would produce a document that her group would use to inform governments and others about the danger.
“Some companies are more powerful than governments today in terms of shaping the world,” Fihn said. She wants to “engage them in thinking about how they can contribute to a more sustainable world, one that reduces the threat of extinction.”
So far, some leading companies in A.I. including Microsoft and Google’s DeepMind A.I. unit have expressed interest, Fihn said. Microsoft and DeepMind declined to comment to Fortune.
She said that some companies are “a little bit intimidated by the issue,” believing it to be “very political.” That said, she thinks these companies recognize their power.
A.I. is often described as a huge benefit to humanity, potentially leading to more effective healthcare treatments or reducing auto accidents with the help of self-driving cars. But there is a darker counternarrative: the technology can also be used by criminals and, possibly, by nation states to sabotage adversaries.
“We don’t want to advocate for any restrictions on A.I.,” Fihn said. “But this technological development is happening—we have to be very careful.”
Fihn, who is from Switzerland, cautions that the secrecy involved in nuclear programs makes it difficult to know just how much A.I., if any, has been incorporated into them. What is known, however, is that A.I. can be used to target nuclear arsenals or the people who manage them.
“This is new stuff for us to think about,” Fihn said. Does the rise of A.I. pose realistic dangers, she asked, “or is our imagination going wild?”