Blog: Artificial Intelligence: New Threats to International Psychological Security – Sputnik International
Scholars from RANEPA and the Diplomatic Academy of the Russian Foreign Ministry focused on combating the malicious use of AI by terrorists. Their findings were published in the journal Russia in Global Affairs.
Dangers of Malicious Use of AI
Much has been written about the threats that artificial intelligence (AI) can pose to humanity. Today, this topic is among the most discussed issues in scientific and technical development. Although so-called Strong AI, characterised by independent systems thinking and possibly self-awareness and willpower, is still far from reality, various upgraded versions of Narrow AI now complete specific tasks that seemed impossible just a decade ago.
The positive uses of AI, in healthcare for instance, are already undoubtedly beneficial. But in the hands of terrorists or other criminal organisations, increasingly cheap and sophisticated AI technology could become more dangerous than nuclear weapons.
Scientists from different countries are now studying the threats that the malicious use of AI can pose to society in general or to specific areas of human activity, such as politics, the economy, military affairs and so forth. However, the threats posed directly to international psychological security (IPS) have never before been singled out as a separate area of study.
Meanwhile, the use of AI to destabilise international relations through high-tech information and psychological warfare against people is clearly becoming a greater danger.
The researchers proposed a new classification of threats from the malicious use of AI based on criteria that include, among other things, territorial coverage and the speed and form of propagation. Applied, this classification can help scientists find ways to counter these threats and develop tools to respond to them.
“AI technology will become cheaper in the future; this might elevate the threat of terrorist acts to a fundamentally new level”, said Darya Bazarkina, Professor at RANEPA Institute of Law and National Security’s Department of the International Security and Foreign Policy of Russia.
“Terrorists can use chatbots to design messages about nonexistent events and convince potential victims to attend them. In order to counter these threats, we need to raise the public’s awareness of these threats and educate people to be wary of long-distance communication with people they have never met. Another possible solution is to introduce certification of public events while verifying the information published about them. Of course, the technical experts’ task will be to protect databases containing information about events and the mechanism for certification”, she added.
The current level of technology, including AI, has made it possible for the researchers to identify a range of fundamentally new threats to IPS.
Using Deepfakes and Fake People Technology to Provoke International Tension
US technology company NVIDIA recently shared the results of a generative adversarial network designed to independently generate human faces (so-called fake people technology). Trained on a vast collection of images of real faces, the network generates high-quality images of nonexistent people with various cultural and ethnic features, emotions and moods.
Other developers will likely replicate the process eventually. At the same time, criminals could use similar images to stage provocations of various kinds that only a society with systematic polytechnic knowledge can recognise.
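The adversarial idea behind such networks can be shown with a toy sketch. In this heavily simplified illustration, a "generator" with a single numeric parameter learns to match a stand-in for real data by climbing the score of a critic; the critic here is fixed rather than trained, so it shows only the generator's half of the adversarial loop, and all values and update rules are invented for illustration. Real GANs train two deep networks on images, not a single number.

```python
REAL = 4.0          # stand-in for "real data" (e.g. statistics of real faces)
LR = 0.1            # generator learning rate

def critic(x):
    """Scores how 'real' a sample looks: highest at the real data point."""
    return -(x - REAL) ** 2

mu = 0.0            # the generator's single parameter
for _ in range(100):
    # Finite-difference estimate of how the critic's score changes
    # if the generator shifts its output slightly.
    grad = (critic(mu + 1e-3) - critic(mu - 1e-3)) / 2e-3
    mu += LR * grad  # move toward samples the critic scores as more real

print(round(mu, 2))  # → 4.0
```

In a full GAN the critic (discriminator) is updated in the same loop to tell real samples from generated ones, which is what forces the generator to keep improving.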
“Deepfake is a technology for synthesising human voice and image”, Evgeny Pashentsev, Professor at Lomonosov Moscow State University and leading researcher at the Institute of Contemporary International Studies of the Russian Foreign Ministry’s Diplomatic Academy, told Sputnik.
“It has already been used to generate videos featuring world leaders, including US president Donald Trump and President Vladimir Putin. Deepfake videos can influence the behaviour of large target groups and can be used in psychological warfare to provoke financial panic or war”.
Sentiment analysis is a class of content-analysis methods used in computational linguistics to automatically identify emotionally loaded words and an author’s emotional stance in texts. It draws on a wide range of sources, such as blogs, articles, forums and surveys.
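In its simplest, lexicon-based form, the method can be sketched in a few lines. The word lists and sample sentences below are invented for illustration; production systems use far larger lexicons or trained classifiers.

```python
# Tiny hand-made sentiment lexicon: word -> polarity score (illustrative only).
LEXICON = {
    "good": 1, "great": 2, "calm": 1, "safe": 1,
    "bad": -1, "panic": -2, "war": -2, "threat": -1, "fear": -2,
}

def sentiment_score(text: str) -> float:
    """Average polarity of the emotionally loaded words found in a text."""
    words = [w.strip(".,!?:;").lower() for w in text.split()]
    hits = [LEXICON[w] for w in words if w in LEXICON]
    return sum(hits) / len(hits) if hits else 0.0

print(sentiment_score("Markets stay calm and safe"))   # positive score
print(sentiment_score("Panic and fear spread: war!"))  # negative score
```

Applied at scale across blogs and forums, scores like these let an analyst map the emotional temperature of whole audiences, which is precisely what makes the technique attractive for psychological warfare.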
Sentiment analysis can be a highly efficient tool in psychological warfare; this is evidenced by the considerable interest in similar technology shown by the heads of the United States Special Operations Command (SOCOM).
Prognostic Weapons: Predicting People’s Behaviour Based on Social Network Data
In 2012, the US Intelligence Advanced Research Projects Activity (IARPA) launched the AI-based Early Model-Based Event Recognition Using Surrogates (EMBERS) programme to forecast civil unrest, including its specific dates and locations and the protesting population.
To do this, the system processes data from the media and social networks, as well as higher-quality sources such as economic indicators. The prospect of terrorists gaining access to similar programmes is, of course, highly dangerous: they could time major attacks to coincide with widespread social protests or target the areas with the highest levels of social or psychological unrest.
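One of the simplest signals such forecasting systems can monitor is a sudden spike in protest-related chatter relative to its recent baseline. The sketch below is a hypothetical illustration of that idea, not EMBERS itself; the counts and threshold are fabricated.

```python
def flag_spikes(daily_counts, window=3, factor=2.0):
    """Return indices of days whose count exceeds `factor` times the
    average of the preceding `window` days."""
    flagged = []
    for i in range(window, len(daily_counts)):
        baseline = sum(daily_counts[i - window:i]) / window
        if baseline > 0 and daily_counts[i] > factor * baseline:
            flagged.append(i)
    return flagged

# Mentions of protest keywords per day (fabricated numbers).
mentions = [4, 5, 4, 6, 5, 18, 7, 5]
print(flag_spikes(mentions))  # → [5]: day 5 stands out against its baseline
```

Real systems combine many such signals, weight sources by reliability and attach geographic metadata, but the underlying logic, comparing current activity against a learned baseline, is the same.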
The researchers suggest that prognostic weapons can be used by government and supranational bodies to combat social unrest through the timely adoption of social, economic and political measures aimed at achieving long-term stability.
Terrorist groups could also employ bots to cause reputational damage during political campaigns, recruit new supporters or organise political assassinations.
Seizing Control of Drones and Automated Infrastructure Facilities
Self-learning transport systems with AI-based management could be convenient targets for high-tech terrorist attacks. Terrorist control of transport management systems in large cities could lead to many deaths.
Commercial systems could be used to deploy drones or autonomous vehicles to deliver explosives and cause collisions. A series of large-scale disasters could lead to an international media frenzy, resulting in significant damage to psychological security.
RANEPA and Diplomatic Academy scientists based their study on the systematic analysis of the role of AI in the security sphere, scenario analysis, historical analogues and case analysis.
Together with Greg Simons of Sweden’s Uppsala University, the researchers are co-editors of the forthcoming book Terrorism and Advanced Technologies in Psychological Warfare: New Risks, New Opportunities to Counter the Terrorist Threat, compiled from chapters written by researchers from eleven countries.
At the initiative of Russian researchers, AI as a threat to IPS has been and will remain a subject of discussion at various international conferences and research workshops. These include the recent conference held in Khanty-Mansiysk from 9 to 12 June under the Commission of the Russian Federation for UNESCO and a number of other Russian and international organisations.
The panel on the malicious use of AI, organised with support from the European-Russian Communication Management Network, was attended by Natalia Komarova, governor of the Khanty-Mansi Autonomous Area.
The list of upcoming conferences includes the 4th Iberoamerican Forum, to be held in St. Petersburg on 1-3 October 2019, and the European Conference on the Impact of Artificial Intelligence and Robotics, on 31 October and 1 November 2019 in Oxford. Over the past year, the featured researchers have presented their academic papers in Argentina, Uruguay, Brazil, South Africa and Italy.
With AI threats to IPS becoming a problem of the near future, RANEPA and Diplomatic Academy experts insist that developing long-term targeted programmes must be a high priority for Russia.