Blog: The Modern Pen and the AI Sword – Harvard Political Review
The Modern Pen and the AI Sword
By Kendrick Foster | May 13, 2019
“Whoever said the pen is mightier than the sword obviously never encountered automatic weapons.” The modern version of that statement, apocryphally attributed to World War II General Douglas MacArthur, might today read, “Whoever said the pen is mightier than the sword obviously never encountered artificial intelligence.”
Weapons incorporating AI have undeniably grown more powerful, sparking fears of what they could one day become. Experts have warned against AI’s military applications since 2007, and just last year more than 2,000 AI researchers signed a pledge never to develop robots that could independently decide whom to kill. The researchers grimly warned, “We do not have long to act. Once this Pandora’s box is opened, it will be hard to close.”
While Star Wars-esque battle droids or ‘killer bots’ have not yet entered the modern military arsenal, AI-enhanced weapons have. In fact, AI’s uses in the military sphere extend beyond weapons, from training programs to logistical support. As the United States responds to the growing use of AI-enhanced weapons by countries like Russia and China, it must take precautions to avoid creating killer bots and to keep civilian leaders in control of AI-based projects.
AI: The “Enabler”?
While AI might not be a weapon in its own right, it acts as an “enabling” technology that makes other wartime tasks easier. AI also provides a foundation for the development of fully autonomous weapons systems that could absorb data, process it, and make decisions based on that data without human intervention. Currently, fully autonomous systems do not exist, and existing semi-autonomous systems still require human control. For now, the threat of robots killing humans by themselves remains more science fiction than reality.
As detailed by the U.S. Army’s Robotic and Autonomous Systems Strategy, autonomous or semi-autonomous systems present numerous benefits to even the world’s most advanced militaries. Autonomous vehicles “reduce the number of warfighters in harm’s way” and “perform missions impossible for humans,” while algorithms also “increase decision speed in time-critical operations” by analyzing large amounts of data quickly and accurately. By keeping track of threats in battle, AI helps humans keep up with the rapidly changing threat atmosphere, enabling them to fire on the right targets.
Technopolises, Centaurs, and Neural Networks
“Artificial intelligence is the future, not only for Russia, but for all humankind,” Russian President Vladimir Putin declared in 2017. “Whoever becomes the leader in this sphere will be the ruler of the world.” With Russia and China making significant progress in developing their own AI technologies, it seems that the “race for AI supremacy and AI hegemony” is already on, explained Philippe Lorenz, head of German think tank SNV’s Artificial Intelligence and Foreign Policy project, in an interview with the HPR. Indeed, these two major adversaries of the United States have taken notable steps to design technologies that will give them an edge over American defense systems; the United States is “trying to keep its military edge” against these two powers, especially China.
As Sam Bendett, a research analyst at the Center for Naval Analyses and a fellow in Russia studies at the American Foreign Policy Council, noted in an interview with the HPR, Russia is already developing a gamut of AI-enabled autonomous machines, from logistical vehicles and minesweepers to actual combat robots. Russia has also developed a missile that can “determine its [own] direction, altitude, and speed,” Russian General Viktor Bondarev reported to Russia’s official government newspaper, Rossiyskaya Gazeta.
Already, the Russians have battle-tested several semi-autonomous vehicles on the Syrian battlefield, including the Uran-6 and Uran-9. Although the two “technically failed,” according to Bendett, the Russians have started “incorporating the lessons learned from that failure into the future generation of ground vehicles.” After working out the kinks in its existing AI technology, Russia will be well prepared for the next generation of conflicts. Perhaps the most dangerous result of Russia’s focus on AI development is its creation of a dedicated innovation infrastructure. Bendett noted that an AI breakthrough could come from any number of places: military research centers, universities, or the private sector. He even mentioned the possibility of a “technopolis” dedicated to AI research.
But even Russia’s technological progress in AI pales in comparison to China’s. China views AI as integral to its national defense strategy, and its military has devoted significant resources to outdoing the United States in this critical area. China has undertaken important research in artificial neural networks, which reports have indicated it intends to introduce in submarines in order to interpret sonar data more easily, reducing the mental burden on commanders. It has also taken steps to incorporate neural networks into hypersonic missiles, making these weapons, which are designed to bypass existing American defenses, even more dangerous.
China has also explored the concept of the “centaur,” a weapon combining artificial and human intelligence. This concept first emerged in chess after Garry Kasparov, a human grandmaster, lost to IBM’s Deep Blue computer in 1997. Kasparov’s loss sparked a question: If a computer could defeat him, what could that computer and a human brain achieve by working together? As it turns out, quite a lot. Centaurs, which combine human intuition and automated logic, beat human grandmasters and computers alike.
In the military realm, humans can deal with problems like inaccurate intelligence and deception in ways that AI cannot, while AI can help commanders make decisions and analyze data more quickly. Forming a centaur in which the human asks the key questions and AI helps to answer them allows both parties to solve problems more effectively than either one could alone. Chinese scientists have also begun researching brain-computer interface technologies to allow humans to control autonomous vehicles more effectively.
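The division of labor described above can be sketched in a few lines of code. This is purely an illustrative toy, not any real system: the machine half ranks options quickly by some score, while the human half retains veto power over every suggestion. All function names and the scoring logic are hypothetical.

```python
# Illustrative "centaur" decision loop: the machine ranks options fast;
# the human reviews the ranking and makes the final call.
# Every name and score here is a hypothetical placeholder.

def machine_rank(options, score):
    """AI half: sort candidate options by a computed score, best first."""
    return sorted(options, key=score, reverse=True)

def human_confirm(ranked, approve):
    """Human half: walk the machine's ranking, approving or vetoing each option."""
    for option in ranked:
        if approve(option):   # the human remains the final decision-maker
            return option
    return None               # the human may reject every machine suggestion

# Usage: the machine scores three hypothetical courses of action;
# the human vetoes the top-ranked one, so the runner-up is chosen.
options = [("A", 0.9), ("B", 0.7), ("C", 0.4)]
ranked = machine_rank(options, score=lambda o: o[1])
choice = human_confirm(ranked, approve=lambda o: o[0] != "A")
print(choice)  # ('B', 0.7)
```

The point of the sketch is the ordering of roles: the machine never acts on its own ranking, and the human can always return `None`, declining every option.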
Much of the growth in Chinese AI capabilities stems from a uniquely Chinese regulation. As Lorenz explained, “Each and every technological development coming out of the private sector [is] subject to the military sector as well.” As a result, the Chinese military can acquire new technology quickly and cheaply.
Finding The AI Holy Grail
Partly in response to these Russian and Chinese developments, the United States has already introduced several new technologies in the AI field. The 2014 Third Offset Strategy set out the Pentagon’s plan to bolster its eroding technological advantage by acquiring more advanced autonomous vehicles, incorporating algorithms in intelligence gathering and analysis, and developing centaurs, the military’s “high-tech holy grail.”
While the Pentagon simply delineates these broad goals, its component services have developed more concrete timelines for adopting AI technology. The 2017 Army Robotic and Autonomous Systems Strategy detailed the U.S. Army’s priorities regarding autonomous vehicles, which focus on improving support vehicles and data analytics before developing fully autonomous weapons. Building on the themes of the RAS Strategy, the 2018 Department of Defense AI strategy commits to improving decision-making and logistics systems through partnerships with the private sector.
Technologically, major defense company Lockheed Martin is currently working to develop new autonomous and semi-autonomous systems “because [they] recognize that the question isn’t just about who’s the best person for the job — it’s about what’s the best team for the mission,” a spokesperson for Lockheed Martin told the HPR. Its Autonomous Mobility Applique System helps armored vehicle drivers process intelligence, while the Squad Mission Support System vehicle aims to help soldiers with logistics, freeing up troops for other tasks. The leader-follower system, in which humans drive a front vehicle followed by several autonomous vehicles, works to do the same while increasing security for drivers. Lockheed has already tested these systems and plans to implement them in the next few years.
Another Lockheed innovation, the Long-Range Anti-Ship Missile, which has already entered service in the U.S. Navy, exemplifies the centaur concept. When humans select a target, the LRASM uses algorithms to calculate the most effective way to avoid the enemy’s defenses and sink its target. “Precision lethality against surface and land targets ensures the system will become an important addition to the U.S. warfighter’s arsenal,” the Lockheed spokesperson said. Lockheed’s competitor, Northrop Grumman, also incorporated the centaur concept in its counter-rocket, artillery, and mortar system: AI performs the “essential task” of targeting incoming enemy fire while humans act as both a “fail-safe” and a “moral agent,” two key human roles within the centaur system, as Paul Scharre noted in his seminal book on autonomous weapons, Army of None.
Drone swarms are also a key emerging technology in the military. According to Mark Peters, the research compliance officer at Oregon State University’s drone program, OSU is currently researching “how a swarm of drones may work like a swarm of starlings or a swarm of honeybees.” First, these drone swarms could assist in precision targeting. They “will be programmed to identify a tank, and when you get a critical mass of, say, five units that identify that tank, then the military could deploy a precision-guided weapon to those exact coordinates,” Peters explained to the HPR. Second, swarms will also be useful in search-and-rescue efforts. Rescuers, Peters said, will “be able to use multi-platform aerial, land, and water drones to provide a three-dimensional map to identify where they need to go.” Already, the Air Force has tested Perdix drone swarms, which use collective intelligence to perform reconnaissance while avoiding defensive systems. Meanwhile, the Navy has tested swarm intelligence with unmanned boats to protect harbors or escort friendly ships.
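The consensus-targeting idea Peters describes can be made concrete with a short sketch: a coordinate counts as confirmed only once a critical mass of distinct drones (five, in his example) has reported the same sighting. Everything here is an illustrative assumption, not any real system’s interface.

```python
# Minimal sketch of swarm consensus targeting: a location is only
# "confirmed" once a critical mass of distinct drones reports it.
# Threshold and data layout are illustrative assumptions.

from collections import defaultdict

CRITICAL_MASS = 5  # "a critical mass of, say, five units"

def confirmed_targets(sightings, threshold=CRITICAL_MASS):
    """sightings: iterable of (drone_id, coords) reports.
    Returns the coordinates confirmed by at least `threshold` distinct drones."""
    votes = defaultdict(set)
    for drone_id, coords in sightings:
        votes[coords].add(drone_id)  # a set, so each drone votes once per target
    return [c for c, ids in votes.items() if len(ids) >= threshold]

# Usage: six drones report; five agree on one grid square, one dissents.
reports = [(f"drone-{i}", (54.2, 37.6)) for i in range(5)] + [("drone-5", (54.3, 37.6))]
print(confirmed_targets(reports))  # [(54.2, 37.6)]
```

Requiring agreement from several independent sensors before acting is a classic way to reduce false positives, which is presumably why Peters frames precision targeting in terms of a critical mass rather than a single drone’s identification.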
The military has also increasingly adopted AI technology off the battlefield. The controversial Project Maven used AI to analyze satellite images for drone targets, and AI has also been used to mimic real-world adversaries in order to help train fighter pilots and more accurately simulate enemy movements in war games. Meanwhile, the Air Force and Army have started to integrate predictive maintenance algorithms in their vehicles to anticipate mechanical breakdowns and fix them more quickly, and the military has started planning to automate tasks like warehouse management and report analysis.
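The predictive-maintenance idea above reduces, in its simplest form, to trend extrapolation: estimate how fast a component is wearing and flag it for service before it crosses a failure threshold. The sketch below is a deliberately naive linear version; real military systems use far richer models, and every number and name here is an illustrative assumption.

```python
# Naive predictive-maintenance sketch: extrapolate a wear reading
# linearly and estimate time remaining before a failure threshold.
# All values are illustrative; real systems use far richer models.

def hours_until_threshold(readings, threshold):
    """readings: wear measurements taken once per flight hour, oldest first.
    Returns the estimated hours until the wear trend crosses `threshold`,
    or None if there is no increasing trend to extrapolate."""
    if len(readings) < 2:
        return None
    rate = (readings[-1] - readings[0]) / (len(readings) - 1)  # avg wear per hour
    if rate <= 0:
        return None  # wear is flat or improving; no failure predicted
    return (threshold - readings[-1]) / rate

# Usage: vibration wear grows ~0.5 per hour; the failure threshold is 10.0,
# so the part should be scheduled for service within about three hours.
print(hours_until_threshold([7.0, 7.5, 8.0, 8.5], threshold=10.0))  # 3.0
```

Even this crude rule captures the payoff the article describes: scheduling a repair before the breakdown, rather than after.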
In the cyber realm, AI-enabled machines have proven capable of exploiting vulnerabilities in computer networks. AI can also improve cyber defense by helping to probe for those vulnerabilities and monitor software for potential intrusions.
AI’s Political Future
At the moment, the military’s use of AI does not seem to be a pressing political issue. Neither major political party has a developed policy platform around autonomous weapons use. Likewise, according to a Brookings Institution online poll, one-third of Americans do not know whether they want AI developed for use in warfare, making it unclear which AI policies they would support.
However, with the Chinese, Russian, and American militaries all rapidly integrating AI into their defense systems, AI will likely become a more salient issue in American politics. Based on Republican and Democratic positions on drone strikes, Republicans seem more likely to promote AI weapons systems, while Democrats may want to establish more controls and regulations before sanctioning autonomous systems.
For now, the two parties have agreed to conduct more research on AI. In 2018, Congress passed a bipartisan measure to establish the National Security Commission on Artificial Intelligence, which will review current uses for military AI and avenues for future growth.
While the NSCAI goes a long way toward promoting further research, the United States must develop more institutions if it aims to preserve civilian control over military AI applications. Such institutions would provide checks and balances on military AI use while reassuring the American public that humans always remain in the military decision-making process.
Congress could expand the NSCAI’s mandate to develop adequate safety standards for AI and provide oversight over military AI acquisitions, while internal military institutions could be established to advocate for AI safety in bureaucratic politics. The United States could also help create international oversight organizations to develop general standards for military AI use, establish new frameworks for the interaction between AI and international humanitarian law, and ensure meaningful human certification of potential autonomous systems.
Preventing AI from usurping human control also requires the military to follow through on its promises in the RAS Strategy to keep humans responsible for making the ultimate decision on military actions. A civilian institution such as an expanded NSCAI could enforce these promises and exercise oversight if the military were to exceed its limits.
As the AI field sees vast developments in technical capacity, it seems that American politics will have to grapple with challenging questions around the appropriate response to such developments. Regardless of political parties’ differing views, it seems clear that to satisfy the American public, the United States must develop responsible institutions to manage its AI development, especially given the growing threats from Russia and China. In short, the institutional pen must remain mightier than the AI sword. If it does not, the world will face a future reality closer to today’s science fiction.
Image Credit: Unsplash/Gertrūda Valasevičiūtė // Wikimedia Commons/ZStoler