Blog: What if AI is just BS? – Washington Post
Daniel W. Drezner is a professor of international politics at the Fletcher School of Law and Diplomacy at Tufts University and a regular contributor to
May 1 at 7:00 AM
The hard-working staff here at Spoiler Alerts has been attending way too many conferences in the past half-year or so. Big conferences with thousands of political scientists. Small conferences with just a few political scientists. Posh conferences with lots of management-consultant types, where I am the academic brought in for the sake of intellectual diversity. There has been one constant running through all of them: people who want to sound savvy keep talking about artificial intelligence as the New New Thing.
This is what you read in the popular press as well. There are lots of ways that AI could affect the social fabric: there is the potential for lost jobs, or at least a radical reorientation of what jobs would look like. There are the unexpected effects of artificial intelligence, which I believe the sci-fi genre has tackled with a great deal of enthusiasm. And for my bailiwick of international relations, there is a lot of talk about an AI “arms race” that could alter the balance of power in the future.
Are these people correct? I am legitimately unsure, but I confess to wariness about claims of technological game-changers. All too often, I hear colleagues reference AI the way that they would reference “globalization” or “Big Data” — terms so amorphous that there is no consensus about the definition.
On that question and many others, I strongly recommend perusing Michael Horowitz’s essay in the Texas National Security Review, which makes some very useful distinctions. Horowitz points out that AI is more of a continuum than a precise technology. He also acknowledges that the future of AI is far from clear. He writes, “even experts disagree about whether artificial general intelligence of the type that could outpace human capabilities will emerge in the short to medium term or whether it is still hundreds of years away. AI experts also disagree about the overall trajectory of advances in AI.” For Horowitz, one of the key questions is whether the engine of innovation comes from the private sector (because of AI’s utility as a general purpose technology) or the defense sector (because of AI’s utility for the military).
I have some skin in this game, because I wrote something about technological change and international relations for the centennial anniversary of the journal International Relations that was just published. My interest is more in whether AI is a technology that gets standardized pretty quickly, or whether it requires massive fixed costs to develop properly. If it is the former, then industrial policies do not matter much, because the technology will diffuse rapidly. First-mover advantages might matter in terms of standard-setting but not much else. If it is the latter, however, then industrial policy could matter a great deal, with all sorts of unpleasant international implications.
Another problem that AI exemplifies is the way in which old metaphors are applied to new technologies — sometimes inaccurately. In the Bulletin of the Atomic Scientists, Heather Roff has an interesting essay on this very question, in which she notes, “It would help matters if artificial intelligence discussions were framed in an ‘AI +’ framework, because in many cases, AI is merely a tool included in a system involving other functions or capabilities. The news media should stop framing the global artificial intelligence competition as an ‘arms race.’ This misrepresents the competition going on among countries.”
I am old enough to remember all the times that a new technology was declared to be the New New Thing. In the late 1980s, it was high-definition television. In the 2000s, it was nanotechnology or biotechnology. AI is the big thing now, unless it is 5G.
None of this means that AI is not a significant technology. But it does mean that, very often, the people proclaiming it the New New Thing are selling you something.