The AI Arms Race Is Here. Except…Is It Really an Arms Race?

By Justin Sherman. December 1, 2018.

If you’ve been following developments in AI over the past decade, you’ve noticed that governments around the world have taken a growing interest not just in AI’s potential benefits for domestic and global economies, but also in its strategic importance.

“The debate over the effects of artificial intelligence has been dominated by two themes,” wrote Nicholas Wright in a July article for Foreign Affairs. “One is the fear of a singularity, an event in which an AI exceeds human intelligence and escapes human control, with possibly disastrous consequences. The other is the worry that a new industrial revolution will allow machines to disrupt and replace humans in every—or almost every—area of society, from transport to the military to healthcare.”

These are not the only two themes on which policymakers should focus, as Wright argues in his piece—but they are what tend to dominate the policy-level conversations. “Artificial intelligence is the future, not only for Russia, but for all humankind,” Vladimir Putin famously said in 2017. “Whoever becomes the leader in this sphere will become the ruler of the world.” World leaders are investing in artificial intelligence with the perception that a race to “better” AI will determine the future world order.

Certainly, there is some merit to these claims. Artificial intelligence is already used in assorted military applications and has benefited society in many non-military ways, such as improving cancer detection. The future benefits of such technologies are attractive and likely to accrue disproportionately to the countries that lead in developing them, as they automate supply chains, enhance kinetic warfare, and bolster foreign influence operations. But some experts have also argued that it’s not an arms race, or that framing it that way is problematic.

Elsa Kania, writing for Defense One, has said that “AI is not a weapon, nor is ‘artificial intelligence’ a single technology but rather a catch-all concept alluding to a range of techniques with varied applications in enabling new capabilities,” and that “the concept of an ‘arms race’ also doesn’t capture the multifaceted implications of the AI revolution.”

Yoshua Bengio, one of the champions of deep learning, recently told MIT Technology Review that he doesn’t like the framing of AI development as a race. “We could collectively participate in a race,” he said, “but as a scientist and somebody who wants to think about the common good, I think we’re better off thinking about how to both build smarter machines and make sure AI is used for the well-being of as many people as possible.”

Michael C. Horowitz has argued in Foreign Policy that even if you accept the policy-level emphasis on an AI “arms race,” that framing still has drawbacks. “Because AI is a general purpose technology—more like the combustion engine or electricity than a weapon—the competition to develop it will be broad, and the line between its civilian and military uses will be blurry,” he wrote. “There will not be one exclusively military AI arms race. There will instead be many AI arms races, as countries (and, sometimes, violent nonstate actors) develop new algorithms or apply private sector algorithms to help them accomplish particular tasks.”

So, is it really an AI arms race? To some extent, yes, because world leaders clearly see it that way; and many countries are, in a very real sense, “racing” to develop “better” artificial intelligence (whatever “better” means). But, as these and other analysts have shown, there are some notable problems with framing AI development as an arms race, or even as a single arms race.

I don’t have an answer for how best to reframe the conversation, but I do know that in executing that reframing, we should also look to other technologies whose development is often characterized in the same terms as AI’s, such as quantum computing and sophisticated biotechnology.

Just like the many technologies that compose our notion of artificial intelligence, these technologies have many properties—from the speed and scale of their development to the mechanisms of their diffusion—that make their strategic impact different from that of older technologies like tanks in WWI or even nuclear weapons during the Cold War. (Quantum computing, for instance, could break the public-key cryptography that secures most Internet traffic, which means the development of a sufficiently powerful quantum computer could have alarmingly quick effects.) But in focusing too much on the “arms race” dynamic, we can lose important nuance.

Quantum computing research is heavily dispersed across corporations and academic institutions in addition to government labs. Biotechnology promises the development of “life-saving or other advanced tools for warfighters,” Diane DiEuliis recently wrote for War on the Rocks—not just the creation of security risks. The list goes on, but I think the point is clear: AI development might be framed as an arms race, but we should look to other contemporary technologies to help reframe that dialogue. Since artificial intelligence will continue to radically change the world as we know it, getting that framing right is in all of our best interests.


Justin Sherman is a junior double-majoring in computer science and political science and the Co-Founder and President of the Duke Cyber Team. He is a Fellow at Interact; the Co-Founder and Vice President of Ethical Tech; and a Cyber Policy Researcher at the Laboratory for Analytic Sciences. He has written extensively on cyber policy and technology ethics, including for the Journal of Cyber Policy, Defense One, The Strategy Bridge, and the Council on Foreign Relations.