{"id":2423,"date":"2018-12-01T13:33:10","date_gmt":"2018-12-01T18:33:10","guid":{"rendered":"https:\/\/ags.duke.edu\/?p=2423"},"modified":"2018-12-01T13:33:10","modified_gmt":"2018-12-01T18:33:10","slug":"the-ai-arms-race-is-here-exceptis-it-really-an-arms-race","status":"publish","type":"post","link":"https:\/\/ags.duke.edu\/2018\/12\/01\/the-ai-arms-race-is-here-exceptis-it-really-an-arms-race\/","title":{"rendered":"The AI Arms Race is Here. Except\u2026is it Really an Arms Race?"},"content":{"rendered":"

\"\"<\/p>\n

By Justin Sherman. December 1, 2018.

If you’ve been following developments in AI over the past decade, you’ve noticed that governments around the world have taken a growing interest not just in AI’s potential benefits for domestic and global economies, but also in its strategic importance.

“The debate over the effects of artificial intelligence has been dominated by two themes,” wrote Nicholas Wright in a July article for Foreign Affairs. “One is the fear of a singularity, an event in which an AI exceeds human intelligence and escapes human control, with possibly disastrous consequences. The other is the worry that a new industrial revolution will allow machines to disrupt and replace humans in every—or almost every—area of society, from transport to the military to healthcare.”

These are not the only two themes on which policymakers should focus, as Wright argues in his piece—but they are the ones that tend to dominate policy-level conversations. “Artificial intelligence is the future, not only for Russia, but for all humankind,” Vladimir Putin famously said in 2017. “Whoever becomes the leader in this sphere will become the ruler of the world.” World leaders are investing in artificial intelligence with the perception that a race to “better” AI will determine the future world order.

Certainly, there is some merit to these claims. Artificial intelligence is already used in assorted military applications and has benefited society in many non-military ways, such as improving cancer detection. The future benefits of such technologies are attractive and likely to accrue unevenly, as countries automate supply chains, enhance kinetic warfare, and bolster foreign influence operations. But some experts have also argued that it’s not an arms race, or that framing it that way is problematic.

Elsa Kania, writing for Defense One, has said that “AI is not a weapon, nor is ‘artificial intelligence’ a single technology but rather a catch-all concept alluding to a range of techniques with varied applications in enabling new capabilities,” and that “the concept of an ‘arms race’ also doesn’t capture the multifaceted implications of the AI revolution.”

Yoshua Bengio, one of the champions of deep learning, recently told MIT Technology Review that he doesn’t like the framing of AI development as a race. “We could collectively participate in a race,” he said, “but as a scientist and somebody who wants to think about the common good, I think we’re better off thinking about how to both build smarter machines and make sure AI is used for the well-being of as many people as possible.”

Michael C. Horowitz has argued in Foreign Policy that even if you accept the policy-level emphasis on an AI “arms race,” that framing still has drawbacks. “Because AI is a general purpose technology—more like the combustion engine or electricity than a weapon—the competition to develop it will be broad, and the line between its civilian and military uses will be blurry,” he wrote. “There will not be one exclusively military AI arms race. There will instead be many AI arms races, as countries (and, sometimes, violent nonstate actors) develop new algorithms or apply private sector algorithms to help them accomplish particular tasks.”

So, is it really an AI arms race? To some extent, yes: world leaders clearly see it that way, and many countries are, in a very real sense, “racing” to develop “better” artificial intelligence (whatever “better” means). But, as these and other analysts have shown, there are notable problems with framing AI development as an arms race, or even as a single arms race.

I don’t have an answer to how best to reframe the conversation—but I do know that in executing that reframing, we should also look to other technologies whose development is framed in similar terms, such as quantum computing and sophisticated biotechnology.

Just like the many technologies that compose our notion of artificial intelligence, these technologies have many properties—from the speed and scale of their development to the mechanisms of their diffusion—that make their strategic impact different from that of older technologies like tanks in WWI or even nuclear weapons during the Cold War. (Quantum computing, for instance, might break much of the encryption that secures the Internet, which means the development of a powerful quantum computer could have alarmingly quick effects.) But in focusing too much on the “arms race” dynamic, we can lose important nuance.

Quantum computing research is heavily dispersed across corporations and academic institutions in addition to government labs. Biotechnology promises the development of “life-saving or other advanced tools for warfighters,” Diane DiEuliis just wrote for War on the Rocks—not just the creation of security risks. The list goes on, but I think the point is clear: AI development might be framed as an arms race, but we should look to other contemporary technologies to help reframe that dialogue. Since artificial intelligence will continue to radically change the world as we know it, getting this framing right is in all of our best interests.



Justin Sherman is a junior double-majoring in computer science and political science and the Co-Founder and President of the Duke Cyber Team. He is a Fellow at Interact; the Co-Founder and Vice President of Ethical Tech; and a Cyber Policy Researcher at the Laboratory for Analytic Sciences. He has written extensively on cyber policy and technology ethics, including for Journal of Cyber Policy, Defense One, The Strategy Bridge, and the Council on Foreign Relations.
