
2018 Summer Intern Fellow: Justin Sherman
New America’s Cybersecurity Initiative

End of Summer Reflection
August 2018

Summary: I spent the summer interning with the Cybersecurity Initiative at the think tank New America in Washington, D.C. It was an incredibly enjoyable and valuable experience, and I am grateful to the Duke Program in American Grand Strategy for helping me make it a reality.

Discussion: First and foremost, I learned about think tanks and how they function. Many people are unaware of exactly what think tanks do beyond a vaguely formed notion of “policy” (and, perhaps, getting paid to just…think). I, however, had the opportunity to see their effectiveness firsthand: think tanks fill a void between government, academia, and industry. Especially when it comes to digital technology (a unique focus of New America’s), there is a desperate need to bring together experts from a variety of backgrounds, whether software development or journalism, economics or statistics, to deliver policy recommendations on issues that federal policymakers, such as Congressional representatives, are all too often unequipped to handle on their own. As a college student at a time when hybrid work environments are increasingly common, and when tenures at a single organization are often less than a decade, I believe think tanks and the recommendations they deliver are a critical asset to the effective functioning of the United States’ enormous policy apparatus. This is particularly true for cybersecurity and cyber policy more broadly.

At New America, I had the fortune of working with a tremendous team that recognized my prior experience (research, publishing, etc.) and allowed me to work on several exciting projects!

I worked with Ian Wallace, director of the Cybersecurity Initiative, on research into so-called next-generation cyber strategies: studying how liberal-democratic nation-states are modifying their Internet protocols and policies to balance network openness with network security and a responsibility for citizen “harm reduction.” This research is still in progress and may be turned into a forthcoming paper.

I worked with Robert Morgus, Senior Policy Analyst, to co-create the first robust framework for helping liberal-democratic policymakers understand the global Internet. The paper, “The Idealized Internet vs. Internet Realities (Version 1.0),” introduces our novel framework and includes substantive discussion of its relevance to the liberal-democratic world. For this work, I compiled hundreds of pages of research on cyber strategies, a term I use broadly to encompass everything from cybersecurity protocols to censorship laws to digital development goals, covering the United States, United Kingdom, Canada, Australia, Spain, Italy, Germany, France, Greece, Israel, China, Russia, and Iran, as well as organizations such as the IEEE, ICANN, and the ITU. Our paper was featured in POLITICO and Morning Consult and tweeted by numerous national security and foreign policy experts. Alongside this paper, Robert and I wrote for New America Weekly and Pacific Standard about the tensions in liberal-democratic Internet strategies.

I worked with Laura Bate, Policy Analyst, and Elizabeth Weingarten, Editor of Humans of Cybersecurity, in support of the Humans of Cybersecurity project, a platform that works to bolster the diversity of the cybersecurity workforce. Under this umbrella, I conducted several written Q&As with diverse voices in cyber: I interviewed Dr. Lydia Kostopoulos, future-of-war strategist, on her AI artwork and the topic of technological convergence in warfare; I interviewed Jane Frankland, security executive and author of “IN Security: Why a Failure to Attract and Retain Women in Cybersecurity is Making Us All Less Safe,” about the growth of the cybersecurity industry and her work to empower diversity in the field; and I interviewed Megan Stifel, President Obama’s NSC Director for International Cyber Policy, on lessons of “sustainable cybersecurity” and her recent paper for the consumer advocacy group Public Knowledge. For the Humans project, I also wrote an op-ed about the need for a general “Tech 101” college class, for which I interviewed faculty at the University of Oxford and Florida International University, and I compiled a list of resources for “finding” women in cyber, for which I interviewed numerous experts in government, academia, and industry. The latter piece was republished in Homeland Security Today and heavily shared by the cyber and national security communities.

Finally, Robert Morgus and I wrote a policy primer on quantum computing for the Council on Foreign Relations, and I had the chance to support some of New America’s cutting-edge work on cybersecurity apprenticeships. It was truly an amazing and educational summer with the think tank.

 

Why Digital “Innovation” Doesn’t Automatically Mean “Good”
7/6/2018

In philosophy and psychology, the naturalistic fallacy refers to an often incorrect inference: that because something is natural, it is therefore good or morally acceptable. Marketers use this to their advantage (think organic food products), and it also influences our personal lives. A similar phenomenon occurs with digital technology. Just as we tend to romanticize Silicon Valley as a progressive utopia focused on social good, we also tend to classify any and all digital innovation (and “innovation”) as inherently positive.

When the Internet went global, it was perceived by many Western democracies as an inherently democratic system that promoted free, open discourse regardless of end user. When smartphones went mainstream, they were heralded as tools that would uplift people across societies, from the rural farmer in a developing country to the businessman in an urban metropolis. Even modern advances in machine learning, the blockchain, and quantum computing are often portrayed as purely positive game-changers.

As with any element of society, the best policymakers are, and will be, the ones who realize that not all cyber innovation is inherently good, and that not all digital technology is going to revolutionize humanity for the better. The best approach is practical, not idealistic.

The Internet has enabled the spread of hate speech, malware, disinformation, and child pornography alongside free press. Smartphones have disrupted sleep patterns and possibly fostered addiction among teenagers just as much as they have enhanced global communication. Machine learning carries enormous bias that can, for instance, disproportionately sentence black and Hispanic men to longer stays in prison; blockchain systems have, in some cases, had extremely adverse impacts on the global climate; and quantum computing threatens to break the public-key encryption that holds the Internet together. And this barely scratches the surface of the ethical issues that come with tech innovation and the policies around it.

We should not be technophobic, not by any means, but neither do we need policymakers shocked that Facebook disrespected user privacy. Thus, we can no longer afford to teach the leaders of tomorrow, in elementary school, middle school, high school, college, and beyond, only about the purely beneficial sides of technology. We must teach security and ethics; we must incorporate discussions of mental and bodily health; we must evaluate digital innovation’s impacts on climate change, political stability, and social justice. To prepare tomorrow’s leaders for the cyber challenges we face, education must accept and address that not all cyber “innovation” is inherently good.

We need pragmatic policies towards innovation.

 

Cyber Inertia: Destruction by Stagnant Thought
6/20/2018

For years, the cybersecurity industry (and, more broadly, the field of cyber strategy) has suffered from a serious bout of inertia. That is, while many great thinkers have done much to advance the field, many more remain firmly planted, holding the same positions and ways of thinking they have held for decades.

This inertia is highlighted by many thoughtful articles; it is highlighted in the New York Cyber Task Force’s report on leverage, which found that organizations are developing innovative technologies yet failing to change the fundamental, asymmetric advantage held by attackers; and it is highlighted in my forthcoming conversations with cybersecurity executives and senior cyber strategists, who say the same. Inertia of thought is further evidenced by how “cyber” itself is treated: as its own discipline, often locked away within the computer or information sciences, never to make contact with academic coursework in ethics, business, or healthcare. Meanwhile, private-sector organizations are only now waking up to the notion of human-centered design, despite its long history in the startup world.

Rather than complain about this issue, we as a society — meaning state and federal governments, schools and universities, and private-sector corporations — need to fight this inertia by empowering and encouraging diverse thinking.

First, the government must stop treating cybersecurity as the purview of just “cyber people,” a point that future-of-war strategist Lydia Kostopoulos highlighted in our recent interview. While the U.S. military’s view of cyber as a domain is perhaps an easy “out,” it seriously hampers the ways in which strategists and key decision-makers discuss cyberspace itself. There are challenging jurisdictional questions that must be answered, yes, such as the division of authority between NSA and CYBERCOM, or whether DHS or DOE has authority over protecting critical infrastructure, but that doesn’t excuse the segmentation and isolation of cyber discussions. This is especially an issue at the state and local levels of government.

Second, educational institutions must dedicate resources to teaching cyber, and not just through the lens of computer and information science. As I recently argued, all students — from business to policy to healthcare to media — need a “Tech 101” education that prepares tomorrow’s leaders to face the challenges of digitization. Looking to cybersecurity in particular, we not only need awareness beyond the circle of developers and hackers that maintain security in code; we also need diverse individuals to enter the field in the first place. This simply cannot happen without appropriate coursework in elementary schools, middle schools, high schools, and colleges, or without certificate programs that provide alternative forms of learning. As New America’s Laura Bate has written, “for scalable solutions to the cybersecurity workforce shortage, the U.S. government will need to look beyond just higher education.” Diverse teaching will empower diverse thinking — fighting this cyber inertia.

Third, organizations must work harder to hire more diverse people. The field remains extremely homogeneous, as anyone who has ever set foot in a cybersecurity conference or workplace can tell you, and there is clear data that this lack of diversity is making us less safe. Different people handle risk in different ways, which means they think about cyber differently, again pushing against the inertia that keeps cybersecurity conversations so stagnant. Organizations must therefore take clear steps to hire diverse individuals, looking to such groups as “Women in Homeland Security” and “Help a Sister Up” or such events as Europe’s first all-female cybersecurity conference. If we want better strategies and policies around cyberspace, hiring different types of people (really, anyone outside the current frame of thinking) is a necessary step forward.

We will never attain total security in cyberspace, as such a state doesn’t exist. However, we can fight the inertia of thought we currently face — and it starts with bringing in new thinkers who will challenge existing assumptions.

 

It’s All About the (Cyber) Semantics
6/6/2018

Cyber, herein referring broadly to the digital and online space, does not operate in isolation from “conventional” elements that affect foreign policy. Geopolitical economy has a direct role in shaping the physical infrastructure behind the Internet, which in turn impacts everything from browsing speeds to content censorship. Philosophical works on deterrence and honor still hold enormous value in the digital era. And as the last two weeks have already shown me, the same goes for semantic understanding.

To use an illustrative anecdote: I’m reminded of Duke University’s 2018 Winter Forum, “Crisis Near Fiery Cross Reef,” during which Georgetown’s Dr. Oriana Mastro gave a fascinating talk on the thought process (and actual logistics) behind Chinese military decision-making. Chinese military leadership, Dr. Mastro explained, sees deterrence quite differently than its American counterparts, which, perhaps obviously, leads to some tangible misunderstandings in the international arena. These misunderstandings of course have their enormous complexities (which I am not qualified to fully understand myself), but they are in part caused by semantics: two nation-states using the same terminology but meaning fundamentally different things.

Cyber is not exempt from this reality. Much of the West thinks of “information security” as the ability to ensure the confidentiality, integrity, and availability (CIA) of information; the tech community even uses the abbreviation “InfoSec” in this regard. Encryption, hashing, and data segmentation are just some of the techniques that fall under this “information security” umbrella, as are standards compliance, breach reporting, and crisis management.
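To make that technical-operational framing concrete, here is a minimal sketch of the confidentiality and integrity legs of the CIA triad. It is my own illustration rather than anything drawn from a particular standard, and it assumes Python with the third-party cryptography package (hashlib ships with the standard library).

```python
# A minimal sketch of the Western, technical-operational view of
# "information security": protecting data's confidentiality and integrity.
# Assumes the third-party `cryptography` package (pip install cryptography).
import hashlib
from cryptography.fernet import Fernet

message = b"draft incident-response plan"  # hypothetical data to protect

# Confidentiality: encrypt so that only key-holders can read the message.
key = Fernet.generate_key()
ciphertext = Fernet(key).encrypt(message)

# Integrity: a cryptographic hash lets a recipient detect tampering.
digest = hashlib.sha256(message).hexdigest()

# On receipt: decrypt, re-hash, and compare against the expected digest.
received = Fernet(key).decrypt(ciphertext)
assert hashlib.sha256(received).hexdigest() == digest

# Availability, the third leg of the triad, is an operational property
# (redundancy, backups, DDoS resilience) rather than a few lines of code.
```

The point of the sketch is that, in this framing, “information security” is a property of data and systems. Russia’s usage, discussed next, is instead about the state’s relationship to information itself.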

Despite our enormous reliance on what many assume to be a technically objective definition, other nation-states do not hold the same understanding. Perhaps most notably, Russia sees “information security” in a different light: as the government’s ability to control the flow of information (as it does with television, for example) in order to maintain national sovereignty and political order. While data security is encapsulated in this idea, it arguably refers more to censorship, surveillance, and control of the Internet than anything else; its meaning isn’t just technical and operational, but philosophical and deeply political as well. What many think of as a clear term, it turns out, is quite semantically ambiguous.

The same semantic issues occur with other powerful nation-states like China, whose ideas of “cultural security” and “innovation security” might not resonate with the West as-is, let alone when taken in a cyber context. These challenges arise when trying to translate English cyber terminology into other languages; they even occur within our own country, where debates over the difference between cybersecurity, cyber-security, and cyber security are quite contentious.

Cyberspace is not immune from semantics. Just as two physicians should be on the same semantic page when discussing a patient, cyber strategists need to think more carefully about the words they use and work toward consensus definitions. As nation-states develop their international cyber strategies and domestic cyber laws, as with Russia and China’s 2015 International Code of Conduct for Information Security, we will need to speak in cyber terms without losing meaning along the way. This is just another reason for collaboration and consensus-building in the digital era.

(We also need to teach students more about this: hence my first article for New America’s Cybersecurity Initiative, entitled “Colleges, It’s Time for a General Technology Class.”)