By Justin Sherman. October 18, 2018.
Today, the Duke Center on Law, Ethics, and National Security (LENS) hosted a discussion with Monika Bickert, Head of Global Policy Management and Counterterrorism at Facebook, and Tom Bossert, President Trump’s former Homeland Security Advisor. They discussed how governments and social media platforms can and do fight digital threats.
Discussions of a “threat” are plagued by rhetorical problems that in turn shape policy decisions. Former Director of National Intelligence (DNI) James Clapper discussed this last year in a talk for the American Grand Strategy Program, where he explained why he dislikes the question “what is the #1 threat we face?” Even when one threat really is worse than all the others (and that isn’t always the case), he said, the severity of threat #1 in no way means you can ignore threat #2, threat #3, and so on; yet ranking threats on a single priority scale can make it feel that way. Tom Bossert opened by addressing a similar challenge in discussing digital threats.
“When people talk about threats, they tend to talk about consequences and that gets into vulnerabilities and risk,” he said. “So are there people out there that want to cause us harm? Yes. But in terms of turning the lights out, you have to be pretty sophisticated, and if you’re an actor like Russia, there are a lot of other factors that inform your decision.” In other words, the mere presence of a threat (e.g., a hack of critical infrastructure) says virtually nothing about its severity or its likelihood of occurring; both depend on everything from an adversary’s technical capabilities to the bureaucratic, strategic, and cognitive processes that shape that actor’s decision calculus.
This laid an important foundation for the moderated discussion, which quickly turned to the regulation of fake news on social media platforms. “It’s not the job of a company to decide what is true and what is false,” Monika Bickert said. From legal questions of censorship to technical questions of how a machine learning model comes to understand truth (e.g., what gets labeled as ground truth?), social media companies are having to address questions that even the world’s foremost technology scholars, policymakers, and innovators are still struggling with.
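To make the ground-truth question concrete, here is a minimal sketch in Python (entirely hypothetical, and in no way Facebook’s actual system): a toy classifier “learns” what is true only from whatever labels human reviewers attach to its training examples, so the choice of ground truth is itself a judgment call. The posts and labels below are invented for illustration.

```python
# Toy illustration of the "ground truth" problem: a text classifier only
# learns whatever labels its trainers supply. All examples are invented.
from collections import Counter

# Hypothetical training set: (post text, label assigned by a human reviewer).
# Whoever assigns these labels effectively defines "truth" for the model.
LABELED_POSTS = [
    ("miracle cure doctors hate this trick", "false"),
    ("senate passes budget bill after long debate", "credible"),
    ("aliens built the pyramids scientists confirm", "false"),
    ("local council approves new bike lanes", "credible"),
]

def train_word_counts(labeled_posts):
    """Count how often each word appears under each label."""
    counts = {"false": Counter(), "credible": Counter()}
    for text, label in labeled_posts:
        counts[label].update(text.split())
    return counts

def classify(text, counts):
    """Score a new post by which label's vocabulary it overlaps with more."""
    scores = {
        label: sum(counter[word] for word in text.split())
        for label, counter in counts.items()
    }
    return max(scores, key=scores.get)

counts = train_word_counts(LABELED_POSTS)
print(classify("scientists confirm miracle cure", counts))  # -> "false"
```

The model here never evaluates truth at all; it only echoes the labeling decisions baked into its training data, which is precisely why “what is labeled as ground truth?” is a policy question as much as a technical one.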
Even the notion of a fact itself comes into question, as Peter W. Singer and Emerson T. Brooking argue in their new book LikeWar: The Weaponization of Social Media, when there is no visible consensus on what is accepted as true. (And I say “visible” because bots, trolls, and the like can artificially amplify certain questions and narratives online in ways that do not represent actual humans’ beliefs, yet can also begin, dangerously, to shape how people do think.)
“It’s a cops-and-robbers game,” Bickert said, “because if there’s recurring behavior, we know how to stop the threat—but that means we constantly have to stay ahead.” Fake news, in this way, resembles network and system security threats: both run up against automated mechanisms built to identify and filter out the “bad,” which means those systems depend to some degree on what has already happened. (That is, at least until predictive AI becomes more robust.) Nonetheless, Ms. Bickert said, she and her team work constantly to anticipate future threats.
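Bickert’s cops-and-robbers framing maps onto how signature-style filtering works in security more broadly. As a rough, hypothetical sketch (the pattern names below are invented, not anything any platform has disclosed), such a filter can only flag activity that resembles what it has already seen:

```python
# Minimal sketch of why automated filters depend on past incidents:
# they match new activity against patterns learned from earlier campaigns.
KNOWN_BAD_PATTERNS = {
    "known-troll-domain.example",  # invented example of a previously seen source
    "coordinated-hashtag-xyz",     # invented example of a previously seen campaign
}

def is_flagged(post_features):
    """Flag a post only if it matches a pattern seen in past incidents."""
    return bool(set(post_features) & KNOWN_BAD_PATTERNS)

# A rerun of yesterday's campaign is caught...
print(is_flagged({"coordinated-hashtag-xyz", "some-new-account"}))          # True
# ...but a genuinely novel campaign sails through until analysts add its
# signature to the pattern set -- the cops-and-robbers dynamic.
print(is_flagged({"brand-new-hashtag", "brand-new-domain.example"}))        # False
```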
“When it comes to stopping misinformation, what we’re doing right now—and this is by no means where we will settle—is we’re trying to provide people with accurate information about who is speaking, so they’re not thinking they’re hearing from an American veterans group when they’re actually hearing from the Russians,” she said.
Of course, one item that went unmentioned during this discussion of fake news was the ways in which Facebook already manipulates consumer behavior from an economic perspective—which is how the company makes its money. It’s worth asking: While there are evidently ethical qualms on the organization’s part about the manipulation of political behavior and voting outcomes, why is there no parallel discussion about the ways in which economic behavior is manipulated? As I wrote several weeks ago, “Social media companies like Facebook make their money through technology that can predict and manipulate human behavior, which means that weakening this technology runs in tension, in many ways, with their own incentive structures—diminishing this capability for third-party, malicious actors while trying to still enhance that capability for the company.”
Many other topics came up, including privacy, censorship, Mr. Bossert’s role in the Trump administration, and how Ms. Bickert went from prosecuting federal crimes to fighting terrorism and fake news at one of the world’s largest corporations. I’ll conclude with an interesting question the two guests raised: If large news organizations (e.g., the New York Times) are able to earn credibility (in code, from an algorithm’s perspective) because of their long history of credible reporting, how do smaller media outlets gain trust online, especially amid the death of local journalism? For platforms that are (at least in theory) meant to enable free discourse and elevate otherwise silenced voices (again, it is debatable how much this is actually achieved), is the opposite not occurring through efforts to fight fake news?
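To see why that question bites, consider a deliberately crude, hypothetical scoring function (nothing here reflects how any platform actually ranks sources): if credibility is weighted by track record, two outlets with identical accuracy end up with very different scores purely because of how long they have been publishing.

```python
# Crude, invented illustration of history-weighted credibility.
# All numbers are made up for illustration only.
def credibility_score(years_publishing, accuracy_rate):
    """Weight an outlet's accuracy by a saturating function of its history."""
    history_weight = years_publishing / (years_publishing + 10.0)
    return accuracy_rate * history_weight

print(credibility_score(years_publishing=150, accuracy_rate=0.95))  # ~0.89
print(credibility_score(years_publishing=2, accuracy_rate=0.95))    # ~0.16
```

Under any scheme like this, a new local outlet starts at a structural disadvantage no matter how accurate its reporting is, which is exactly the tension the panel left unresolved.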
Justin Sherman is a junior double-majoring in computer science and political science and the Co-Founder and President of the Duke Cyber Team. He is a Fellow at Interact; the Co-Founder and Vice President of Ethical Tech; and a Cyber Policy Researcher at the Laboratory for Analytic Sciences. He has written extensively on cyber policy and technology ethics, including for Journal of Cyber Policy, Defense One, The Strategy Bridge, and the Council on Foreign Relations.