Kashmir Hill | September 3, 2020
Professor David Hoffman joined NYT’s tech reporter Kashmir Hill to discuss the pros and cons of face recognition technology.
New York Times technology reporter Kashmir Hill spoke with Sanford Professor David Hoffman about facial recognition technology, algorithmic bias, and privacy protections on Thursday, September 3, 2020. Hill started the conversation by detailing her career path. After receiving her undergraduate degree from Duke, she took a job with a law firm while considering law school. There, she began writing for the legal blog Above The Law, which ignited her passion for journalism. Following graduate work at NYU Journalism, she worked the technology beat at Forbes, Fusion, and Gizmodo before settling at the Times last summer. As someone who did not dream of working in journalism until after college, Hill stressed to students the importance of noticing when one's passions are engaged and of not being afraid to switch career paths.
The conversation then moved to the growing tension between traditional privacy protections and the expansion of Big Data and artificial intelligence. One big shift in recent years, Hill noted, is that public awareness of the issue has increased dramatically. Early in her career, her reporting focused more on the theoretical harms posed by Big Tech, whereas she has since reported on several instances of actual harm to consumers. One of her biggest scoops concerned Clearview AI, a company that scraped billions of images from the Internet to build a facial recognition system secretly used by hundreds of law enforcement agencies. Hill also discussed a story she wrote about a Michigan man who was wrongfully arrested for shoplifting on the basis of a faulty facial recognition match.
Hill used these examples as evidence for why guidance and legislation are needed at the federal level to regulate appropriate uses of this technology before it becomes ubiquitous. Right now, she said, legislation exists only at the state and local level, which leads to uneven application across jurisdictions. Further reporting is also needed on uneven performance of the algorithms themselves, Hill explained. A key reason many cities and states have banned facial recognition is that the algorithms perform markedly worse on people of diverse backgrounds, who are often underrepresented in the databases the AI models are trained on. Overall, the main theme was that privacy protections need to be updated across the board to respond to changing technologies.