Our smart devices are listening, our faces and bodies routinely scanned and our preferences tracked. But four alumni are leading efforts to make sure new technologies don’t infringe on our civil and constitutional rights.
By Liz Leyden; Illustrations by Harry Campbell; Photography by Brad Paris
It feels like a harmless trade-off: give up a little privacy, gain a little happiness—or at least a little efficiency. We connect with far-flung friends on Facebook and have instant access to the latest news and ideas on Twitter. We shop quickly and conveniently 24/7 and swap paper maps for digital devices that plot our routes in real time to the doctor, to Disney World, to protests on the National Mall.
Yet our footprints are forever left online, to be collected, analyzed and optimized. How that information is gathered and used, with or without our consent, presents challenges unforeseen by the founding fathers. What happens to public discourse, for example, when Facebook algorithms influence what we read in our feeds? Or when a law enforcement agency builds a case against a person based on their social media “likes” and “shares”? Or when government leaders at all levels, including the president of the United States, block anyone they disagree with from their Twitter feeds?
Four Williams alumni are wrestling with these kinds of questions, raising awareness and holding public officials and purveyors of big data accountable. Jameel Jaffer ’94, executive director of the Knight First Amendment Institute at Columbia University, focuses on freedom of speech and of the press in the digital age. Rachel Levinson-Waldman ’95, senior counsel at the Brennan Center for Justice, studies issues related to government and law enforcement’s use of surveillance. Andrew Guthrie Ferguson ’94, a law professor at the University of the District of Columbia, researches predictive policing and whether Fourth Amendment protections include the data on our devices. And Jay Stanley ’89, a senior policy analyst at the American Civil Liberties Union (ACLU), works to uncover emerging technologies that have the potential to prey on personal privacy.
“We all have a thousand streams of information about a thousand of the world’s problems coming into our ears every day,” Stanley says. “Privacy is an issue most people care about, but often they don’t understand the ways in which their privacy is being invaded, and they also often feel resignation about it. You know, ‘What can you do?’” Plenty, these alumni believe.
Protecting Public Discourse
When President Donald J. Trump began blocking his critics from his Twitter feed, those critics turned to the Knight First Amendment Institute for help.
The institute sent a letter to the White House asking that users be unblocked, arguing that the president’s account operates as a public forum in which free speech is protected by the Constitution. The White House ignored the letter, so the institute sued—and won.
“The First Amendment applies to these accounts in the same way it applies to offline forums like town halls,” says Jameel Jaffer ’94, the institute’s executive director. “It seems clearer and clearer every day that political discourse that used to take place in these physical world, real world, analog forums now takes place online.”
Jaffer is no stranger to high-profile litigation. He spent 15 years at the ACLU, where his work on issues like national security and freedom of speech helped rein in widespread surveillance by the National Security Agency and prompted the publication of the Bush administration “torture memos” and the Obama administration “drone memos.”
“I got to work on the stuff that I really wanted to work on, the stuff you opened up the paper in the morning and you’d think, ‘Why isn’t somebody doing something about this?’” Jaffer says. “And I could go into work and do something about it, or at least try to. It was an incredible privilege.”
So, in 2016, when Columbia University and the John S. and James L. Knight Foundation created the Knight Institute, Jaffer was the natural choice to lead it.
He says a big reason behind the organization’s founding “was the thinking that we needed an institute that can both engage in serious scholarly research and develop a vision for what the First Amendment ought to look like in this new era.” He adds: “And also fight for that vision in court.”
In addition to the Twitter case against Trump, the institute sued the current administration for records detailing how the government scrutinized the social media accounts of potential immigrants and naturalized citizens. It cautioned that the indictment of WikiLeaks founder Julian Assange treats “everyday journalistic practices as part of a criminal conspiracy.” And it joined in lawsuits seeking visitor logs at the White House and Mar-a-Lago, Trump’s private club in Palm Beach, Fla., and challenging the government’s authority to review the writing of former intelligence and military personnel.
Another priority, Jaffer says, is learning more about which voices are amplified and which are suppressed on social media. To that end, the institute is negotiating with Facebook to allow journalists and scholars to create temporary research accounts and use automated tools to mine publicly available data about users in an effort to study how information moves on the site and how algorithms respond to different profiles. Both practices are currently against Facebook’s terms of service, but Jaffer says understanding how the platform works is important to the public interest.
“The most important journalism … and research right now is focused on the way social media platforms shape and distort public discourse,” he says.
But it’s the Trump Twitter case that has received the most attention. In March, Jaffer returned to court to argue the appeal of Knight v. Trump. By then, the impact of the original ruling had already rippled across the country, with reports of people citing the case as they successfully challenged public officials—Republican and Democrat—on the local level who similarly blocked users.
While Jaffer calls the initial success “gratifying,” he adds, “We need to come up with a way of ensuring that the rules that apply in this new digital environment are ones that will preserve rather than compromise the democratic principles that all of us want.”
Keeping Tabs on Surveillance
With police departments across the country embracing technology, especially social media, Rachel Levinson-Waldman ’95 is paying attention.
As senior counsel for the Brennan Center for Justice at New York University, she has testified before the New York City Council about the need to disclose the source code used in policing algorithms. In Maryland, she urged lawmakers to clarify rules police follow in order to use fake cellphone towers, known as Stingrays, to track suspects. And, in Memphis, a federal judge appointed her to a monitoring team to help create guidelines governing digital surveillance following revelations that police had spied on members of the Black Lives Matter movement using a fake social media account.
Levinson-Waldman also works to educate the public about the impact of surveillance on privacy rights and to engage their help in pressing law enforcement agencies and politicians to distinguish between what is possible versus what is ethical. “Are governments themselves going to put out policies that restrict what they can do?” she asks. “The history is often that governments do not come to that conclusion on their own.”
“There are so many reasons one would write something on social media. … And the fact that it was taken as a signal of gang affiliation is incredibly worrisome.” —Rachel Levinson-Waldman ’95
Levinson-Waldman’s path to a career defending constitutional freedoms began in Austin, Texas. Her father was a professor of constitutional law at the University of Texas, and her mother was a writer. The family’s home buzzed with conversations about liberty, law and justice.
“I’ve always felt very strongly about serving the public interest in some way, at some basic level feeling like I’m making a contribution,” Levinson-Waldman says.
At Williams, she volunteered with the rape crisis hotline. After law school at the University of Chicago, she worked with victims of domestic violence in Seattle, was a litigator in the Civil Rights Division of the Department of Justice and later safeguarded academic freedoms at the American Association of University Professors.
Today, Levinson-Waldman’s work at the Brennan Center focuses on ways police use social media to monitor people’s behavior. That might mean using a hashtag search to identify people attending a rally, or analyzing the social networks and activities of gangs to predict future crimes. Both have implications for political speech and privacy protections.
Among the cases that have drawn the Brennan Center’s scrutiny is the 2012 arrest of a young New Yorker named Jelani Henry on attempted murder charges. Police had conflicting eyewitness testimony but linked Henry to a local gang based on his social media “likes.” He spent 19 months on Rikers Island before his case was dismissed. For his part, Henry testified that he “liked” certain posts to avoid being called out by his peers.
“The fact that he was seen as a member of a gang, really because of his social media activity, was basically what landed him in Rikers,” Levinson-Waldman says. “There are so many reasons one would write something on social media, look at something, ‘like’ something, retweet it. And the fact that it was taken as a signal of gang affiliation is incredibly worrisome.”
She recently helped write a bill limiting how law enforcement can use social media, tentatively dubbed the HASHTAGS Act—short for Halt Authoritarian Spying by Harnessing Transparency Against Government Surveillance. It would require police departments to have data policies, prohibit intelligence gathering around First Amendment activity and set limits on how long data can be retained. The act was expected to be introduced in the U.S. House of Representatives in late spring.
The Brennan Center also studied 157 law enforcement jurisdictions across the U.S. that spent at least $10,000 on social media monitoring software, finding that only 18 had publicly available policies in place about how they would use and store the data gathered in investigations.
The lack of well-defined and transparent guidelines, Levinson-Waldman says, spills into all areas of digital surveillance. As technologies evolve, so does the potential for misuse. But on the local level, she is starting to see progress. Among other things, city councils across the country are beginning to pass surveillance transparency ordinances requiring disclosures from police. “This is why the push for transparency is really important,” she says. “People don’t even know how to take action or what to take action on if they don’t know what’s out there.”
Questioning Crime Data
For Andrew Guthrie Ferguson ’94, the world of high-tech policing began in a decidedly low-tech place: the Superior Court of the District of Columbia. It was 2006, on the cusp of an era in which local policing would soon take a sharp digital turn. Ferguson, then a public defender, noticed that the officers he was cross-examining were testifying, again and again, that his clients had been arrested in “high-crime areas.” Data from the Metropolitan Police Department’s growing team of crime analysts was being used to determine where these areas were located. So, Ferguson asked to see it.
“I started litigating it,” he says. “If you’re going to have crime data, you have to bring evidence.”
Thus began Ferguson’s work to shed light on the ever-expanding intersection of data collection and law enforcement. His 2017 book, The Rise of Big Data Policing, sounded the alarm on the use of new technologies, from automated license plate readers to networks of video surveillance cameras. While proponents consider these technologies to be objective tools that can help police locate suspects and solve crimes, critics say they gather data without consent and reproduce racial biases in policing practices.
“I realized there was a story to be told about how these changes in technology were distorting what police did, who they targeted, where they patrolled and how they investigated,” Ferguson says. “And, in many ways, how it was changing the power relationship between citizens and police.”
Now a law professor at the University of the District of Columbia, Ferguson is an expert on juries, the Fourth Amendment in the digital age and predictive policing—the effort to use data analysis to anticipate where crimes will happen. In his book, he refers to the data behind predictive policing as “black data,” because it is both “largely hidden within complex algorithms” and often embedded with racial bias.
“If you’re going to use this technology, you have to ask hard questions ahead of time,” he says. “Is it going to reinforce bias in communities? Is it going to empower police at the expense of citizens in ways we don’t want? Is it actually going to change the Fourth Amendment?”
Any time Ferguson gives a talk, he asks the audience two questions: How many people know what surveillance technologies are currently in use in your city? And where can you go to find out? For the first question, a few hands go up. But for the second: “No one raises their hand, even city officials,” he says. “Big data policing is a democracy problem. It’s about having citizens be aware of the technologies applying to them and what to do about it.”
Ferguson is also a technology fellow at New York University’s Policing Project, whose goal is to help communities address potentially divisive policing issues before problems arise. He’s part of a team writing digital surveillance guidelines for police that will help define what technologies should be allowed, how they’ll be used and what policies and practices should be implemented before they’re deployed.
“That’s a piece of trying to educate and give the tools necessary to get the ultimate decision into the hands of our democratic leaders, to get communities interested in how they’re being policed and pushing back on unthinking surveillance,” he says.
Civic-mindedness has been a constant throughout Ferguson’s career. His first book, Why Jury Duty Matters: A Citizen’s Guide to Constitutional Action (NYU Press, 2012), led to his starring role in the official court video that’s shown to 30,000 jurors in D.C. each year.
“It’s like being the star of a low-rated, reality T.V. show,” he says with a laugh. But the video and book present an important case: that jury duty makes democracy work. So does the Constitution. Citizens need to own their rights and ask questions, Ferguson says, something he first learned in college during his political theory seminars.
“One of the lessons I got from Williams was that inquiry mattered, questioning mattered, and it was our obligation as graduates of Williams to keep inquiring, keep challenging and keep asking those questions,” he says. “And I see that thread in my work today.”
Focusing on the Future
When news broke last year that security personnel at a Taylor Swift concert in California secretly used facial recognition technology to scan the audience for stalkers, Jay Stanley ’89 weighed in.
Though he expressed sympathy for Swift’s security concerns, Stanley, a senior policy analyst who has spent nearly two decades at the ACLU, told NBC: “People should know about this, preferably before they buy their ticket.”
Once relatively obscure, facial recognition technology is now used by iPhone owners to unlock their devices, by casinos looking for high rollers and cheaters, and by Facebook and other apps to identify individuals, and even cats, in photos. It’s one of the many new technologies Stanley has analyzed through a constitutional lens. A member of the ACLU’s Speech, Privacy and Technology Project and editor of the “Free Future” blog, he acts as both educator and advance scout, looking ahead to anticipate what might infringe on civil liberties down the road.
“The technology is just moving so fast that our legal institutions, our social norms, our intuitions can’t keep up. … My biggest macro concern is … we’re going to constantly feel we are being monitored and evaluated.” —Jay Stanley ’89
“I tend to do the issues for the ACLU that are a little bit in front of the bow,” he says. “They’re so new they haven’t yet become subjects of litigation or lobbying, but the world still wants to know what we think.”
He describes his role as “think-tanky,” a happy fit for a political science major with a philosophy habit on the side. “I feel like I got such a fantastic education,” he says of Williams. “It gave me the breadth I need to think about brand-new technologies and where they may go.”
Stanley’s blog posts carry thought-provoking headlines like “The Costs of Forcing an Online Haven for Racists Off the Internet” and “How Lie Detectors Enable Racial Bias.” And he has spoken and written about a wide range of issues, from Amazon’s digital home assistant Alexa eavesdropping in living rooms to see-through body scanners used to screen for explosive devices in subway stations.
In 2013, Stanley heard that small-town police departments were beginning to use body cameras—a means of government surveillance that at the same time could provide oversight of the police. He wrote a report laying out how privacy threats could be mitigated by clear oversight and accountability policies, enabling the technology to serve both police and the public.
Less than a year after the paper’s publication, a white police officer, Darren Wilson, shot an unarmed black teenager, Michael Brown, in Ferguson, Mo. With conflicting eyewitness accounts and no video recordings of the confrontation, Brown’s family mounted a campaign to get more police around the nation to wear cameras. Stanley’s paper became a reference for city councils across the country in local debates about implementing the technology.
“Too many police departments are not adopting the recommended policies,” he says. “But we at least successfully raised a set of questions that were asked about the technology as it hit the big time.”
Stanley aims to reach as many audiences as he can. He regularly visits law schools and appears on panels from D.C. to Brussels, where he recently spoke about facial recognition technology at a conference about data protection and democracy.
“The technology is just moving so fast that our legal institutions, our social norms, our intuitions can’t keep up and adapt,” he says. “My biggest macro concern is that all these streams of data are going to flow together into a big river, and we’re going to constantly feel we are being monitored and evaluated.”
Ultimately, he says, no matter what the technology is or who is using it—government agency or private corporation—the key for citizens is transparency.
“People have a different sense of what the balance should be between government power and individual protections,” Stanley says. “But everybody, whether they agree with the ACLU or not, should agree we live in a democracy, and these technologies raise serious policy questions [that] should be answered democratically. But that can’t happen if they’re deployed in secret.”
Liz Leyden is a writer living in New Jersey.