Commentary: Facebook's Algorithm vs. Democracy
COMMENTARY: From filter bubbles to micro-targeting, Facebook has upended the democratic process.
Over the last several years, Facebook has been participating—unintentionally—in the erosion of democracy.
The social network may feel like a modern town square, but thanks to its tangle of algorithms, it’s nothing like the public forums of the past. The company determines, according to its interests and those of its shareholders, what we see and learn on its social network. The result has been a loss of focus on critical national issues, an erosion of civil disagreement, and a threat to democracy itself.
Facebook is just one part—though a large part—of the Big Data economy, one built on math-powered applications that are based on choices made by fallible human beings. Many of the algorithms built for the Big Data economy contain mistakes that can undermine solutions to the problems they hope to solve.
In 2008, when the economy crashed, I witnessed the power of these “Weapons of Math Destruction” firsthand from my desk at a hedge fund in New York City. Since then, I have used my training as a mathematician to study how these algorithms can influence our lives, and what I’ve found has not always been positive.
In many cases, WMDs define their own reality to justify their results. Facebook’s algorithm, for example, contains such a feedback loop: Engaging content is shared more, causing the algorithm to find more content like it to show to Facebook users. In some cases, it shows people photos of their best friend’s newborn. In others, it serves links to strangely compelling articles of dubious origin and veracity.
Data scientists working for Facebook are singularly focused on extending the length of time each of us remains “engaged” on the platform. Facebook does not optimize for truth, for learning, or for civil conversation. It measures success in clicks, likes, and comments, which accumulate whether or not we are having high-quality conversations. The more engagement, the more data the company can use to sell advertisements.
So far, Facebook has made some small steps toward vetting the content of our newsfeeds for truthfulness and holding its advertisers to reasonable standards. “We have always taken this seriously, we understand how important the issue is for our community and we are committed to getting this right,” Facebook CEO Mark Zuckerberg said in a statement a few weeks ago. However, based on his public statements, those efforts will likely rely heavily on algorithms and user reporting, which can both be gamed by bad actors. It’s too early to tell how effective those measures will be.
But what we do know is that as long as content on Facebook resonates with people, the platform’s algorithm ensures that it thrives.
Getting Out the Vote
Facebook’s reach is expansive. As I write this, about two-thirds of American adults have a profile on Facebook. They spend an average of 50 minutes a day on the site, and 60% of them count on Facebook to deliver at least some of their political news. Young people, especially, get their news from Facebook, and they are particularly bad at distinguishing real news sources from fake ones.
The platform’s reach doesn’t necessarily translate into influence, but Facebook has demonstrated that it can sway the democratic process. During the 2010 and 2012 elections, Facebook conducted experiments to hone a tool it called the “voter megaphone.” The idea was to encourage people to spread word that they had voted. By sprinkling people’s news feeds with “I Voted” updates, Facebook was encouraging Americans—more than 61 million of them at the time—to carry out their civic duty and make their voices heard.
Facebook can use knowledge gained through experiments to influence people’s actions.
The messages that Facebook users received included a display of photos of six of the user’s Facebook friends, randomly selected, who had clicked the “I Voted” button. The researchers also studied two control groups, each numbering around 600,000. One group saw the “I Voted” campaign, but without the pictures of friends. The other received nothing at all.
By posting about people’s voting behavior, the site used peer pressure to encourage people to vote. Other studies have shown that the quiet satisfaction of carrying out a civic duty is less likely to move people than the possible judgment of friends and neighbors.
Within hours, Facebook harvested information from tens of millions of people—or more—measuring the impact that their words and shared links had on each other. And it demonstrated how it could use that knowledge to influence people’s actions, which in this case happened to be voting.
People paid much more attention when the “I Voted” updates came from friends, and they were more likely to share those updates. About 20% of the people who saw that their friends had voted also clicked on the “I Voted” button. Among those who saw the message without friends’ photos, only 18% did. We can’t be sure that all the people who clicked the button actually voted or that those who didn’t click it stayed home. But assuming they did, researchers estimated that the campaign had increased turnout by 340,000 people. To put that number in perspective, just 80,000 votes decided the most recent presidential election. In other words, by tweaking its algorithm and selecting the news that we see, Facebook can influence the political system.
It’s not all Facebook’s fault, of course. We pick our own friends, and we contribute to our own miseducation by keeping our friendships narrow, unfriending people who disagree with us, being disagreeable ourselves, and clicking on and sharing stories that appeal to our biases without fact-checking.
When we only talk to our friends, we end up becoming more and more partisan as loud, emphatic voices overwhelm those who are less sure; echo chambers do not lend themselves to a broadening of minds. We also tend to talk on and on about the same topics in the same ways rather than considering a broad range of perspectives, as the Wall Street Journal showed in its “Blue Feed, Red Feed” experiment, which examined the content of conservative versus liberal Facebook environments.
One way of testing how Facebook’s algorithms influence the quality of these feeds is to have people switch feeds for a while. This is exactly the experiment conducted by the Guardian, as described in the article “Bursting the Facebook Bubble.” One participant described the experience as being close to waterboarding, while another mentioned that he hadn’t known positive coverage of Hillary Clinton even existed. But the most striking reaction, voiced on both sides, was the conviction that the other side was being given wrong information. As one participant put it, it was “like reading a book by a fool.”
Thanks in part to filtering and personalization by Facebook and other sites, our information has become deeply unbalanced, skewed, and unmoored. Where we used to have serious newspaper editorial teams sifting fact from opinion, we now have Facebook algorithms optimizing for engagement and ad revenue. Where we used to have contextualized and consistent reporting, we now have decontextualized feeds that put real news from the Washington Post on the same footing as fake news from AmericanNews.com.
Which is not to say that old-school newspapers did a perfect job, but their reputation—and thus subscription and advertiser revenue—relied on getting the facts right. That’s more than we can say about the modern made-for-Facebook fake news sites.
The Media’s Quandary
Come to think of it, if we’re complaining about bad information coming from bad news, shouldn’t we stop blaming Facebook and instead blame the media?
It’s tempting, but the story isn’t that simple. The media has been under siege for years by social media—and in particular by Facebook. It’s a complicated story, but the short version is that all newspapers, even big ones like the New York Times , depend crucially on Facebook for traffic. Without the traffic from Facebook, their readership would plummet.
In order to get that traffic, they allow and encourage people to share stories from their websites via the ubiquitous Facebook button, which carries with it code that gives Facebook access to masses of user data on the New York Times and other publications’ sites. The “share” button and the data it gathers further bolster Facebook’s position as the largest and most knowledgeable targeted-advertising platform on the web.
This dependency on Facebook, in the meantime, creates incentives within media companies to serve up the most clicked and shared story. Often, those stories are not what we would traditionally think of as quality journalism.
For publishers of journalism, there is no viable alternative: Facebook is in charge, at least for now, so media companies have to follow its lead.
Perils of Micro-Targeting
It’s not just the media that has been upended by Facebook, but also political campaigns. Unlike newspapers, though, campaigns have been able to nimbly adapt to this new era by deploying micro-targeted political ads aimed at tightly defined groups of people. These ads earned Facebook upwards of $1 billion this election cycle.
The targeting begins even before the ads are run. In late 2015, the Guardian reported that a political data firm, Cambridge Analytica, had paid academics in the U.K. to amass Facebook profiles of U.S. voters, complete with demographic details and records of each user’s “likes.” The firm used this information to develop psychographic analyses of more than 40 million voters, ranking each on the “big five” personality traits: openness, conscientiousness, extroversion, agreeableness, and neuroticism.
Groups working with the Ted Cruz presidential campaign then used these studies to develop television commercials targeted at different types of voters, placing them in the programming they’d be most likely to watch. When the Republican Jewish Coalition was meeting at the Venetian in Las Vegas in May 2015, for instance, the Cruz campaign unleashed a series of Web-based advertisements, visible only inside the hotel complex, that emphasized Cruz’s devotion to Israel and its security.
Micro-targeting has made it virtually impossible to hold political campaigns accountable.
According to a Bloomberg article, Cambridge Analytica used those same profiles of American voters at the very end of Trump’s campaign to “bombard” them with Facebook ads. The messages were not limited to pro-Trump advertisements. The campaign also used a three-pronged approach to keep certain groups of people from the polls—”voter suppression,” they called it. The targets were idealistic white liberals, young women, and African Americans, according to a Trump campaign official, and each group was sent ads designed to discourage them from voting. One of the ads aimed at suppressing the African American vote reportedly involved a South Park-style animation entitled “Hillary Thinks African Americans are Super Predators.”
Clinton’s campaign ran a “get out the vote” micro-targeting campaign that relied heavily on Facebook’s user data to encourage people to vote for the candidate. In one series of ads, it targeted people in Ohio aged 18–34, along with people in Pennsylvania and Florida who had an African-American “affinity,” Facebook’s proxy for race, which is based on the content users like and share. News of this campaign was reported by the New York Times, which built a browser add-on for readers to submit micro-targeted political ads to the paper.
Micro-targeting through the use of algorithms has made it virtually impossible to hold political campaigns accountable. A nosy journalist used to be able to document how a politician’s message changed from town to town simply by following a campaign and paying close attention. Now, that information is much harder to come by. Such transparency is a relic of the past.
End of Accountability?
As a thought experiment, what would the modern journalistic snoop have to do to hold politicians accountable? In today’s data- and algorithm-driven era, it may not be possible because of the sheer amount of work required to fool the algorithms.
It is not enough simply to visit a candidate’s web page, because campaigns, too, use reams of data to automatically profile and target each visitor, weighing everything from zip codes to the links they click on the page—even the photos they appear to look at.
It’s also fruitless to create dozens of “fake” profiles to probe the myriad ways in which campaigns target voters, because the campaigns’ systems are hard to fool. They associate each real voter with a wealth of deep, accumulated knowledge, including purchasing records, addresses, phone numbers, voting records, and even social security numbers and Facebook profiles. For a fake profile to convince the system it’s real, it would have to come with its own load of realistic-looking data. Fabricating just one would require an enormous amount of work for a research project (and in the worst-case scenario, it might get the investigator tangled up in fraud).
The depth of secrecy surrounding both political campaigns’ targeted messaging and Facebook’s algorithms has created a system that is easily exploited by fake, misleading, and manipulative information. It has become increasingly difficult for voters to carefully and rationally compare politicians on the issues.
For democracy to function, it needs elections, voters, candidates, and an informed public. The last is absolutely crucial to a healthy democracy. People need to be aware of what politicians actually think, what their plans are, and how they propose to fulfill their promises. It’s the kind of information that’s getting hard—maybe impossible—to come by in the modern Facebook world.
Portions of this article are based on chapters in Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, by Cathy O’Neil, published by Crown Random House.
Image credit: Tim De Chant/WGBH