In an ideal world, American voters would rely on accurate, carefully reported news to inform their decisions. But in the final months of this election, fake or misreported news was rampant, subtly shifting public opinion and arguably influencing the result.
In the last three months of the election, the top fake political news stories garnered more engagement than the top stories from mainstream media outlets. An analysis by BuzzFeed found that the top 20 hoax stories alone drew 8,711,000 shares, reactions, and comments on Facebook, while the top 20 mainstream news stories earned only 7,367,000.
Despite Mark Zuckerberg's likely accurate claim that less than 1% of the content available on Facebook is false, the reality is that hoax news—by design and by human nature—now spreads more quickly and is more viral than factual reporting.
Why is it important?
Research shows 44% of all American adults get news on Facebook and 61% of millennials use Facebook as their primary news source. The prevalence of hoax news on the site is a big deal. By infecting what we know, it affects what we think, how we vote, and how we act in our democracy.
Two pieces of the conversation are being overlooked:
- The term "fake news" ignores the difference between fabricated stories and false political reporting. To quote New Yorker writer Jelani Cobb, "fake news is 'Angelina Jolie to Leave Brad Pitt for Space Alien.' What we're talking about now is propaganda."
- Propaganda on Facebook is one small piece of the larger problem of the "echo chamber" or "filter bubble" created by insular communities on social media—and in real life. From the process of choosing like-minded friends to the personalization of our search results, we are both self-segregating and being siloed based on our views.
Should Facebook do more to prevent the spread of fake news?
Facebook is already going above and beyond to prevent fake news, and it shouldn't have to. It is a communication platform, not a media outlet; it gives us what we want. Facebook has no obligation to change unless the government regulates it. And consumers wouldn't benefit anyway: another company would simply take its place.
Mark Zuckerberg has already said Facebook is pursuing projects to prevent misinformation: improving detection, exploring a warning system, changing advertising policies for fake news sources, and more.
These anti-hoax measures are the real problem, not fake news.
First, they are not in Facebook's financial interest. We may see it as a service, but it is a company with shareholders. If anti-hoax policies prioritized one political side over another (and studies suggest they would disproportionately affect right-leaning sites), Facebook would lose its neutral status and, with it, users. Further, optimizing for anything but engagement by definition decreases the attention paid to the site, which in turn cuts into advertising revenue.
Second, and more importantly, allowing Facebook to filter what people see for political reasons effectively sanctions information control. If Facebook's leaders someday hold a different political point of view than you do, do you want them deciding what is and is not appropriate news? Or would you rather that choice be left to us, the public, trusting us to be conscientious about what we read and share?
Facebook can and should prevent itself from being polluted by fake news. By declining to address it, Facebook isn't staying neutral; it is letting misinformation drive user behavior instead.
Mark Zuckerberg says Facebook need not prevent the spread of fake news because it has "no influence" on elections. But Facebook repeatedly boasts that it affects user behavior. The company has proudly highlighted how it can use social influence to get people to vote. It tells advertisers how well the platform drives consumers to purchase their products. Facebook affects how people behave in all these scenarios, but not at the ballot box? That's just not true.
Gmail's spam filter is an example of filtering communications and information for good: it keeps our inboxes from being polluted by junk, including fake advertising, unwanted solicitations, hoax messages, and potentially dangerous viruses.
The need to address fake news leads to an understandable concern: who decides what is acceptable news, and how? There are real risks here to freedom of information, but we can address them while also curtailing fake news. Useful suggestions include:
- Make it easier for users to report fake news.
- Allow media sites to send metadata about fact-checking.
- Expand systems to verify sources (and remove verification if abused).
- Recommend information outside a user's usual choices.
... and many more. It's time to seriously address a problem that is damaging our decision-making and our democracy.
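The fact-checking metadata suggested above already has one concrete, widely used form: Schema.org's ClaimReview markup, which fact-checking sites can embed in their pages so platforms can machine-read a verdict about a specific claim. A minimal sketch, with hypothetical URLs, names, and claim text:

```json
{
  "@context": "https://schema.org",
  "@type": "ClaimReview",
  "url": "https://example-factchecker.org/reviews/12345",
  "claimReviewed": "Hypothetical viral claim being checked",
  "itemReviewed": {
    "@type": "Claim",
    "appearance": {
      "@type": "CreativeWork",
      "url": "https://example.com/original-story"
    }
  },
  "author": { "@type": "Organization", "name": "Example Fact-Checker" },
  "reviewRating": {
    "@type": "Rating",
    "ratingValue": "1",
    "bestRating": "5",
    "worstRating": "1",
    "alternateName": "False"
  }
}
```

A platform ingesting markup like this could, for instance, surface the "False" rating alongside shares of the original story rather than deciding the question itself.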
- BuzzFeed's analysis of how fake news spreads on Facebook
  - "Up until those last three months of the campaign, the top election content from major outlets had easily outpaced that of fake election news on Facebook. Then, as the election drew closer, engagement for fake content on Facebook skyrocketed and surpassed that of the content from major news outlets."
- Jeff Jarvis' suggestions for combating fake news
  - "Key to our suggestions is sharing more information to help users make better-informed decisions in their conversations: signals of credibility and authority from Facebook to users, from media to Facebook, and from users to Facebook."
- Ben Thompson on the risk in deciding what news is acceptable (login required)
  - "The cautionary tale that 'fake news is bad' writes itself. My takeaway, though, is the exact opposite: it matters less what is fake and more who decides what is news in the first place."
- Vox on how fake news predates Facebook
  - "We might be tempted to regard this story as one needing a technological fix: After all, Facebook's algorithms caused the problem, so they should be able to fix it too. Yet such techno-solutionism obscures the broader context of media and politics that fertilized the ground for many fake news sites to thrive, especially on the right."