In the wake of Donald Trump’s surreal ascension to POTUS, Facebook CEO Mark Zuckerberg publicly defended his company against charges of culpability in Trump’s win. Critics have pointed to Facebook’s propagation of fake news via the site’s algorithmically curated “Trending Topics” module and—perhaps most dastardly of all—within users’ News Feeds as directly responsible for the rise of America’s first potentially authoritarian-ish leader.
Zuck’s public denial exists within a larger debate as to whether Facebook is a media company, and therefore responsible for the content it serves up. My colleague, Sascha Segan, disagrees with me, but I’ve always found this oft-repeated semantic criticism to be completely off the mark. I’m confounded as to why it’s even up for continued debate.
Zuck is absolutely correct when he says that Facebook is not a media company. There is no Facebook Studios producing original (and potentially politically persuasive) content. To me, it is painfully obvious that Facebook is just a platform and nothing more. That’s it, end of story.
Like all digital platforms, Facebook is a tool of those who choose to use it and it reflects their particular personalities and preferences. Nobody would describe Vizio and Panasonic as media companies—they just build the TVs on which we watch movies and shows. Admittedly, TV isn’t exactly an apt comparison for Facebook, in that the social network’s algorithms are working behind the scenes to choose what content is highlighted. In keeping with the TV metaphor, Facebook could be said to choose what shows get primetime slots and what commercials run between them.
Even the most ardent critics of Facebook’s practices don’t believe that Zuck & Co. purposefully tilted the scales of information to support Trump (indeed, Zuck is unabashedly progressive and COO Sheryl Sandberg isn’t particularly shy about her opinions on the president-elect). The problem—as Facebook’s critics see it—comes down to two overlapping issues: 1) Social media makes it extremely easy to secure oneself in a bubble of only like-minded friends and media outlets, and 2) there is a clear financial incentive for digital “entrepreneurs” to craft click-bait “news” articles that often have little relation to reality (an issue that Facebook has battled).
First, let’s dive into the fake news problem. I consider myself to be a fairly sophisticated media consumer. But on occasion I’ve been duped into clicking on (and even sharing) stories from these fiction factories. These sites justify their existence by describing themselves as “satire,” but they are usually as far from Onion-esque wit as you can get. The people running these sites craft their stories (their headlines in particular) to play on readers’ emotions and pre-existing biases. A quick visit to a site like Snopes.com shows just how much nonsense is really out there. I’ve seen many friends, family members, and colleagues fall victim to these sites’ fabrications (many of whom really should know better). It happens.
This fake news industry has really ramped up along with this year’s emotional election cycle. BuzzFeed recently profiled a group of teens in a small town in Macedonia who created a cottage industry that convinced Trump supporters to share and/or click poorly written features with alarmist headlines that only occasionally touch on actual reality. I am choosing not to link to the Macedonians’ website, but I can tell you that it basically ceased operating as of Tuesday—perhaps there are just fewer clickbait pennies to be made now that the election is over.
This dissemination of truthless “news” is further compounded by the fact that users can be inundated with nonsense depending on the social circle they maintain and the news outlets they choose to follow. Perhaps Facebook could tweak its algorithms to downplay links from known unreliable sources (this might be particularly useful in the site’s influential Trending section), but short of a vast intrusion on user freedom, there’s probably not a lot that Facebook can do.
The fault here lies with the user—you and anyone in your social circle who keeps sharing baloney. If a news story sounds a little too good (or bad) to be true, then smart consumers need to be wise enough to check the source (or even take the extra step of consulting a usually-on-the-ball source like Snopes and correcting the record in the comments). Facebook is still a relatively new medium and in many ways, the general public is still playing catch-up.
In 1938, Orson Welles produced an infamous radio adaptation of War of the Worlds, which caused the most reactive and gullible listeners to barricade themselves in their homes in fear of the impending alien invasion (though the myth of that panic has grown over time). Of course, the program was preceded by a clear announcement that a performance was about to take place (and anyone who cared to turn the dial could easily find that no global invasion was actually occurring).
Any panic that happened in response to Welles was certainly not the fault of the platform (in this instance, radio); it was the fault of gullible listeners. A fake news broadcast on a single channel wouldn’t cause the same amount of panic today—audiences’ sophistication has adapted to new mediums over time. The same will take place with social media.
In the same way, all but the most gullible Internet users know that Nigerian princes aren’t actually emailing them; Bill Gates won’t share his fortune with anyone who forwards his email (or, in a modern incarnation, shares his posts); and if the URL for a story comes from theonion.com, you shouldn’t believe the absurd headline.
Perhaps Facebook, Google, Twitter, and other major digital platforms can tweak their algorithms to weed out obvious scams and falsehoods, but the onus is on users to become more sophisticated. Don’t blame the platform.