Coming to terms with the importance of free speech means coming to terms with the reality that free speech will sometimes be used for abhorrent purposes. We protect bad speech on the grounds that the alternative—censorship—is even worse. But the rise of social media as both a powerful distribution channel and an amplifier of content has created the problem of misinformation and led to questions about whether we’ve found the right balance—or whether charges of “misinformation” actually provide cover for outright censorship.

There is also the critical question of who makes the call: Silicon Valley-based social media giants have rarely had to face consequences for the dissemination of misinformation—or outright distortion in the form of fake news—and have profited mightily from it. But with Trump now ousted, the likes of Facebook and Twitter (both of which were happy to monetize the former president’s words when he was in the White House) are increasingly taking on the role of judge, jury, and executioner.

“So what?” the laissez-faire purists might ask. Unfortunately, this mentality has prevailed for almost half a century: the rollback of government oversight across the private sector has gone from being a reaction against the perceived over-regulation of the 1950s and 1960s to being an end in itself. The common refrain one hears today is “If you don’t like what the social media platforms are doing, the solution is not to complain or to regulate them, but to create a competing platform that enables heterodox views to be disseminated.” In other words, “build your own YouTube.” However, as Glenn Greenwald has illustrated in the case of Parler, competition here is a myth: The “three Silicon Valley monopolies—Amazon, Google and Apple—abruptly united to remove Parler from the internet, exactly at the moment when it became the most-downloaded app in the country.” This had the effect of obliterating any potential competitive threat, and it made a mockery of the idea that an unregulated internet optimizes a “free market of ideas.”

That said, breaking up the tech oligopolies might sound superficially attractive, but social networks like Facebook or search engines like Google tend to become natural monopolies thanks to so-called “network effects.” For search, therefore, one Google is more useful than eight competing search engines. Likewise, the value of a social media platform like Facebook increases with the number of other people on that platform. That force relentlessly pushes in the direction of consolidation.
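
As a rough illustration of why that force is so strong (a Metcalfe-style approximation that the article itself does not invoke): if a platform has n users, its value scales roughly with the number of possible pairwise connections among them.

```latex
% Illustrative approximation only (Metcalfe-style), not a formula from the article.
% V(n): rough value of a platform with n users, taken to be proportional
% to the number of possible pairwise connections among those users.
\[
  V(n) \;\propto\; \frac{n(n-1)}{2} \;\approx\; \frac{n^{2}}{2}
\]
```

On that arithmetic, an incumbent with ten times the users offers roughly a hundred times the connection value, which is why a would-be rival starting from zero rarely stands a chance.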

So, the solution is not to dismantle these large networks (and thereby destroy the network effect benefits), but to do as Cory Doctorow of the Electronic Frontier Foundation has suggested: namely, mitigate the market power issue by reducing the “switching costs,” which he describes as “the things you have to give up when you switch away from one of the big services” (e.g., your followers).

Doctorow elaborates: “You can switch email providers and still connect with your friends; you can change cellular carriers without even having to tell your friends because you get to keep your phone number.” The same principle applies to software that works on a variety of platforms, such as many of the programs offered by Microsoft. Interoperability, the practice of designing new technologies that connect to existing ones, would enable a user to “leave Facebook but continue to connect with the friends, communities and customers who stayed behind… If you don’t like Facebook’s rules (and who does?) you could go somewhere else and still reach the people that matter to you, without having to convince them that it’s time to make a move,” Doctorow writes.
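
To make the idea concrete, here is a minimal sketch in Python, offered purely as an illustration: the service names, message format, and functions below are invented, and are not drawn from Doctorow or from any real protocol. It assumes two independently run services that both accept a common, open message format, so a user who moves to a rival service can still reach contacts who stayed behind, much as email already works across providers.

```python
# Hypothetical sketch of interoperability: two independently run services
# that speak the same open message format. All names here are invented.
from dataclasses import dataclass


@dataclass
class Message:
    sender: str      # e.g., "alice@neighborly.example"
    recipient: str   # e.g., "bob@megasocial.example"
    body: str


class FederatedService:
    """A toy social service that accepts messages from any peer speaking
    the shared format, regardless of which company runs the sender's service."""

    def __init__(self, domain: str):
        self.domain = domain
        self.inboxes = {}  # maps "user@domain" -> list of Message

    def register(self, username: str) -> str:
        address = f"{username}@{self.domain}"
        self.inboxes[address] = []
        return address

    def deliver(self, message: Message) -> None:
        # Delivery depends only on the shared format, not on who owns the
        # sending platform -- that is the interoperability being described.
        self.inboxes.setdefault(message.recipient, []).append(message)


def send(services, message: Message) -> None:
    """Route a message to whichever service hosts the recipient."""
    for service in services:
        if message.recipient.endswith("@" + service.domain):
            service.deliver(message)
            return
    raise ValueError(f"No service hosts {message.recipient}")


# Usage: Alice leaves "megasocial" for a small rival but can still reach Bob.
megasocial = FederatedService("megasocial.example")
neighborly = FederatedService("neighborly.example")
bob = megasocial.register("bob")
alice = neighborly.register("alice")
send([neighborly, megasocial], Message(sender=alice, recipient=bob, body="Still here!"))
print(megasocial.inboxes[bob][0].body)  # -> Still here!
```

Lowering switching costs in this way does not require breaking Facebook up; it only requires that the dominant platforms expose, or be required to expose, such a common interface.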

This addresses one aspect of the problem. But there is also the question of who does the regulating. The internet is a public resource, much as the broadcast spectrum is. That is precisely why the Federal Communications Commission (FCC) regulates the major broadcast companies, such as NBC, CBS, and ABC. The same logic should apply to social media companies, putting them under the jurisdiction of a seasoned regulator like the FCC rather than leaving oversight in the hands of Mark Zuckerberg or Jack Dorsey. Having the government regulate the free-for-all on social media is unlikely to curtail our civil liberties or our democracy. We managed to have the civil rights movement even though radio and television were overseen by the FCC—there is no reason to think that a more robust set of regulations for social media will throw us back into the political dark ages or stifle free expression.

Unfortunately, attempts to establish an overarching regulatory framework for social media have for years failed to find consensus in Washington, and legislative efforts have foundered. That has left us in the absurd position of the president blasting Facebook as “killers” for the company’s alleged failure to combat coronavirus “misinformation.”

Let’s leave aside the fact that our knowledge of COVID-19 and its recommended treatments is constantly evolving: today’s “misinformation” could well be tomorrow’s key scientific insight. The real problem with President Biden’s approach is that it perpetuates the status quo by placing the burden of content regulation back in the hands of these social media behemoths, at a time when network effects have created a robust oligopoly that precludes effective external competition in the marketplace of ideas.

A case could be made that revoking Section 230 of the Communications Decency Act (CDA) would solve this problem. When Congress passed the CDA in 1996, Section 230 stipulated that providers of internet forums would not be liable for user-posted speech, even if they selectively censored some material. Steve Randy Waldman has argued that removing this liability shield would curb the problem of misinformation or “fake news”:

If made liable for posts flagged as defamatory or unlawful, mass-market platforms including Facebook and Twitter would likely switch to a policy of taking down those posts automatically. Incentives would shift: Mass platforms would have to find a balance among maximizing viewership, encouraging responsible posting, and discouraging users who frivolously flag other people’s posts. They would no longer get a free pass when publishing ads that are false or defamatory. Even these platforms’ highest-profile users could not assume that everything they posted would be amplified to millions of other people.

But Waldman’s proposal would still leave the responsibility for content moderation in the hands of the social media platforms themselves. And while they might curb misinformation, they might also stifle a genuinely diverse media ecosystem and render it as bland as network television.

Far better to leave the oversight function in the hands of the FCC, with a right to appeal to the courts as a last resort. Currently, “the FCC’s ability to regulate on behalf of the public interest is in many ways confined to the narrow context of broadcasting,” according to Philip M. Napoli of Duke University. Consequently, Congress would need to reimagine the FCC’s concept of public interest in order to justify expanding its regulatory remit into the realm of social media. Napoli has suggested that:

Massive aggregations of [private] user data provide the economic engine for Facebook, Google, and beyond. … If we understand aggregate user data as a public resource, then just as broadcast licensees must abide by public interest obligations in exchange for the privilege of monetizing the broadcast spectrum, so too should large digital platforms abide by public interest obligations in exchange for the privilege of monetizing our data.

Of course, agencies have their biases. And as we see at the Supreme Court, competing ideas of what constitutes “public interest” would likely vary between Democratic and Republican administrations and their appointees. Because this would be an entirely new regulatory space, it would not be surprising for the FCC to gravitate toward the standards it has used to regulate traditional broadcast networks, so Congress should provide guidance in the form of a new legislative framework. The experiment in letting anyone say whatever he or she wants, true or false, and be heard instantly around the world at the push of a button has done less to serve the cause of free speech or enhance the quality of journalism than it has to turn a few social media entrepreneurs into hundred-millionaires and billionaires. There is a clear role here for government oversight.

Yes, government regulation is vulnerable to lobbyist pressure and regulatory capture, but the reactions and counterreactions of democratic politics should, over time, push the regulator toward something approaching institutional neutrality. In any case, it would be better than leaving this crucial function in the hands of a few wealthy, self-interested tech oligarchs, as is the case today.

Marshall Auerback
Marshall Auerback is a researcher at Bard College's Levy Economics Institute, a fellow of Economists for Peace and Security, and a writer for the Independent Media Institute.
@Mauerback