Donald Trump threatened to close Twitter down a day after the social media giant marked his tweets with a fact-check warning label for the first time. The president followed this threat up with an executive order that would encourage federal regulators to allow tech companies to be held liable for the comments, videos, and other content posted by users on their platforms. As is often the case with this president, his impetuous actions were more than a touch self-serving and legally dubious absent a congressionally legislated regulatory framework.

Despite himself, Trump does raise an interesting issue, namely whether and how we should regulate social media companies such as Twitter and Facebook, as well as the search engines (Google, Bing) that disseminate their content. Section 230 of the Communications Decency Act largely immunizes internet platforms from any liability as a publisher or speaker for third-party content (in contrast to conventional media).

Should this statute be amended? Are there First Amendment considerations of which we should be mindful, as Timothy Wu has suggested? Does market competition per se provide a way forward?

Section 230 itself directed the courts to not hold providers liable for removing content, even if the content is constitutionally protected. Moreover, it doesn't direct the Federal Communications Commission (FCC) to enforce anything, which calls into question whether the FCC does in fact have the existing legal authority to regulate social media (see this article by Harold Feld, senior vice president of the think tank Public Knowledge, for more elaboration on this point). Nor is it clear that vigorous antitrust remedies via the Federal Trade Commission (FTC) would solve the problem, even though FTC Chairman Joe Simons suggested last year that breaking up major technology platforms could be the right remedy to rein in dominant companies and restore competition.

What kind of competition are we talking about here? In traditional competitive markets, breaking up concentrated monopolies should be applauded: for example, if a monopolist tried to buy up all the shoe shine stands or Amazon tried to buy up all of the bookstores in order to jack up prices, DOJ antitrust enforcement would be a good thing.

The problem with social media platforms that are publication or speech vehicles is somewhat different. Hence, it is unclear how breaking up the social media behemoths and turning them into smaller entities would automatically produce competition that would simultaneously solve problems like fake news, revenge porn, cyberbullying, or hate speech. In fact, it might produce the opposite result, much as the elimination of the "fairness doctrine" laid the foundations for the emergence of a multitude of hyper-partisan talk radio shows and later, Fox News.

The aftermath of the abolition of the fairness doctrine points to some of the unresolved contradictions in regulating social media. Market competition for viewers has led to an expansion of viewpoints being expressed in conventional media, as well as on digital platforms, which in turn has led to the explosion of "fake news". This suggests that simply advocating for increased market-based competition in and of itself represents no real regulatory solution. A plethora of mini-Facebooks could well lead to a further profusion of extremism, hate speech, and outright disinformation in the competition for eyeballs and clickbait.

As things stand today, existing legal guidelines for digital platforms in the U.S. fall under Section 230 of the Communications Decency Act. The legislation broadly immunizes internet platforms from any liability as a publisher or speaker for third-party content. By contrast, a platform that publishes digitally can still be held liable for its own content, of course. So, a newspaper such as the New York Times or an online publication such as the Daily Beast could still be held liable for one of its own articles online, but not for its comments section.

Oregon Senator Ron Wyden, an advocate of Section 230, argued that "companies in return for that protection – that they wouldn't be sued indiscriminately – were being responsible in terms of policing their platforms." In other words, the quid pro quo for such immunity was precisely the kind of moderation that is conspicuously lacking today. However, Danielle Citron, a University of Maryland law professor and author of the book Hate Crimes in Cyberspace, suggested there was no quid pro quo in the legislation, noting that "[t]here are countless individuals who are chased offline as a result of cyber mobs and harassment."

Given this ambiguity, many still argue that the immunity conferred by Section 230 is too broad. Last year, Republican Senator Josh Hawley introduced the Ending Support for Internet Censorship Act, which aims to narrow the scope of the immunity that Section 230 confers on large social media companies. Under Hawley's proposals, Google or Bing would not be allowed to arbitrarily limit the range of political ideology available. The proposed legislation would also require the FTC to examine these companies' algorithms as a condition of continuing to grant them immunity under Section 230. Any change in a search engine's algorithm would require pre-clearance from the FTC.

But Hawley's rationale for introducing this legislation is to ensure the political neutrality of digital platforms, so that all political viewpoints are adequately represented. Paradoxically, efforts to enforce a platform's political neutrality could well create disincentives against moderation and in fact encourage platforms to err on the side of extremism (which might inadvertently include the dissemination of misinformation).

This bundle of contradictions is not going to be resolved via an impetuous and self-serving executive order, and a more holistic approach to the issue of social media regulation will likely have to wait until the conclusion of the 2020 election. A mindless pursuit of market competition is hardly a panacea for First Amendment enthusiasts. We managed to have the civil rights revolution even though radio, TV, and Hollywood were regulated, so there is no reason to think that a more robust series of regulations of social media will throw us back into the political dark ages or stifle free expression. The experiment in letting anybody say whatever he or she wants, true or false, and be heard instantly around the world via a tweet or a Facebook post has done less to serve the cause of free speech or enhance the quality of journalism than it has to turn a few social media entrepreneurs into multi-hundred-millionaires or billionaires. Congress therefore should call Trump's bluff on social media by crafting regulation appropriate for the 21st century.

Marshall Auerback
Marshall Auerback is a researcher at Bard College's Levy Economics Institute, a fellow of Economists for Peace and Security, and a writer for the Independent Media Institute.
@Mauerback