How Should We Regulate the Social Media Companies? Hint: More Competition Might Not Be the Answer
Donald Trump threatened to close Twitter down a day after the social media giant marked his tweets with a fact-check warning label for the first time. The president followed this threat up with an executive order that would encourage federal regulators to allow tech companies to be held liable for the comments, videos, and other content posted by users on their platforms. As is often the case with this president, his impetuous actions were more than a touch self-serving and legally dubious absent a congressionally legislated regulatory framework.
Despite himself, Trump does raise an interesting issue—namely whether and how we should regulate the social media companies such as Twitter and Facebook, as well as the search engines (Google, Bing) that disseminate their content. Section 230 of the Communications Decency Act largely immunizes internet platforms from any liability as a publisher or speaker for third-party content (in contrast to conventional media).
Should this statute be amended? Are there First Amendment considerations of which we should be mindful, as Timothy Wu has suggested? Does market competition per se provide a way forward?
Section 230 itself directs the courts not to hold providers liable for removing content, even if that content is constitutionally protected. Moreover, it does not direct the Federal Communications Commission (FCC) to enforce anything, which calls into question whether the FCC in fact has the legal authority to regulate social media (see this article by Harold Feld, senior vice president of the think tank Public Knowledge, for more elaboration on this point). Nor is it clear that vigorous antitrust remedies via the Federal Trade Commission (FTC) would solve the problem, even though FTC Chairman Joe Simons suggested last year that breaking up major technology platforms could be the right remedy to rein in dominant companies and restore competition.
What kind of competition are we talking about here? In traditional competitive markets, breaking up concentrated monopolies should be applauded: for example, if a monopolist tried to buy up all the shoe shine stands or Amazon tried to buy up all of the bookstores in order to jack up prices, DOJ antitrust enforcement would be a good thing.
The problem with social media platforms that are publication or speech vehicles is somewhat different. Hence, it is unclear how breaking up the social media behemoths and turning them into smaller entities would automatically produce competition that would simultaneously solve problems like fake news, revenge porn, cyberbullying, or hate speech. In fact, it might produce the opposite result, much as the elimination of the “fairness doctrine” laid the foundations for the emergence of a multitude of hyper-partisan talk radio shows and later, Fox News.
The aftermath of the abolition of the fairness doctrine points to some of the unresolved contradictions in regulating social media. Market competition for viewers has led to an expansion of viewpoints expressed in conventional media as well as on digital platforms, which in turn has fueled the explosion of "fake news." This suggests that simply advocating increased market-based competition is, in and of itself, no real regulatory solution. A plethora of mini-Facebooks could well lead to a further profusion of extremism, hate speech, and outright disinformation in the competition for eyeballs and clickbait.
As noted above, existing legal guidelines for digital platforms in the U.S. fall under Section 230 of the Communications Decency Act, which broadly immunizes internet platforms from liability as a publisher or speaker for third-party content. By contrast, a platform that publishes digitally can still be held liable for its own content. So a newspaper such as the New York Times or an online publication such as the Daily Beast could still be held liable for one of its own articles online, but not for its comments section.
Oregon Senator Ron Wyden, an advocate of Section 230, argued that “companies in return for that protection—that they wouldn’t be sued indiscriminately—were being responsible in terms of policing their platforms.” In other words, the quid pro quo for such immunity was precisely the kind of moderation that is conspicuously lacking today. However, Danielle Citron, a University of Maryland law professor and author of the book Hate Crimes in Cyberspace, suggested there was no quid pro quo in the legislation, noting that “[t]here are countless individuals who are chased offline as a result of cyber mobs and harassment.”
Given this ambiguity, many still argue that the immunity conferred by Section 230 is too broad. Last year, Republican Senator Josh Hawley introduced the Ending Support for Internet Censorship Act, the aim being to narrow the scope of immunity conferred on large social media companies by Section 230 of the Communications Decency Act. Under Hawley’s proposals, Google or Bing would not be allowed to arbitrarily limit the range of political ideology available. The proposed legislation would also require the FTC to examine the algorithms as a condition of continuing to give these companies immunity under Section 230. Any change in the search engine algorithm would require pre-clearance from the FTC.
But Hawley's rationale for introducing this legislation is to ensure the neutrality of digital platforms, so that all political viewpoints are adequately represented. Paradoxically, efforts to enforce a platform's political neutrality could well create disincentives against moderation and in fact encourage platforms to err on the side of extremism (which might inadvertently include the dissemination of misinformation).
This bundle of contradictions is not going to be resolved via an impetuous and self-serving executive order, and a more holistic approach to social media regulation will likely have to wait until after the 2020 election. A mindless pursuit of market competition is hardly a panacea for First Amendment enthusiasts. We managed to have the civil rights revolution even though radio, TV, and Hollywood were regulated, so there is no reason to think that a more robust set of social media regulations will throw us back into the political dark ages or stifle free expression. The experiment in letting anybody say whatever he or she wants, true or false, and be heard instantly around the world via a tweet or a Facebook post has done less to serve the cause of free speech or enhance the quality of journalism than it has to turn a few social media entrepreneurs into multi-hundred millionaires or billionaires. Congress should therefore call Trump's bluff on social media by crafting regulation appropriate for the 21st century.