Attack social media’s dangerous design features
What’s the Problem?
- Many of social media’s core features, from public display of private information to algorithmic targeting to easily available harmful content, are directly hurting children.
- These features are deliberate design choices that seek to maximize engagement, and even addiction, regardless of the harm to children.
- Parents cannot adequately protect their children from these harms.
Reining in Silicon Valley
Children who use social media are more likely to suffer from anxiety, depression, and self-harm, are more vulnerable to exploitation, and are more at risk of exposure to dangerous or illicit material. Many of these harms result from social media’s core design features. Widespread and public image-sharing enables the social comparison pressures that drive so much of the mental-health harm to kids. It also creates a permanent and public record that exposes children to predators and other risks.
Algorithmic recommendations of content and targeted advertising are designed to maximize user engagement and platform revenue. They rely on aggressive data collection, invasions of privacy that minors rarely realize they are allowing, and manipulation of their attention. Inappropriate, dangerous, and illicit content is frequently posted and left ungated on the platforms, from pornography and depictions of abuse to glorification of illicit drug use, eating disorders, and self-harm. Even when children are not looking for such content, algorithms regularly deliver it to them.
Social media companies have little incentive to rethink these design choices. After all, their business model depends on them.
What’s the Solution?
The United States should regulate children’s access to social media, taking the same steps to provide a safe environment online that we take for granted as necessary in the physical world. Congress should:
- Prohibit the public display of images uploaded by minors, and require accounts held by minors to be restricted to private settings only;
- Impose stiff, economically meaningful fines for each instance of displaying harmful content to minors; and
- Ban targeted advertising and restrict algorithmic content recommendation aimed at children.
Innovate to Protect
The most straightforward way to address social media’s dangerous features is to simply prohibit and penalize them. Social media platforms should ensure that minors’ accounts operate on “private” settings, giving only approved family and friends access to the user’s content, especially images. The same indecency restrictions applied on the airwaves should also apply to the content children access online.
Policymakers need not dictate how to prevent the display of harmful content to children. The law simply needs to establish clear and economically meaningful consequences for failure. Silicon Valley prides itself on rapid innovation; with sufficiently stiff fines at stake for every failure, that innovation will quickly be channeled toward protecting kids. Finally, platforms should be prohibited from targeting advertising at children, which will help protect privacy while reducing the incentive to attract and addict children to the platforms in the first place, and should be restricted from recommending content to them algorithmically.
Frequently Raised Objections
“Correlation does not equal causation. We don’t know that social media causes harm.”
Research shows that the most likely explanation for the recent decline in mental health among children is the migration of social lives to social media. Social media’s own leaked internal research confirms the harm—and shows how children themselves attribute their anxiety and depression to social media.
“Private companies should not be told how they can and cannot design their products.”
The law sets parameters on private sector product design all the time. Drugs, food, and other consumable products must meet stringent safety standards. When it comes to protecting children, we are especially willing to impose regulatory burdens on the private sector. Banning the sale of alcohol and tobacco to minors is a “burden,” which surely slows innovation in the candy-flavored booze market. Good.
“We should use market incentives to encourage private solutions.”
The private sector had years to address this problem and chose not to. Social media executives openly admit that hooking children on their product is a core element of their business model. Stopping social media from monetizing childhood will therefore require policy intervention, because the industry’s economic incentives push it to hook the next generation of “customers.” In truth, “laborers” is the better term, because social media’s users are not its customers. Advertisers and purchasers of user-generated data are its real customers; users are the ones who generate the value that social media sells to others.
Further Reading
Chris Griswold. “Protecting Children from Social Media.” National Affairs, 2022. A policy essay that provides solutions for how to minimize harm to children from social media.
Oren Cass. “Governing After a Revolution.” American Compass, 2021. An overview of the challenges associated with governing well in the digital age.
Jonathan Haidt. “The Dangerous Experiment on Teen Girls.” The Atlantic, 2021. An overview of evidence that social media is driving mental health harms for young people.
Yuval Levin. “How Changing One Law Could Protect Kids from Social Media.” The New York Times, 2022. An essay arguing for social media age restrictions.
“An Online Age-Verification System.” American Compass, 2022. A policy brief advocating for requiring robust age verification on social media platforms.
Kids Online Safety Act. Introduced by Senators Marsha Blackburn and Richard Blumenthal, 2023. A bipartisan bill to create safeguards for children on social media, impose new obligations on online platforms, and provide parents with new tools to protect their children online.
Protecting Kids on Social Media Act. Introduced by Senators Tom Cotton, Brian Schatz, Chris Murphy, and Katie Britt, 2023. A bipartisan bill to set a minimum age to use social media apps, require parental consent, and restrict use of algorithms to recommend content to minors.