Social media’s unseemly interest in children, and legislators’ hesitance to confront it, resembles nothing quite so much as the dynamic surrounding the child labor practices of the 19th century. In the late 1800s, new technology emerging from the Industrial Revolution offered commercial interests a new opportunity to monetize childhood—and those commercial interests took it. At the time, there were about 765,000 American child laborers, many working in extremely unsafe industrial conditions. Children suffered numerous harms, from crushed limbs and broken bones to death by industrial accidents like factory fires.
Big Tech’s social media platforms are similarly exploiting children today. And just as policymakers needed to act to protect children then, they must do the same now. The Digital Revolution has created new ways to exploit children for profit, and Big Tech has seized the opportunity enthusiastically. Seven out of ten American teens are on Instagram, and almost half are “almost constantly” online. Despite laws that supposedly restrict social media use by kids under 13, almost 40% of kids under that age use Instagram every day. Social media is where American young people now live their social lives. This is exactly how Silicon Valley likes it. As I argued in the Spring 2022 issue of National Affairs, it’s perfectly clear why social media wants kids:
Luring children onto social media—and keeping them there—is a top priority for online platforms. This is because, like all social media users, children are not so much the customer as they are the labor. The platforms induce them to produce the content that engages other children and that, in the cycle of virtual affirmation these companies deliberately engineer, drives them to keep on engaging and producing. The sale of this captured attention to advertisers is big business—the industry’s advertising revenue was projected to reach over $56 billion in 2021. Each platform’s success relies on attracting and retaining a critical mass of such producers. When it comes to social media’s economic imperatives, nothing could be a more vital strategic priority than recruiting and retaining the youngest users.
Big Tech executives themselves have repeatedly confirmed that this is indeed their motivation—and denied any responsibility for the consequences. Facebook spokesperson Dani Lever put it bluntly: “We have no commercial or moral incentive to do anything other than give the maximum number of people as much of a positive experience as possible.” Even a teenage “user” base (that is, labor supply) isn’t young enough. Internal documents leaked from Facebook in 2021 make this clear: “Our ultimate goal is messaging primacy with U.S. tweens [i.e., ages 10-12], which may also lead to winning with teens.”
Researchers have said they cannot find a better or more plausible explanation for the exploding mental- and emotional-health crisis among American kids. Self-harm by girls ages 10 to 14, as measured by hospital admission rates, doubled between 2010 and 2014, the same period in which teenagers moved their social lives online en masse. Instagram’s own internal research, leaked to the Wall Street Journal and extensively investigated by Congress, reveals the immense damage that social media causes children. As I summarized in National Affairs:
The researchers asked a group of teenagers who had experienced one of several mental- or emotional-health challenges in the past month whether a given issue had started “on Instagram.” An astounding 42% reported their feelings of not having enough money started on the platform (perhaps an indication of the commercial impulses animating what kids are shown on social media). Similarly, 41% reported that their feelings of not being attractive began on Instagram, 39% said the same regarding pressure to have a perfect image, and one in four indicated as much regarding their sense of not being good enough. A tenth reported that their depression began on Instagram, 9% that their desire to self-harm did, and 6% that their desire to kill themselves did. In commentary attempting to mitigate these statistics, Facebook clarified an error in a related PowerPoint slide: “The estimates for ‘wanted to hurt themselves’ and ‘wanted to kill themselves’ should be flipped.”
And as in the prior era of child labor, industry interests are crassly spinning the issue to persuade policymakers that acting would somehow be against their principles. Banning child labor was called an unacceptable intrusion into the home, contrary to children’s own interests, and an infringement on the prerogatives of the private sector. But after many decades of struggle, policymakers came to realize that the protection of childhood was the more important value—and that in elevating that principle to its proper place and prioritizing the protection of children, other principles were better served: the free market made healthier, and the family made stronger.
The forthcoming Senate Commerce Committee consideration of the Kids Online Safety Act is an important opportunity for Congress to confront the exploitation of children as it should: not by weighing children’s welfare against abstract claims of market efficiency or innovation, but from the non-negotiable premise that the protection of children comes first. The bill is the product of bipartisan effort between Senators Marsha Blackburn (R-TN) and Richard Blumenthal (D-CT), whose partnership in holding Big Tech accountable has resulted in numerous hearings exposing the dangers of social media.
The bill contains numerous measures worthy of support. It directs the federal government to study technologically feasible ways to verify age online while preserving anonymity and privacy, an essential problem to solve if we are to have any hope of governing the digital age effectively and wisely. It experiments with new parental tools that can alter the functions and features of social media accounts for children, reflecting the understanding that the harms of social media stem not just from how much time a child spends online, but from how social media is designed. It imposes a legal duty of care on social media platforms—a reasonable standard, given that we entrust America’s children to their care for hours every day. And it sensibly applies its measures to children 16 and younger, a great improvement over the current, toothless age threshold of 13 set by the Children’s Online Privacy Protection Act (COPPA).
These measures are pragmatic, implementable, and eminently reasonable. If Congress wishes to hold Big Tech accountable for its harmful impact on children—an impact that its own hearings have roundly confirmed—this is a good place to start. Parents are worried for their children, and they need help. Even the best parents often find themselves outmatched by social media’s deliberately addictive design, and by the network effects that result from all their child’s friends being unceasingly online. As with tobacco and alcohol, the right answer is for policy to back parents up in sensible ways. We expect parents to protect their kids from harmful substances, for example, but we also support them in that task by making those substances illegal. Policy intervention is needed to support parents in protecting their kids from social media, too.
In 1906, Republican Indiana Senator Albert Beveridge—one of the champions in the fight to ban exploitative child labor—starkly outlined the stakes: “We cannot permit any man or corporation to stunt the bodies, minds, and souls of American children. We cannot thus wreck the future of the American republic.” The same challenge is facing Congress now: to protect America’s children from Big Tech firms content to harm them for money. Let’s not leave children unprotected for decades before answering the call this time around.
American Compass policy director Chris Griswold discusses the historical parallels between child labor in the 19th century and kids’ use of social media today, and suggests steps that policymakers can take to protect them from its harms.