Experts consider the labor-market implications of the other GPT: general purpose technology.

Participants

Oren Cass, Chief Economist, American Compass
Jack Clark, Co-Founder, Anthropic
Chris Griswold, Policy Director, American Compass
Matt Sigelman, President, Burning Glass Institute

Oren Cass: Embracing past as prologue, I thought it would be helpful to start with a look at what the labor market experienced during the last major wave of technological change, led by computers and the Internet. Some people argue that this was “skill-biased” change that benefited one class of workers while harming another, but that doesn’t show up in much of the productivity data: it is precisely for those lower- to middle-skilled workers that productivity growth seems to have stagnated. In manufacturing, productivity has been declining. How do you interpret what has been happening?

Matt Sigelman: The question of skill-biased change is one of divergence. However, the mechanism of divergence may have less to do with the rewards to financial capital than with the rewards to human capital. Specifically, divergence happens in a few ways: Workers with high levels of engagement with technology are better able to use new applications, and, in the broadest sense, the people who benefit most are those in roles that require higher levels of skill. This could be because these workers are able to apply technology in their roles more easily, because they may be able to extend their working years, or because recent technological changes have made them more productive individually. This is especially true in industries where it is difficult to capture standardized output and individual performance instead drives success, such as finance, law, consulting, and tech.

It’s worth noting that the types of roles that benefit from skill-biased change exist in every industry. For example, the engineers in manufacturing have benefited more than the production associates. It is important to apply an occupational lens to broader output statistics, though this is hard to do with the generally available economic and labor statistics from the BLS and BEA. These aggregates mask what is happening to different classes of workers and so create a false impression of homogeneity.

Jack Clark: Generally, it feels as though productivity growth shows up for a subset of outperforming companies and/or individuals in each industry; certain people and certain firms tend to be aggressive and early adopters of technology and to benefit from it. There’s also a confounding aspect which is that investing to increase productivity is different for ephemeral software-based stuff versus physical, traditional capex. If I need to try out some new software to see if it can improve my efficiency, that’s a lot lower cost than needing to either refit or junk and build out a new roboticized infrastructure. Again, some outlier firms are willing to invest ahead of the returns being easy to see, which leads to them outcompeting others. 

Chris Griswold: My sense is that productivity requires pressure, not just technological change per se, and we have been failing to apply the right kinds of pressure—whether from workers themselves, tight labor markets, or other forces that make productivity gains an imperative for firms. Big picture, you don’t actually need to improve productivity to maintain or grow profits; to Jack’s point, especially in capital-intensive industries, that’s often the hardest thing to do.

Cass: Have we seen the gains from digital technology accrue primarily within the IT and knowledge sectors and related roles or have they been diffused broadly? If they haven’t been diffused broadly, is that inherent to the kind of technology, or a failure of imagination or entrepreneurship?

Clark: When we zoom out and look at what has been going on in the economy in recent years, we see the emergence of extremely fast-moving technology-centered companies that outcompete less-technology-leveraged entities and that seem to be expanding quickly into many more economic niches. If you look at the stock market, the divergence between the “Magnificent 7” and the aggregate S&P 500 seems to support this view: extremely tech-centered companies have soaked up a ton of capital, and if you drill into the revenues of these companies you see they’ve been able to build non-trivial new sources of revenue as a consequence of their earlier tech investments and their agility (e.g., the emergence of cloud computing as a line of business, which has driven multiple tens of billions of dollars of incremental revenue for these firms).

Some of these gains do diffuse more broadly into the economy, but often without an obvious measurable productivity boost or economic benefit attributable to them. Google makes a ton of money, but billions of people benefit from free Google Maps and Gmail and Google Docs in ways that are inherently hard to measure. 

Sigelman: Revolutions like the Internet or AI are general purpose technologies—the kinds of inventions, like electricity, with pervasive rather than specific applications. At the advent of the Internet era, few people thought about its application as a new way to consume entertainment, to book travel, or to buy groceries. But a general purpose technology can still have acute effects on specific workers. While the Internet is easily understood as a general purpose technology, travel agents may not feel that way. (Interestingly, while the number of travel agents in America fell 70% between 2000 and 2020, there’s been a surprising rebound since, with the travel agent workforce now down only 30% since the start of the dot-com boom.)  

By its nature as an instrument of communication and knowledge retrieval, the Internet has been most transformative for knowledge-economy occupations. Whether you are in marketing or project management, much of what you do today wasn’t possible before the digital and Internet revolutions. While the Internet has had less impact on the productivity of plumbers, it has nonetheless been a meaningful instrument for advancing workers across the socioeconomic spectrum. And though the Internet midwifed entirely new professional occupations like social media strategist and data scientist, the biggest sets of jobs it created have been for warehouse workers and drivers.

Cass: I’m struck by the contrast between Jack’s view of technological change as highly firm-specific and Matt’s focus on the economy-wide exogenous effect. Are these two different stories? Or the same story at different time scales, with us caught in the middle right now?

Clark: These don’t feel inconsistent to me: more a question of ordering. Right now, younger people are adopting modern AI technology very broadly and deeply, just as they’ve done with prior technologies (phones, computers, the Internet, etc.). But I think it takes a while for this to show up at the industry level because general change takes a long, long time. Firms need to change their baseline assumptions about how they spend on technology, and that requires either extremely opinionated leaders with significant company control (which is why I think some tech change shows up as firm-specific initially) or a general body of workers with a shared way of using the technology. The latter takes longer than the top-down process, and it also requires those workers to be far enough along in their careers to actually change corporate decisionmaking—gigantic companies don’t tend to pivot their IT strategy based on what their new graduate intake class thinks.

Cass: What other lessons, if any, should we draw from the experience of the past generation for what we can expect to see from AI? Do the same lessons extend forward, are there comparable lessons but along different vectors, or is “this time different”?

Griswold: I’m not (yet) convinced that this time is different, because America’s basic economic philosophy has not appreciably changed. Pace of adoption is one thing, but what the technology is being adopted for strikes me as the truly important question. If it is adopted to boost profits by reducing the need for labor, that’s an entirely different scenario than pursuing productivity gains to increase output. Which of those two things we do depends less on the technology and more on the economic incentives in play. Do firms see an irresistible opportunity to raise productivity and supercharge innovation, or do they feel pressure from Wall Street to slash headcount as a self-evident demonstration of “efficiency”?

This implicates all manner of policy questions, of course, but the one that comes most immediately to mind is whether workers have a direct voice in the workplace. When they do, a firm tends to focus more on investment and productivity; when they don’t, it tends to offshore and downsize. So the most important transformation to learn from might not be a recent technological change so much as the change wrought by globalization. 

Sigelman: A key difference with AI is that it is more oriented toward replication of human tasks. Travel agents and call-center representatives notwithstanding, the Internet hasn’t been a major driver of displacement, even in the realm of low-level professional jobs for which the Internet facilitated an offshoring boom. By contrast, AI has the potential to drive worker displacement at a significantly greater scale. Demographic pressures and limited immigration will keep the talent for many manual and in-person jobs relatively scarce, so their wages should keep up better with productivity. By contrast, the supply of college graduates keeps rising at the same time that AI is automating many routine professional tasks, creating a glut that will hold down pay.

Clark: The recent Anthropic Economic Index report says two things: First, “this time is similar” in terms of effect—geographically concentrated and specialized deployments with a minor emphasis on automation are beginning to shift to geographically distributed, less specialized deployments, with a major emphasis on automation. But also, second, “this time is different” in that the first thing is happening very, very quickly. Adoption rates of AI seem to have hit in two years what the Internet reached in five. I’d expect this means the world is going to feel quite different quite soon, in the same way that social media changed the world more palpably and generally than the Internet did—in part because social media piggybacked on the Internet and smartphones for distribution. AI gets to piggyback on social media and the Internet and smartphones for distribution, so it moves a lot more quickly. 

As the adage goes, “quantity has a quality all of its own.” The fact that AI is becoming omnipresent very, very quickly may matter a lot for the shape and feel of the economy we exist in. One anecdotal example I’d give is how quickly I’ve seen social media (e.g., Facebook, Instagram, X) fill up with synthetic images and then, even more rapidly, synthetic video, and how some of this is driving commercial activity (merch, new media universes, etc.). We’ve gone from Hieronymus Bosch-style “Will Smith eating spaghetti” in 2023 to photorealistic and coherent “Will Smith eating spaghetti” in 2025!

Cass: Time does seem to be of the essence in assessing labor market effects and risks versus rewards. One of the lessons from globalization is that when change happens too quickly, even if the result is most “efficient,” the effects on those impacted and unable to adapt quickly can be devastating. Some economists have reflected that, even if globalization were ultimately a good idea, it should have been adopted more gradually to allow time for market adjustment. 

Do you think a similar case can apply to technology? For the most part it doesn’t seem to have been the case in the digital revolution, but if AI has the potential for very rapid adoption and large-scale displacement, would it ever be a good idea to use regulation to slow that?

Clark: The thing that feels most challenging about the rise of AI is not just the pace of diffusion (extraordinarily fast) but how it is diffusing in every direction at once. AI companies can exercise a small amount of choice in where they “push” the tech (e.g., companies are explicitly trying to make their systems better at doing formal math proofs) but this comes against a rising tide of generic capability improvement which is both broad and, increasingly, deep. AI companies can’t “steer” their AI systems into the economy because the systems aren’t specific tools for specific ecological niches, but rather adaptable platforms which can be end-user modified for an arbitrary range of tasks. 

This is in large part the inspiration for publishing the Economic Index and tying it to O*NET job classifications. This stuff is clearly moving quickly, and we want to generate the sort of telemetry that lets us see where it is showing up in the economy. So far, the results seem to support the simplest interpretation, which is that “AI diffuses where it is most economically valuable and where it is easy to take the outputs and convert them into economic activity.” So coding has moved very quickly because (a) it’s valuable, and (b) the native outputs of the AI systems (workable code) can be employed without much friction. Another major use case, writing, has similar properties. By comparison, we see less penetration in parts of the economy where there’s a larger translational element. In gardening, AI is less valuable, and even when the AI gives good advice, it doesn’t seamlessly do anything. A person needs to move some dirt around.

Sigelman: The challenge with trade and technological change, especially general purpose technologies, is that the benefits are wide and important, but relatively small for any individual, while the costs are severe for the workers affected. It upends their lives. A small number of workers and families bear significant costs so that we are all better off.  

Economists would argue that we need to pursue these changes. They would say that we should adopt them as rapidly as possible and pursue global trade that creates surpluses across the economy, and with these surpluses, we can compensate people for the losses they suffer. That is a nice academic theory, but it hasn’t worked out well in practice. Long-term wage insurance, rather than unemployment insurance, may be a better way to mitigate the blow to workers who have to take lower paying work after a transition, but none of this has a track record we can point to.  

Griswold: Matt is right that the “generate and then redistribute surpluses” approach has never really been implemented, but I don’t see how it could ever work anyway. It entirely ignores the role that productive work plays in a person’s sense of dignity and in a society’s civic health. This is why I am so skeptical of the more fantastical views that AI will lead to a post-work society. The dream of “fully automated luxury communism” is not new, and it’s as foolish now as it ever was. Work will always matter.

Clark: On the regulation question, we need telemetry to look at the parts of the economy that are getting hit and then we need to answer the question of regulation. My sense is that highly trained computer programmers (who may be facing economic disruption from coding tools) are going to be able to move laterally into other parts of the economy or significantly up-level themselves with AI. But it’s not clear to me the same is true of, for example, copywriters, where it may be harder to move laterally into other parts of the economy, and the benefits of up-leveling are reduced. The more data we have here the more we get to see the full picture, and then that’s going to allow us as a society to decide if it’s socially harmonious or discordant to slow (or speed!) the rate of adoption in different professions. 

My biggest worry regarding policy interventions beyond “just have better data so we know what is going on in the economy” is that government control over how and when industries adopt technology can often introduce regrettable frictions. The more you introduce policy-based ‘brakes’ into tech adoption, the more these tend to interlink with other regulations, creating a thicket of stuff that is subsequently very hard to unwind. Seemingly everyone regrets the regulatory infrastructure we’ve arrived at for the nuclear power industry. 

Sigelman: Practically speaking, calls to regulate, or slow down, AI are premised on an industrial-era understanding of automation that doesn’t apply well to the twenty-first century knowledge economy. Our mental model here is still that of John Henry versus the steam drill: an employer buys a machine and suddenly the worker is out of a job. But AI doesn’t automate away jobs. It automates tasks. Whether that opens time to take on more valuable tasks, whether new efficiencies unlock latent demand that actually grows opportunity, or whether employers decide to take the savings depends on a range of factors and plays out over time.

Now suppose for a moment that we could somehow slow this down. This isn’t 1946, when America had the only intact industrial economy in the world and could dictate terms of global commerce. It didn’t take long after the release of ChatGPT for China to launch DeepSeek. Slowing the implementation of the technology would simply put American industry at a disadvantage relative to foreign firms at a time of growing economic competition, with significant consequences for the American worker.

Cass: Well, instead of regulation, then, let’s talk about what Chris clearly wants to talk about: power. As part of this project, we interviewed Sean O’Brien, the Teamsters president, about how organized labor and representatives of workers should approach these issues. The Teamsters have been outspoken in their concerns about autonomous vehicles but, as Sean notes, the Teamsters logo is a horse, because the Teamsters got their start managing teams of horses. They know as well as anyone that technological change is inevitable and can be positive.

O’Brien argues that workers have the right to demand their concerns be taken into consideration, that they be included in the conversation, and that provision be made to ensure they gain from new technology, as conditions of their support for its adoption. What role do you see worker power playing in these discussions and when is it positive versus negative?

Griswold: Well, that certainly is how it worked in the postwar period: worker power and technological dynamism drove each other. The classic picture of a labor union just standing in the way of progress (an image that goes all the way back to the Luddites, I suppose) is embedded in the popular mind—and there’s truth there, to be sure. But it assumes a kind of zero-sum thinking. If workers are one factor of production among many, in competition with other inputs, it seems rational to play it that way. But that’s not what we saw during the mid-century period, broadly speaking. If both workers and capital stand to gain from productivity improvements, and are in productive tension with each other, we tend to get rising wages and dynamic innovation at the same time.

Sigelman: There’s an opportunity in this moment to reframe the interchange between workers and their employers and between unions and companies, from a negotiation over wages and benefits to a relationship that takes a broader view of worker wellbeing. As part of our American Opportunity Index project—a collaboration with the Schultz Family Foundation and Harvard Business School—we surveyed 1,000 workers. There’s no doubt that pay is what matters most to workers, and with good reason. But having real opportunities for advancement matters almost as much. Workers are smart: in making decisions about which job to take or whether to stay in a role, they consider not only how much they’ll make right away, but also how much more money they can make over time. Anxiety over AI-driven displacement provides an opportunity for employers to have an open conversation with their workers. Instead of pretending that everything’s alright until the pink slips arrive, employers should be flagging to their workers where they anticipate displacement and then working with them to map the paths to new opportunities—and to provide the training that’s required to get there.

To Chris’s broader point about power, underlying the question seems to be a set of assumptions about wealth concentrating with the holders of capital—in this case, the data and platforms that drive AI. But in the knowledge economy where AI impacts are likely to be felt most strongly, knowledge itself is a potent form of capital. And, apart from the specific legal considerations of IP, knowledge capital generally rests with those who have accrued it. In my recent paper with Joe Fuller and Mike Fenlon at Harvard, we find that AI is likely to cause what we call an expertise upheaval, with experience and skill proficiency becoming increasingly valuable in a wide array of fields. We can already see meaningful empirical evidence of this, with employers shifting preferences away from entry-level hiring and toward people with greater experience. Those who hold that knowledge capital will hold a growing level of power relative to their employers, creating a seller’s market.

Clark: The key question here, as with so many things touching on AI, is about time. If AI moves incredibly quickly and starts to diffuse very quickly into certain ecological niches, then workers will rightly want some notion of AI-firm partnership in deployment and government partnership in the ensuing transition. Some of this comes down to understandable questions of agency. No one wants to feel like the fate of their livelihood is something happening to them due to an exogenous shock. Everyone would prefer to feel like the fate of their livelihood is in their own hands. 

This suggests to me that AI firms have a responsibility to produce the best possible data on where their systems are going into the economy. If things are moving very quickly, then you may want to ask AI firms questions about the extent to which they can tilt the mix between augmentation and automation in how the systems show up in the world. (Though “short timeline AI people” might say this isn’t a hugely viable dial to turn, or at least not for long.) But ultimately you need to ask the government about targeted social welfare interventions, though those are prone to issues just like the regulatory interventions we were discussing.

Griswold: Jack makes a great point about time, which argues for more creative mechanisms of worker voice that reduce the communication lag between investors, management, and labor while increasing trust among them. I’ve been impressed by research suggesting that workers on corporate boards push their firms toward larger and smarter investments, for example. Research on employee ownership models shows the same thing: workers take an interest in finding ways to improve productivity. Smarter workforce development is also a critical question here. Our current workforce development system is completely misaligned with the pace of technology adoption and diffusion. Embedding workforce development more effectively in the tech sector will be critical to helping workers ride these waves.

Cass: We’ve been talking mostly about how workers might be affected negatively and how to insulate them from or compensate them for those impacts. The flipside is that technological progress is probably the best way to enhance productivity over time and, in the “upside case,” continued diffusion of digital technology and introduction of AI could accrue to workers’ benefit. Jack, you just mentioned asking AI firms about tilting the mix. Should we want them to tilt that mix as far as possible? Under what conditions do you think that would be desirable or plausible and what can business leaders and/or policymakers be doing to promote them?

Clark: We’ve gone back and forth inside the company about how effectively or how far you can tilt this mix. If we zoom out to the broader economy, the bull case is one where basically everyone becomes radically more productive and therefore drives a ton of economic activity, but the consequence is that employability comes to depend heavily on how effectively people can describe and delegate tasks. Generally, this would be pretty great, but figuring out how to gain the relevant skills is actually very subtle and, today, typically the consequence of having some number of years of experience in the workforce. Teaching new entrants to the workforce how to do this is somewhat challenging. The bear case is that we’ll see a bunch of individuals and a bunch of firms across the economy become “apex predators,” moving extraordinarily quickly as economic actors because they’ve effectively utilized AI and outcompeting people and firms who haven’t done this.

My “reasonable centrist” hope is that we can find a way to walk this tightrope: we intentionally speed up adoption in industries that have poor productivity and therefore fewer economic opportunities today, and we keep an extremely close eye on industries likely to be extremely rapid tech adopters and therefore to face employment disruption driven more by efficiency gains—already-optimized firms eating other firms.

An extremely load-bearing bit of data is: “How seamlessly, and by how much, does a dollar spent on AI turn into revenue?” If the answer is that for every dollar I spend on AI I get $5 of revenue, and there’s some schlep to get it, involving lots of people doing stuff, then we’re going to be in a relatively normal economy, albeit perhaps a slightly faster-moving one. If the answer is (and this is what lots of AI researchers, who are not economists, expect) that for every dollar I spend on AI I get $100 in revenue, and almost no humans are required to get it, then you’re in a totally changed economic world. That’s when we might want to think about either taxing automation or subsidizing displaced workers.

Griswold: It matters a lot how much control we have over how much AI is augmenting versus automating/replacing workers. And the “we” here includes the AI labs, the firms deploying the technology, the workers, and policymakers. There is nothing inevitable about how this plays out, in my mind. We started with the question about lessons from history, and that seems like a big one—that while we certainly can’t stop technological change, nor should we try, how it lands on the public really is up to us. 

Sigelman: I am not sure I agree that automation and augmentation are opposing forces. In our analysis of labor market data, we’re finding that the jobs that exhibit the greatest automation effects are also the ones that are experiencing the greatest augmentation effects. Or put differently, there will be specific ways in which AI makes us both more efficient and more effective at the same time. How this plays out in any given role will depend on the specific use cases toward which AI is deployed. That’s not as much in the control of the AI labs as it is in the control of employers in determining which use cases to pursue.  

Griswold: This takes me back to the prior point about workforce training as well. Whether AI creates new useful tasks as fast as it eliminates others does seem like a matter of how it’s deployed. Whether workers can take up those new tasks with commensurate swiftness will depend in part on whether we can fix our broken workforce system. Plenty of good policy options are already out there (including, immodestly, American Compass’s) that clear out much of the detritus and red tape of our current system and shorten the distance between employers, trainees, and the rapidly evolving skills the market needs.

Sigelman: There are good indications of what this might look like. First, we need an accessible infrastructure. Community colleges are the logical candidate, but their funding today biases them heavily toward producing transfer degrees instead of providing the training workers need. We should also structure training programs to build on the skills workers already have instead of training them up from scratch. We have seen great promise with leveraging skill adjacencies to create shorter and more effective training paths in work we have been doing together with the Greater Houston Partnership and Valencia College in Orlando. Workforce training also needs to provide ongoing support instead of looking at initial placement as the end goal. The reality is that for those displaced, the immediate need is for what I would call a lifeboat job. That often involves a step down. It’s there that people get stuck. We need to ensure the support that helps people recover from there.

Oren Cass
Oren Cass is chief economist at American Compass.
@oren_cass
Jack Clark
Jack Clark is the co-founder of Anthropic.
Chris Griswold
Chris Griswold is the policy director at American Compass.
@Chris_Griz
Matt Sigelman
Matt Sigelman is president of the Burning Glass Institute, a leading labor market analytics organization.