A useful lesson in AI capabilities, and the misunderstandings thereof
Last week, for the first time, but presumably not the last, I received an email asking me to comment on a policy proposal developed by an AI model. The proposal—in this instance for a mind-numbingly complex tariff scheme that required precise calculations of industry-by-industry, country-by-country market distortions and distribution of all revenue as offsetting subsidies—is beside the point. That someone thought the exercise a useful one, though, is fascinating.
Just as an example, here is one particularly, er, incisive bit of analysis:
Offering a tariff adjustment for shipping costs sends a diplomatic signal. It shows that the policy is data-driven, rational, and even-handed. This may help mitigate resentment or accusations of outright protectionism from China. While the Chinese government may still oppose the tariffs, Chinese citizens and businesses might view the policy as more legitimate if it’s structured to neutralize only artificial competitive edges rather than exploiting geographic realities to place additional burdens on foreign goods.
If only we make the policy technocratic enough, Chinese citizens will appreciate its fairness instead of resenting the economic harm it does them, and thus the Chinese Communist Party will be less likely to retaliate, even though, rationally, it should. The entire conversation smacks of the same imitation of technical rigor without any grasp of real-world political economy. Frankly, my attempts to experiment with the technology always run into this same shortcoming.
What this experience made me realize, though, is that the people most obsessed with AI’s potential, whether for good or evil, seem not to be aware that the distinction between technical proficiency and human judgment exists, or matters. I flipped over to the OpenAI website to see how it was advertising its latest “o1” model. “These models can reason through complex tasks and solve harder problems than previous models in science, coding, and math,” it announced. A short video shows how the model “solves a complex logic puzzle” (an abstract and arbitrary math problem). The smaller o1-mini, according to OpenAI, “excels at STEM, especially math and coding—nearly matching the performance of OpenAI o1 on evaluation benchmarks such as AIME and Codeforces.”
In other words, it does math and writes code. And don’t get me wrong, that’s cool. Is it a bigger leap forward than the computer, or MATLAB, or the modern programming language? Maybe, sure, I don’t know, we’ll see. But it’s important to understand that the people claiming a forthcoming, broad-based replacement of human judgment and reasoning have a very good track record of not understanding what human judgment and reasoning really are.
Recommended Reading
Hmm, Well, Yes, Expanding Government-Funded Addiction Treatment Does Count As GDP Growth
And more from this week…
Checking Corporate Power with FTC Chair Lina Khan
FTC Chair Lina Khan joins Oren for a wide-ranging conversation about corporate power and how best to rein it in.
Trump’s Agencies After Chevron
Overruling Chevron limits agencies’ ability to decide questions of law, but not questions of policy.