Tell Americans that AI might take their jobs in two years, and they’ll shrug. Tell them it might happen in 36 years, and they’ll shrug a bit less. Either way, they’re not losing sleep over it.
That’s the surprising takeaway from a new study on how people respond to predictions about artificial intelligence upending the workforce. Even when researchers warned survey participants that transformative AI could arrive as early as 2026, potentially automating jobs from nursing to software engineering, most people didn’t change their minds about when automation would actually affect them or what the government should do about it.
Political scientists Anil Menon of the University of California, Merced, and Baobao Zhang of Syracuse University surveyed 2,440 U.S. adults in March 2024, presenting them with varying timelines for when human-level AI might emerge. Some read forecasts predicting breakthroughs by 2026. Others saw predictions stretching to 2030 or 2060. A control group received no timeline at all.
The researchers expected that shorter timelines would light a fire under people, spurring demands for retraining programs or universal basic income. Instead, they found what they call stubborn beliefs. People acknowledged automation might come a bit sooner than they’d thought, but their support for policy responses barely budged.
The Credibility Problem
Interestingly, the longest timeline, 2060, actually generated more worry about job loss within the next decade than the 2026 forecast did. The researchers suspect that predictions of imminent AI takeover struck many people as less believable than more distant, measured forecasts. It’s one thing to hear that your job might vanish in 36 years. It’s another to be told it could happen within two years, especially when you look around and see the same workplace you’ve always known.
The study arrives at a moment when tech leaders are making increasingly bold claims about AI’s trajectory. Some predict human-level artificial intelligence within this decade, while critics argue those forecasts wildly overestimate what current systems can actually do. Generative AI systems like ChatGPT can write essays and produce images, but they still can’t reliably perform many tasks humans handle without thinking.
“These results suggest that Americans’ beliefs about automation risks are stubborn. Even when told that human-level AI could arrive within just a few years, people don’t dramatically revise their expectations or demand new policies.”
Participants in the study read vignettes describing experts predicting that advances in machine learning and robotics could replace workers across a sweeping range of professions: software engineers, legal clerks, teachers, nurses. After reading, they estimated when their own jobs and others’ jobs would be automated, rated their worry about job loss, and indicated support for various policy responses, from limits on automation to increased AI research funding.
Why the Disconnect?
The findings challenge a core assumption in public policy debates: that making threats feel more immediate will mobilize people to act. The research draws on construal level theory, which holds that psychologically nearer events are perceived more concretely and should therefore feel more urgent. In this case, temporal proximity didn’t translate into urgency.
Menon and Zhang note several limitations. Their single survey can’t track how individuals’ views might shift over months or years of exposure to AI developments. They also didn’t test whether the credibility of the forecasters or the specific trade-offs of automation, like economic gains versus job losses, might influence attitudes differently than timeline information alone.
Still, the study offers a useful snapshot of public sentiment at a pivotal moment. Policymakers hoping to gauge when citizens will support interventions like retraining programs or universal income proposals may find that timing warnings alone won’t do the trick. The researchers suggest future work could use multi-wave panels to track attitude changes or examine reactions to specific AI systems rather than abstract forecasts.
“The public’s expectations about automation appear remarkably stable. Understanding why they are so resistant to change is crucial for anticipating how societies will navigate the labor disruptions of the AI era.”
For now, Americans seem to be taking a wait-and-see approach, even as the AI systems making headlines grow more capable. Whether that reflects informed skepticism or dangerous complacency remains an open question.
The Journal of Politics, DOI: 10.1086/739200