Not long ago, artificial intelligence was pitched as a quiet upgrade to daily life. It could draft emails, speed up research, and boost productivity with minimal disruption. The prevailing mood was optimism. That confidence has begun to erode.
AI is moving faster than expected, taking on white-collar tasks once thought insulated from automation. It now writes code, drafts legal documents, and handles customer service at scale. Alongside those gains has come unease, including fears about job losses, questions about control, and a growing sense that the technology is advancing faster than the rules meant to govern it.
That shift frames a warning from Anthropic, one of the leading U.S. artificial intelligence firms, and its chief executive, Dario Amodei. In a lengthy essay published this month, Amodei argues that humanity is entering a dangerous phase of AI development without the institutional maturity to handle it.
The problem, he writes, is not a single dramatic failure but a slow, systemic lag, as labor markets, political systems, and democratic norms struggle to keep pace with rapidly advancing technology. “We are entering a rite of passage,” Amodei writes, warning that “almost unimaginable power” is arriving before society is ready to wield it.
His warning comes as public concern over AI has intensified. In a 2025 Reuters/Ipsos poll, 71 percent of Americans said they were worried that AI could permanently put people out of work. A separate Marist survey found that two-thirds believe AI will eliminate more jobs than it creates.
Other polling shows similar anxiety among workers. About half of U.S. employees say they are concerned about how AI is affecting the workplace, with younger workers especially likely to believe the technology could reduce opportunities in their fields.
“Humanity needs to wake up,” Amodei writes.
A ‘Country of Geniuses’
Anthropic, founded by former OpenAI researchers and known for positioning itself as a safety-focused AI company, sits near the frontier of the industry’s rapid acceleration. Its flagship model, Claude, is already used by millions, and the company’s leadership has been among the most outspoken about the risks posed by increasingly autonomous systems.
At the center of Amodei’s argument is a thought experiment. He asks readers to imagine the sudden emergence of what he calls a “country of geniuses in a data center”: millions of AI systems operating simultaneously, each more capable than top human experts across fields such as science, engineering, and economics, and able to act autonomously at machine speed.
Such a system, he argues, would represent a concentration of intelligence unlike anything in human history, and one that existing institutions are poorly equipped to absorb.
That intelligence may soon move beyond screens. Researchers studying the convergence of AI and robotics say the shift into the physical economy is already underway. Adam Dorr, director of research at the think tank RethinkX, has warned that advanced AI systems are close to being deployed across transportation, manufacturing, and logistics.
“When people start seeing fully autonomous cars on the streets, no driver, no steering wheel, that’s when the public will understand how fast this is moving,” Dorr told Newsweek. “And once AI is in vehicles, it won’t stop there.”
Amodei argues that if current trends hold, the implications could be sweeping.
“If the exponential continues,” he writes, “it cannot possibly be more than a few years before AI is better than humans at essentially everything.”
Concerns about AI’s broader societal effects extend beyond employment. A September Pew Research survey found Americans divided on whether AI will improve skills and relationships, with many saying it could weaken creativity and social bonds. Fifty-seven percent rated AI’s societal risks as high, compared with 25 percent who rated its benefits as high.
Jobs, Displacement, and Speed
One of Amodei’s most immediate concerns is labor disruption. He predicts AI could displace as many as half of all entry-level white-collar jobs within one to five years, not through sudden mass layoffs, but by steadily absorbing the tasks that have traditionally defined junior roles.
Early signs suggest that process is already underway. A study published by MIT in late 2025 found that AI is capable of performing about 11.7 percent of all U.S. labor tasks, a shift that could save companies an estimated $1.2 trillion in wages across sectors such as finance, health care, and professional services.
“AI isn’t a substitute for specific human jobs but rather a general labor substitute for humans,” Amodei writes, warning that the breadth and speed of the transition could overwhelm labor markets before institutions have time to adapt.
“The transition could easily be faster than the ability of labor markets, education systems, and political institutions to adapt,” he adds, predicting “a period of severe economic and social instability.”
Amodei argues this moment differs from past technological shifts not just in scale but in speed. Previous industrial revolutions unfolded over decades, allowing societies time to adjust. AI, by contrast, is advancing on a compressed timeline measured in years.
That concern is echoed by labor researchers. Dorr has said policymakers often underestimate how quickly automation reshapes industries.
“It doesn’t take 50 or 100 years for industries to change,” he said. “It takes 15 to 20 years, sometimes even less.”
Surveys of U.S. employers suggest disruption may already be beginning. More than a third of companies using AI say they have started replacing some human workers. In 2025 alone, AI was linked to nearly 55,000 layoffs in the United States, according to a CNBC report citing the consulting firm Challenger, Gray & Christmas.
Power and Concentration
Beyond jobs, Amodei warns that powerful AI could accelerate the concentration of economic and political power.
“Democracy is ultimately backstopped by the idea that the population as a whole is necessary for the operation of the economy,” he writes. “If that economic leverage goes away, then the implicit social contract of democracy may stop working.”
Because advanced AI systems require vast computing resources, Amodei argues control could consolidate among a small number of companies and governments capable of building and operating massive data centers. That concentration, he says, could undermine democratic accountability and reshape global power balances.
Yet Amodei rejects the idea that catastrophe is inevitable and cautions against what he calls “doomerism.” At the same time, he argues that relying on voluntary measures or market forces alone will not be enough.
“The technology itself doesn’t care about what is fashionable,” he writes, warning that political attention has swung away from AI risk even as capabilities continue to accelerate.
Rather than calling for a halt to development, Amodei advocates transparency requirements for frontier AI companies, targeted regulation tied to concrete evidence of risk, and export controls aimed at slowing the spread of advanced capabilities to authoritarian regimes.
“The most constructive thing we can do today,” he writes, “is advocate for limited rules while we learn whether or not there is evidence to support stronger ones.”