Grace Huckins, a science journalist based in San Francisco, recently wrote a piece for MIT Technology Review titled, “Are we ready to hand AI agents the keys?”
Her article opens with a description of the “Flash Crash” of 2010, in which nearly a trillion dollars evaporated from the US stock market within 20 minutes, at the time the fastest decline in history.
Though prices rebounded quickly, regulators attributed much of the responsibility for this unprecedented event to high-frequency trading algorithms acting autonomously.
These automated systems did not trigger the crash, but they accelerated it dramatically: they reacted instantaneously to falling prices by selling off assets, which drove prices down even faster.
Using today’s terminology, we would call these systems “agentic,” since they were acting autonomously, which is exactly what they were designed to do.
Huckins uses the example to illustrate the substantial risks associated with agentic AI, which only seem to grow more real and concerning by the day.
As LLMs get bigger, faster, and more proficient, AI agents are becoming increasingly sophisticated, capable not just of routine tasks but also of navigating complex, multi-step processes with minimal human oversight.
But as Huckins’s article emphasizes, giving these agents more autonomy also means relinquishing control, potentially opening doors to unintended consequences.
Given the speed at which AI agents are advancing, the question I keep coming back to is this: Are we truly ready for widespread adoption?
I first wrote about AI agents about a year ago, highlighting their emerging capabilities and their potential to transform how businesses work.
At that time, agentic systems were just beginning to show promise, leveraging the enhanced reasoning abilities of LLMs to perform tasks previously requiring extensive human intervention.
However, even then it was clear that their adoption, if it happened at all, would come with both significant upside and substantial downside.
Truthfully, the potential advantages of agentic AI systems are even clearer today than they were a year ago.
Businesses across sectors could see substantial productivity gains by offloading repetitive and time-consuming tasks to autonomous agents.
Imagine AI systems that seamlessly manage supply chains, optimize marketing campaigns in real time, or streamline complex workflows such as travel planning and customer relationship management.
These capabilities promise not just efficiency but also the freedom for human talent to focus on higher-value, strategic activities.
Yet, the red flags are everywhere.
From cybersecurity vulnerabilities like prompt injections to broader societal and ethical challenges—including transparency, accountability, and the potential concentration of power—the risks are piling up. (Read Huckins’s piece for a few more examples that will blow your mind.)
To put it simply: the inherent unpredictability of autonomous agents is a real problem.
Ultimately, while the excitement around AI agents is understandable, we find ourselves in a crucial “wait and see” moment.
It’s not a question of if AI agents will become deeply embedded in our businesses and lives, but when—and under what conditions.
Leaders must approach this moment thoughtfully, experimenting responsibly and preparing carefully for the substantial shifts ahead.