In the race to turn artificial intelligence from promise into profit, a new word has entered the vocabulary: “agentic” AI. What does it mean? ChatGPT itself answers the question. The key agentic traits, which ChatGPT seems happy to admit will render it obsolete, are as follows (a rough code sketch of the loop they imply appears after the list):

- Goal-directed behavior – pursuing objectives rather than individual tasks.
- Autonomy – the ability to act without continuous human input.
- Planning and reasoning – the ability to form and execute multi-step plans.
- Environment interaction – taking actions that affect the digital or physical world (e.g., sending emails, trading stocks, controlling robots).
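Stitched together, those four traits describe a simple control loop: hold a goal, plan a step, act on the world, check progress, repeat. Here is a minimal sketch in Python of what such a loop might look like. The Goal and Agent classes and every method name here are illustrative assumptions, not any vendor’s actual product.

```python
from dataclasses import dataclass, field

@dataclass
class Goal:
    description: str          # goal-directed behavior: an objective, not a single task
    done: bool = False

@dataclass
class Agent:
    goal: Goal
    history: list = field(default_factory=list)
    max_steps: int = 10       # autonomy, bounded by a step budget rather than by prompts

    def plan_next_step(self) -> str:
        # Planning and reasoning: in a real system, a model call would pick
        # the next action from the goal plus the history so far.
        return f"step {len(self.history) + 1} toward: {self.goal.description}"

    def execute(self, step: str) -> str:
        # Environment interaction: in production this would send the email,
        # call the booking API, etc. Here we simulate success on the third step.
        return "venue booked" if len(self.history) >= 2 else f"executed {step}"

    def run(self) -> None:
        # The agentic loop: no human input between steps.
        for _ in range(self.max_steps):
            step = self.plan_next_step()
            self.history.append(self.execute(step))
            if self.history[-1] == "venue booked":
                self.goal.done = True
                break

agent = Agent(Goal("book a venue for Friday's offsite"))
agent.run()
print(agent.history, agent.goal.done)
```

In a deployed system, plan_next_step would be a language-model invocation and execute would touch real calendars or payment systems; the goal–plan–act–check structure is the point.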
While the profit margins currently being generated by AI-fueled stocks still seem to justify their lofty valuations, fundamental questions remain about whether these massive investments will truly lift productivity and, in turn, sustain those margins over time. Three years after ChatGPT sparked the generative AI frenzy, the magic is still more promise than payoff, a reality that validates investors’ caution toward whatever new twist comes next. Still, that skepticism hasn’t dimmed the search for AI’s true use cases, which could sustain the momentum after the initial euphoria fades.

Agentic AI, some hope, could be that true use case. An AI personal assistant might not only schedule meetings but automatically negotiate times, book venues, and send follow-ups, all without any prompting. It’s the kind of AI that workers dread and executives dream of: it threatens jobs while lifting productivity. A plethora of such autonomous AI agents are at various stages of development. Notably, Walmart’s Sparky, currently an AI-powered shopping assistant that merely responds to shoppers’ queries, is on course to start making decisions on customers’ behalf autonomously rather than relying on successive prompts, according to analyst Robert Ohmes of Bank of America Securities. If this works as hoped, AI will identify what consumers want, weigh choices, strike deals, and finalize transactions, all while acting autonomously through a chain of reasoning steps that mirrors human intent.

To quote McKinsey & Co.’s Katharina Schumacher and Roger Roberts, “This isn’t just an evolution of e-commerce. It’s a rethinking of shopping itself in which the boundaries between platforms, services, and experiences give way to an integrated intent-driven flow.” McKinsey estimates that by 2030, the US business-to-consumer retail market alone could see up to $1 trillion in orchestrated revenue from agentic commerce, with global projections reaching as high as $3 trillion to $5 trillion. That makes this sound like the “killer app” that has so far eluded impatient investors.

The new SoFi Agentic AI exchange-traded fund, which tracks companies that generate at least 30% of their revenue from agentic AI, offers a glimpse of how investors are positioning themselves to capture the next wave of the technology. As the ETF, whose holdings include Tesla Inc., Palantir Technologies Inc., and Salesforce Inc., was launched only last month, there’s little yet to glean from its performance. To some extent, this could be an exercise in betting on the “not-so-obvious” players, a lesson of the dot-com era, when many of the biggest names of the time eventually faded into obscurity.

In June, a BofA survey found that 64% of organizations expect to pursue agentic AI initiatives this year. Most were only in the very early stages of deployment, with 53% at the exploratory and 25% at the pilot phase, and only 6% in production as of early 2025. Customer service, marketing, sales, and software development are expected to be the first significant job functions to adopt the technology.

Doubts remain. In BofA’s most recent Agentic AI Handbook, Brad Sills explains that investors have learned to be skeptical after the relatively underwhelming adoption and monetization of generative AI, leaving them cautious about forecasts of agentic revenue.
Still, he argues that AI agents could be the catalyst that allows companies to make money out of AI by unlocking “sustainable, measurable, and material workforce productivity improvements.” Sills’ view echoes recent Nvidia research, which argues that small language models, rather than the large language models like ChatGPT that set off the AI frenzy three years ago, could form the backbone of the next generation of autonomous systems. That’s mainly because the massive outlays on computing capacity and power generation over the past three years have increased the attraction of smaller models. They are more energy-efficient, compact, and easier to deploy, and they require far less computational power, which means less capex: companies don’t have to fork out quite so much money on Nvidia’s chips. The old belief that “bigger is better” is giving way to a new logic: Sometimes, smaller is smarter. Both model types will have their place, but it’s becoming increasingly clear that an all-knowing model is overkill for tasks that demand narrow, specialized expertise, particularly when those tasks can be entrusted to systems that don’t even need to be asked first.
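To make that “smaller is smarter” logic concrete, here is a minimal sketch of the kind of routing an agent pipeline might use: narrow, repetitive steps go to a cheap specialized model, while open-ended requests escalate to a large general one. The model names, the cost figures, and the call_model stub are illustrative assumptions, not Nvidia’s actual design.

```python
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    cost_per_1k_tokens: float  # hypothetical serving cost, in dollars

SMALL = Model("slm-3b-invoice-extractor", 0.0002)   # specialized, compact
LARGE = Model("llm-frontier-175b", 0.03)            # general-purpose

# Task types the pipeline has seen before, with known narrow scope.
NARROW_TASKS = {"extract_invoice_fields", "classify_ticket", "format_reply"}

def route(task_type: str) -> Model:
    # Routing rule: known, repetitive agent steps go to the small model;
    # anything novel or open-ended escalates to the large one.
    return SMALL if task_type in NARROW_TASKS else LARGE

def call_model(model: Model, prompt: str) -> str:
    # Stub standing in for a real inference call.
    return f"[{model.name}] response to: {prompt[:40]}"

if __name__ == "__main__":
    for task, prompt in [
        ("classify_ticket", "Customer says the app crashes on login."),
        ("draft_strategy_memo", "Summarize our agentic AI options for the board."),
    ]:
        model = route(task)
        print(f"{task} -> {model.name} (~${model.cost_per_1k_tokens}/1k tokens)")
        print(call_model(model, prompt))
```

Since most of an agent’s steps are the repetitive kind, routing them to the small model is where the capex savings in Sills’ and Nvidia’s argument would come from.

— Richard Abbey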