If 2024 was the year of large language models (LLMs), then 2025 looks like the year of AI “agents”. These are quasi-intelligent systems that harness LLMs to go beyond their usual tricks of generating plausible text or responding to prompts. The idea is that an agent can be given a high-level – possibly even vague – goal and break it down into a series of actionable steps. Once it “understands” the goal, it can devise a plan to achieve it, much as a human would.
OpenAI’s chief financial officer, Sarah Friar, recently explained it thus to the Financial Times: “It could be a researcher, a helpful assistant for everyday people, working moms like me. In 2025, we will see the first very successful agents deployed that help people in their day to day.” Or it’s like having a digital assistant “that doesn’t just respond to your instructions but is able to learn, adapt, and perhaps most importantly, take meaningful actions to solve problems on your behalf”. In other words, Miss Moneypenny on steroids.
So why are these automated Moneypennys suddenly seen as the next big thing? Could it have something to do with the fact that the tech industry has spent trillions of dollars building colossal LLMs with – as yet – no plausible return on that investment in sight? That’s not to say that LLMs are useless: for people whose work involves language they can be really helpful, and computer programmers find them very useful. But for many industries they still look, at the moment, like a solution in search of a problem.
The arrival of AI agents may change that. Using LLMs as the basic building blocks of virtual agents that can efficiently carry out many of the complex task sequences that constitute “work” in organisations everywhere might prove irresistible. Or so the tech industry thinks. And so, of course, does McKinsey, the mega-consultancy that provides the subliminal hymn sheet from which CEOs invariably sing. Agentic AI, burbles McKinsey, “is moving from thought to action” as “AI-enabled ‘agents’ that use foundation models to execute complex, multistep workflows across a digital world” are adopted.
If that is indeed what is going to happen, then we may need to rethink our assumptions about how AI will change the world. At the moment we are mostly obsessed with what the technology will do to individuals or to humanity (or both). But if McKinsey & Co are right, then the more profound longer-term impact might come through the way AI agents change corporations – which, after all, are really machines for managing complexity and turning information into decisions.
The political scientist Henry Farrell, an astute observer of these things, has sussed this possibility. LLMs, he argues, “are engines for summarising and making useful vast amounts of information”. Since information is the fuel on which large corporations run, they will adopt any technology that provides a more intelligent and contextual way of handling information – as opposed to the mere data that they currently process. So, says Farrell, corporations “will deploy LLMs in ways that seem dull and technical, except to those immediately implicated for better or worse, but that are actually important. Big organisations shape our lives! As they change, so will our lives change, in a myriad of unexciting seeming but significant ways.”
At one point in his essay, Farrell likens this “dull and technical” transformative impact of LLMs to the way the humble spreadsheet reshaped large organisations. This provoked a genteel outburst from Dan Davies, an economist and former stock analyst whose book The Unaccountability Machine was one of the nicest surprises of this year. He points out that spreadsheets “made a whole new style of working possible for the financial industry in two ways”. First, they enabled the creation of much bigger and more detailed financial models, and therefore a different way of budgeting, compiling business plans, assessing investment options, etc. And second, the technology enabled one to work iteratively. “Rather than thinking about what assumptions made the most business sense, then sitting down to project them, Excel [Microsoft’s spreadsheet product] encouraged you to just set out the forecasts, then sit around tweaking the assumptions up and down until you got an answer you could live with. Or, for that matter, an answer that your boss could live with.”
The moral of that story is clear. The spreadsheet was a revolutionary technology when it first appeared in 1978, just as ChatGPT was in 2022. But now it’s a routine, integral part of organisational life. The advent of AI “agents” built from GPT-like models looks like following a similar pattern. In turn, the organisations that absorb them will evolve too. And then the world may eventually rediscover that famous adage attributed to Marshall McLuhan’s colleague John Culkin: “We shape our tools and thereafter our tools shape us.”
What I’ve been reading
Talking economics
Transcript of a fascinating interview with the remarkable economist Ha-Joon Chang, on economics, pluralism and democracy.
AI or A-nay
“The phoney comforts of AI scepticism” is a vigorous essay by Casey Newton on the two “camps” in the arguments about AI.
What Trump did next
“I have a cunning plan...” Charlie Stross’s blogpost is a sketch for a truly dystopian story about the aftermath of Trump’s inauguration.