The term “agentic AI,” or “artificial intelligence agents,” is rapidly becoming commonplace, so much so that those invested in the technology see a need to draw distinctions.
In a series of blog posts published last week, partners at venture capital firm Menlo Ventures (which has bankrolled artificial intelligence startups such as Anthropic) define “the next wave of agents” and how those agents surpass the ones introduced so far.
Tomorrow’s agents, they write, have four distinct capabilities.
“Fully autonomous agents are defined by four elements that, in combination, ladder up to full agentic capability: reasoning, external memory, execution, and planning,” write the authors.
“To be clear, the fully autonomous agents of tomorrow might possess all four building blocks, but today’s LLM apps and agents do not,” they declare.
The authors, Tim Tully, Joff Redfern, Deedy Das, and Derek Xiao, explore in their first blog post what it means for something to be “agentic.” The software, they write, must ultimately gain greater and greater autonomy in selecting between possible steps to take to solve a problem.
“Agents emerge when you place the LLM in the control flow of your application and let it dynamically decide which actions to take, which tools to use, and how to interpret and respond to inputs,” the authors write.
A conventional large language model can have access to “tools,” such as external programs that let the LLM perform a task. Anthropic has already done this with its Tool Use feature, and OpenAI has something similar.
However, the authors explain that invoking a tool merely gives an LLM the means to solve a problem, not the control to decide how the problem should be solved.
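To make that distinction concrete, here is a minimal sketch of plain tool use, where the application, not the model, hard-codes when each tool runs. The function names and routing logic are illustrative assumptions, not any vendor's real API.

```python
# Hypothetical sketch: a "tool" plus a fixed control flow.
# All names here are illustrative, not a real vendor API.

def calculator(expression: str) -> str:
    """A 'tool': evaluates a simple arithmetic expression."""
    return str(eval(expression, {"__builtins__": {}}))

def answer_math_question(question: str) -> str:
    # The control flow is predefined by the application: every question
    # is routed straight to the calculator tool. No model decides anything,
    # so by the authors' definition this is not "agentic."
    expression = question.removesuffix("?").split("is")[-1].strip()
    return calculator(expression)

print(answer_math_question("What is 2 + 3?"))
```

The point of the sketch is that the branching logic lives entirely in the application code; the tool only extends what the system can do, not how it decides.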
As the authors write, “Tool use is powerful, but by itself, [it] cannot be considered ‘agentic.’ The logical control flows remain pre-defined by the application.” Rather, the agent itself must have broad authority to choose which tool to use; the decision logic must belong to the model.
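By contrast, a sketch of model-driven tool selection might look like the following. The “LLM” here is a stand-in stub (a labeled assumption); in a real system, a model call would return the tool choice.

```python
# Hypothetical sketch: the model, not the application, picks the tool.
# fake_llm_choose_tool is a stub standing in for a real model call.

TOOLS = {
    "calculator": lambda q: str(eval(q, {"__builtins__": {}})),
    "web_search": lambda q: f"search results for {q!r}",  # placeholder tool
}

def fake_llm_choose_tool(task: str) -> str:
    """Stub for a model call that decides which tool fits the task."""
    return "calculator" if any(c.isdigit() for c in task) else "web_search"

def agent_step(task: str) -> str:
    # The decision logic lives with the model: the application simply
    # executes whichever tool the model selected for this input.
    tool_name = fake_llm_choose_tool(task)
    return TOOLS[tool_name](task)
```

Here the application's control flow is no longer a fixed pipeline; it defers the which-tool decision to the model at runtime, which is the property the authors call “agentic.”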
Some existing software comes closer to being a true agent, the authors explain. One example is a “decisioning agent,” which uses a large language model to pick from a suite of rules; the chosen rule in turn determines which tool should be used. They cite healthcare software startup Anterior as an example of such a decisioning system.
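That intermediate design can be sketched as follows. Everything here is a hypothetical illustration of the pattern the authors describe, not Anterior's actual system: the model picks a rule, and the rule, not the model, fixes which tool runs.

```python
# Hypothetical sketch of a "decisioning agent": the model selects a rule
# from a predefined suite, and the rule maps deterministically to a tool.
# Rule names, tools, and the stub model call are all illustrative.

RULES = {
    "numeric_claim": "calculator",
    "policy_lookup": "document_search",
}

TOOLS = {
    "calculator": lambda q: str(eval(q, {"__builtins__": {}})),
    "document_search": lambda q: f"matched policy text for {q!r}",
}

def fake_llm_pick_rule(case: str) -> str:
    """Stub standing in for a model call that classifies the case."""
    return "numeric_claim" if any(c.isdigit() for c in case) else "policy_lookup"

def decisioning_agent(case: str) -> str:
    rule = fake_llm_pick_rule(case)   # the model chooses a rule...
    tool = TOOLS[RULES[rule]]         # ...the rule determines the tool
    return tool(case)
```

The model's autonomy is real but bounded: it decides among rules, while the rule-to-tool mapping stays under the application's control.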