Large Language Model (LLM) agent workflows – where collections of (mainly) LLM-backed agents act under different roles to collectively complete tasks – have received a lot of recent attention, with demos showing agents automatically completing entire coding tasks or building spreadsheets. Agent workflows hint at a future where LLMs can automate end-to-end data workflows that require expensive human hours today. At Numbers Station, we've been actively working in this space since our founding, laying the groundwork for these agents before they became the latest buzzword. Based on this work and our progress with enterprise customers to date, we believe that agent workflows for analytics – or as we call them, analytics agents – represent a fundamental paradigm shift in how enterprises will interact with data. In this post we lay out how we got here, the problems with business intelligence today, why we're excited about Numbers Station's platform for analytics agents, and what we believe the future holds.
How we got here
Despite the hype, LLM agents are a natural progression when you look at the evolving capabilities of LLMs. When LLMs first showed emergent capabilities (pre-ChatGPT), they were one-shot models that required careful prompting from a user to perform a single task. You could call these agents, but that would stretch the definition, because these early LLMs weren't very interactive.
Now, cut to the ChatGPT era, where models can engage and interact with humans to complete tasks. These are early LLM agents, and we saw an explosion of LLM agent platforms (AutoGPT, LangChain, and Hugging Face, to name a few) in which a single agent performs a task for a user.
The latest evolution was the realization that LLMs perform better with specialization: instead of a single agent working with a user, multiple agents can work together to perform tasks.¹ With proper planning and coordination, these agent groups are capable of completing sophisticated tasks.
That brings us to today and to the concept we are introducing: analytics agents. Analytics agents are hyper-specialized AI entities with defined roles within the analytics ecosystem. While users (often analysts) can be considered agents today, the true power lies in the collaborative effort of multiple agents seamlessly addressing complex analytical tasks. By breaking tasks into modular components and leveraging the specialization of individual agents, organizations can enhance efficiency and performance in their analytical work.
Business intelligence is outdated: All talk, no action
The first thing that comes to mind when most people think of data analytics and LLM agents is translating natural language instructions into SQL, i.e. text-to-SQL. More often than not, this is done by strapping AI onto legacy business intelligence (BI) tools. The problem is that BI is not as useful as it used to be in today's fast-paced world of AI. Users are no longer satisfied with tools that just answer their questions; they want actionable insights, and they wanted them yesterday.
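To make the pattern concrete, here is a minimal sketch of what a single-shot text-to-SQL call typically looks like. The run_llm callable, the schema string, and the stubbed model response are all illustrative placeholders, not any specific product's API.

```python
from typing import Callable

def text_to_sql(question: str, schema: str, run_llm: Callable[[str], str]) -> str:
    """Translate a natural-language question into SQL with a single LLM call.

    `run_llm` is any function that takes a prompt string and returns the
    model's text completion (e.g. a thin wrapper around your LLM provider).
    """
    prompt = (
        "You are a SQL assistant. Given the schema below, write one SQL query "
        "that answers the user's question. Return only the SQL.\n\n"
        f"Schema:\n{schema}\n\n"
        f"Question: {question}\nSQL:"
    )
    return run_llm(prompt).strip()

# Illustrative usage with a hypothetical schema and a stubbed model call.
schema = "orders(order_id INT, customer_id INT, amount FLOAT, created_at DATE)"
stub_llm = lambda prompt: "SELECT customer_id, SUM(amount) FROM orders GROUP BY customer_id;"
print(text_to_sql("What is the total spend per customer?", schema, stub_llm))
```

The point of the post is that this question-answering step alone is only a small slice of what an analyst actually needs to do.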
Furthermore, enterprise purchasers are inundated with "modern data stack" tools, a problem now compounded by every BI tool rushing to integrate AI into its product, leaving purchasers confronted with a plethora of fragmented options. The solution is to adopt a new, unified, AI-native platform rather than holding onto outdated tools superficially labeled with "AI" branding.
While BI will always have its place, today it is more often than not a drag on progress. Answers to questions are only part of what users need to act and achieve their goals. Ultimately, people want to get things done, and analytics agents make that happen.
Enter the Numbers Station platform for analytics agents
Moving past BI, it's clear that LLM analytics agents can revolutionize data workflows. For example, consider a simple scenario where an analyst needs to perform root cause analysis. The current workflow may include reading a report detailing the problem, exploring and plotting data, and filing a ticket with the appropriate team suggesting a fix. Even if each tool added LLM chat features – i.e. a glue-LLMs-onto-the-existing-stack approach – the analyst would still carry a high mental load in understanding the results and coordinating each step of the solution.
Now enter a world where the entire stack is rebuilt around LLM analytics agents. A report agent would analyze the report and send its findings to a data exploration agent. The data exploration agent would in turn coordinate with and send results to a ticket agent. The entire workflow could be encapsulated and, with feedback and guidance from the analyst, semi-automated with LLMs.
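As a rough illustration of that hand-off (not Numbers Station's actual implementation), the agents can be modeled as objects that take a message in and pass a new message on. The class names, message fields, and findings below are hypothetical stand-ins.

```python
from dataclasses import dataclass

@dataclass
class Message:
    sender: str
    content: str

class ReportAgent:
    def handle(self, msg: Message) -> Message:
        # Summarize the incident report and flag what looks wrong.
        return Message("report_agent", f"Parsed report: {msg.content}; revenue dipped in one region")

class DataExplorationAgent:
    def handle(self, msg: Message) -> Message:
        # Query and plot the relevant data; here we just pretend we found the cause.
        return Message("data_agent", f"{msg.content} | suspected cause: nightly pricing job failed")

class TicketAgent:
    def handle(self, msg: Message) -> Message:
        # File a ticket with the owning team, suggesting a fix.
        return Message("ticket_agent", f"Filed ticket: {msg.content}")

# The analyst stays in the loop: each hop can be surfaced for feedback before continuing.
msg = Message("analyst", "Weekly revenue report shows an unexplained drop")
for agent in (ReportAgent(), DataExplorationAgent(), TicketAgent()):
    msg = agent.handle(msg)
    print(msg.sender, "->", msg.content)
```

In a real system each handle step would be backed by an LLM plus the relevant tools (report parser, warehouse connection, ticketing API) rather than hard-coded strings.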
Since our start, we at Numbers Station have been building a versatile analytics platform atop LLMs for workflows such as this. Our architecture is designed to be a general platform for building analytics agents, including the ones you've already seen for data enrichment and conversational analytics. It represents a fundamental, AI-native platform shift that offers organizations a future-proof way forward in the era of AI.
A clean, extensible architecture
Building data stacks around analytics agents also has advantages on the engineering side. At the core, the LLM agent abstraction can be built as a chat interface: every agent can send and receive messages and act in response to them.
This simple interface removes any monolithic control flow, since the agents coordinate among themselves – often with the help of a planner agent. That makes maintenance and expansion significantly easier for engineers, and it leaves the door open for injecting user agents that team up with LLM agents to complete tasks.
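A minimal sketch of that chat-style interface, assuming only what is described above: every agent exposes the same receive-a-message, return-a-message method, and a planner decides who acts next. The class names and the keyword-based routing are placeholders for what would be LLM-driven in practice.

```python
from typing import Protocol

class Agent(Protocol):
    name: str
    def receive(self, message: str) -> str: ...

class EchoAgent:
    def __init__(self, name: str):
        self.name = name
    def receive(self, message: str) -> str:
        # Stand-in for an LLM-backed agent acting on the message.
        return f"{self.name} handled: {message}"

class Planner:
    """Chooses which agent should act on the current message."""
    def __init__(self, agents: dict[str, Agent]):
        self.agents = agents
    def route(self, message: str) -> Agent:
        # A real planner would itself be an LLM call; here we route on a keyword.
        key = "data" if "explore" in message else "report"
        return self.agents[key]

agents = {"report": EchoAgent("report_agent"), "data": EchoAgent("data_agent")}
planner = Planner(agents)
request = "please explore the sales table"
print(planner.route(request).receive(request))  # data_agent handled: please explore the sales table
```

Because a human user can implement the same receive interface, user agents plug into the conversation the same way LLM agents do.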
We are still actively building out and refining this "micro agent" architecture. While LLM agents today can be brittle and difficult to debug, we believe this approach is a new way to support an AI-native analytics stack.
A new path forward
At Numbers Station, we saw the potential of agents early on. Our platform, born before "multi-agent analytics workflows" was a thing, is all about making this vision a reality. By teaming users up with analytics agents, we're ushering in a new era of data-driven culture and workflows for our customers.
We’re thrilled by the strides we’ve made in developing analytics agents alongside several enterprise partners. And we are excited to share a quick look into our progress with you in the video below.
As you can see, analytics agents today still require human feedback and interaction to complete actions properly. But we believe the real moonshot goal of LLM agents, the complete automation of tasks, will soon be within reach. Imagine doing this with just a simple training phase that, once complete, lets the agents act fully autonomously to complete tasks. Enterprises could automate away entire data workflows.
We’re excited about this future. Forget about slapping AI onto outdated tools; the future belongs to AI-native companies, who roll up their sleeves and enable enterprises to get sh*t done. At Numbers Station, we’re at the forefront, empowering organizations to unlock the true power of their data through analytics agents. Are you ready to join the future? Let’s shape the future of analytics together, one agent at a time.
¹ This is the same paradigm as early work in distributed/cooperative AI, such as Readings in Distributed Artificial Intelligence, Open Problems in Cooperative AI, and Multi-Agent Reinforcement Learning: A Selective Overview of Theories and Algorithms.