The recent launch of Agentforce marks a pivotal moment in orienting Salesforce and our customers’ businesses toward an AI-empowered future. In this emerging landscape, augmented by a network of AI agents, the role of humans at work becomes more empowered, interesting, and creative than ever before. We have now reached the Third Wave of AI, which builds on the power of predictive and generative AI. From talent recruitment to supercharged healthcare, we’ll now see AI work with humans across sectors to fill a variety of needs at scale—faster and in many cases more accurately than humans alone ever could. Agentic AI will take some getting used to, but will improve many aspects of our work: productivity, efficiency, strategic decision-making, and, in my firm belief, overall job satisfaction.
Welcome to the dawn of the Agentic AI Era. Starting now, almost any business—from individual contributors to executives—can orchestrate not just human workforces, but digital labor as well. We’ll see trust and accountability as the bedrock for an evolution unfolding in three stages: specialized agents mastering discrete tasks, multi-agent systems collaborating seamlessly, and enterprise-level orchestration rewriting how businesses operate.
Salesforce’s AI Research’s role is to shape the future of enterprise AI. Here’s our vision for how agentic systems will advance, and what will be needed from humans to help them along the way.
The Evolution of AI Agents: From Rules to Reasoning
The progression of AI agents mirrors the development of machine learning itself. Traditional rule-based systems like Robotic Process Automation (RPA) were capable of executing precise sequences but stumbled when faced with variations. These early implementations required substantial technical overhead and consulting services, creating a high barrier to entry for many organizations.
The last few decades have witnessed incremental and breakthrough advances that have transformed how machines process information—evolving from rigid automation to more flexible, adaptive, and far more efficient learning systems. Agents built with modern platforms like Agentforce can understand context, adapt to new situations, and handle broad task spectrums. But as I’ve written about before, what’s even more exciting is where we’re headed: self-adaptive agents enabled by multi-agent reasoning—agents that can learn from their environment, improve through experience, and collaborate both with humans and with agents from our enterprise customers, partners, vendors, and even the personalized AI assistants of consumers, which are becoming a bigger part of their lives every day. We are only at the beginning of a three-stage future for Enterprise AI agents.
Three Stages of Enterprise AI Agents
Just as music evolved from single-note melodies to complex symphonies, AI agents are progressing from solo performers to orchestrated ensembles. Each stage builds upon the last, creating richer, more nuanced interactions in the enterprise environment.
Stage 1: “Monophonic” AI – The Specialized Contributor
In the first stage of agentic evolution, specialized agents excel at defined tasks within particular industries, bringing unprecedented efficiency and accuracy to routine but crucial business operations. They represent the foundation of enterprise AI adoption, handling discrete tasks with a level of consistency and speed that transforms departmental workflows. They are also masterful at delivering the benefits of AI’s advancements to date: predictive next best actions and product recommendations highly personalized to each customer’s preferences and behaviors, as well as generative guidance, marketing language, and correspondence of the highest caliber for customers and for service and sales reps, humans and bots alike.
In commerce, for example, they revolutionize inventory and account management. Indeed, agents don’t just handle basic inventory checks; they proactively monitor stock levels across multiple locations, predict seasonal demands, and generate real-time account summaries that flag unusual patterns or opportunities. Tasks that once required hours of human analysis can now be completed in seconds, with greater accuracy and depth, yielding optimized, personalized, and almost “magical” experiences for the retail customer.
Service operations see similar transformations. Beyond basic billing summarization, these agents analyze customer interaction patterns, automatically categorize and prioritize service requests, and generate predictive insights about customer needs. They spot trends in customer behavior that might indicate satisfaction issues or expansion opportunities, providing service teams with actionable intelligence rather than raw data. The result is customer service that feels effortless, ambient, and almost invisible to the end customer: their issue is often resolved before they even know there is one.
In financial services, meanwhile, agents redefine customer service efficiency. When processing dispute acknowledgments, they analyze transaction histories, identify patterns of potentially fraudulent activity, and automatically trigger relevant security protocols. For financial planning, they generate comprehensive analyses by correlating market data, individual client histories, and broad economic indicators. When used correctly, these agents will afford businesses unprecedented back-office efficiency, and will inform next-generation retail banking, investment guidance, and wealth management experiences for consumers.
Stage 2: “Polyphonic” AI – The Seamless Collaborators
This stage introduces orchestrated collaboration between specialized agents within the same company, working together toward a common business goal. In this case, an “orchestrator agent” coordinates multiple specialists working in concert, similar to how a restaurant’s general manager orchestrates talented hosts, servers, managers, chefs, prep cooks, and expediters to work together to earn that coveted Michelin star.
What does polyphonic AI look like for a complex business operation? Consider a customer service scenario where multiple agents work invisibly together to support a loyal retail customer’s request ticket to exchange sizes of an off-season SKU.
- A front-line service agent processes the initial customer inquiry
- An inventory specialist checks product availability across locations
- A logistics agent calculates shipping options and timelines
- A billing expert reviews account history and payment options
Most importantly, the orchestrator agent coordinates all these inputs into a coherent, effective, on-brand, and contextually relevant response for the human at the helm to review, refine, and share with the customer.
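To make the pattern concrete, here is a minimal sketch of how an orchestrator might fan a request out to specialist agents and assemble a draft for human review. The class names, the size-exchange scenario, and the canned responses are illustrative assumptions, not Agentforce code.

```python
from dataclasses import dataclass


@dataclass
class AgentResponse:
    """One specialist agent's contribution to the overall answer."""
    agent_name: str
    content: str


class SpecialistAgent:
    """Base class for a narrowly scoped agent (service, inventory, logistics, billing)."""
    name = "specialist"

    def handle(self, request: dict) -> AgentResponse:
        raise NotImplementedError


class InventoryAgent(SpecialistAgent):
    name = "inventory"

    def handle(self, request: dict) -> AgentResponse:
        # A real agent would query live stock levels across locations here.
        return AgentResponse(self.name, f"size {request['new_size']} of {request['sku']} in stock at 2 warehouses")


class LogisticsAgent(SpecialistAgent):
    name = "logistics"

    def handle(self, request: dict) -> AgentResponse:
        return AgentResponse(self.name, "exchange can ship within 3 business days at no cost")


class OrchestratorAgent:
    """Coordinates specialists and assembles a draft for the human at the helm."""

    def __init__(self, specialists: list[SpecialistAgent]):
        self.specialists = specialists

    def draft_response(self, request: dict) -> str:
        findings = [agent.handle(request) for agent in self.specialists]
        summary = "; ".join(f"{f.agent_name}: {f.content}" for f in findings)
        # The draft goes to a human to review, refine, and send.
        return f"Proposed reply for ticket {request['ticket_id']}: {summary}"


orchestrator = OrchestratorAgent([InventoryAgent(), LogisticsAgent()])
print(orchestrator.draft_response({"ticket_id": "T-1042", "sku": "SW-221", "new_size": "M"}))
```

In a production system the canned strings would be replaced by live lookups, and the orchestrator would also handle retries, permissions, and escalation to a human when any specialist fails.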
When implemented well, this multi-agent approach, with an “orchestrator agent” serving its “orchestrator human,” delivers powerful AI-driven advantages: The system achieves enhanced reliability by leveraging specialized, trusted agents focused on specific domains, while reducing hallucinations since each agent operates within a narrower scope. This distributed approach also strengthens security by isolating sensitive data handling to specific agents. Perhaps most importantly, the ecosystem offers seamless scalability—organizations can continuously add new specialized agents to expand capabilities as needs evolve.
Stage 3: “Ensemble” AI – The Enterprise Orchestrators
The final stage—the ideal stage—adds sophisticated agent-to-agent (A2A) interactions across organizational boundaries, creating entirely new patterns of business relationships. Beyond traditional B2B and B2C models, we see the emergence of B2A (business-to-agent) and even B2A2C interactions where AI agents serve as intermediaries for work and transactions.
Consider a simple car rental scenario: A customer’s personal AI agent negotiates with a rental company’s business AI agents. The customer’s agent optimizes for the best price and value, while the rental company’s agent aims to maximize revenue through add-on services. But the business agent must balance aggressive sales tactics against the risk of losing the deal to competitors. These interactions can be governed by sophisticated “game theory” principles, requiring advanced negotiation skills and protocols, risk management under uncertainty, verification mechanisms to ensure trust along the way, not to mention the ability to deftly resolve conflict.
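As a toy illustration of these dynamics, the sketch below pits a customer’s agent against a business’s agent in a simple concession loop. The opening offers, concession rates, and round limit are arbitrary assumptions chosen to show the idea; real agent negotiation would involve richer strategies, verification, and trust signals.

```python
from typing import Optional


def negotiate(customer_max: float, business_floor: float, rounds: int = 5) -> Optional[float]:
    """Toy offer/counteroffer loop between a customer's agent and a business's agent.

    The customer's agent opens below its true limit and concedes upward; the
    business's agent opens above its true floor and concedes downward. A deal
    closes when the offers cross; otherwise the business risks losing the
    customer to a competitor.
    """
    customer_offer = customer_max * 0.7   # open below the customer's real limit
    business_ask = business_floor * 1.4   # open above the business's real floor

    for _ in range(rounds):
        if business_ask <= customer_offer:
            return round((business_ask + customer_offer) / 2, 2)  # split the difference
        customer_offer = min(customer_max, customer_offer * 1.10)  # concede upward
        business_ask = max(business_floor, business_ask * 0.93)    # concede downward

    return None  # no deal reached: the customer's agent walks away


print(negotiate(customer_max=60.0, business_floor=45.0))  # -> a price both sides can accept
```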
Now, imagine this scaling to ever-more complex enterprise processes we see across industries: from supply chain optimization to customer experience orchestration. Whether you’re a consumer or enterprise employee, Ensemble AI will mean that you’ll have an assistant to perform complex orchestration and meaningful collaboration per your personalized needs and wishes. And in order to achieve this, we as humans have some work ahead of us.
Non-Negotiable Imperatives: Trust and Accountability
As we deploy increasingly sophisticated agent systems, two fundamental principles must guide every decision: trust and accountability.
Building Trust
Trust in the era of agentic AI extends far beyond technical safeguards against toxicity, bias, and hallucinations. Recent Salesforce research shows 61% of customers believe AI developments make trustworthiness more critical than ever—and they’re right. We’re entering territory that demands deep organizational confidence in the symbiotic relationship between humans and AI.
This confidence builds on four essential foundations.
First is the bedrock of accuracy and boundaries — AI agents must operate within well-defined parameters while maintaining precision. Beyond preventing errors, these guardrails will create predictable, trusted partnerships that amplify collective intelligence.
Just as crucial is an agent’s self-awareness. Like any valued colleague, AI agents must acknowledge their limitations and know when to engage human expertise. This requires sophisticated handoff protocols that ensure seamless collaboration between artificial and human intelligence. For example, our AI Research team explores training methods to teach AI agents to flag areas of uncertainty and seek assistance when confronted with unrecognized challenges. Trained correctly, AI will know when not to attempt a guess but rather to come to a human and ask for help.
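One simple way to picture this kind of handoff: an agent answers on its own only when its estimated confidence clears a threshold, and otherwise escalates with context. The confidence field, threshold value, and messages below are illustrative assumptions, not a description of how Agentforce or our research prototypes actually score uncertainty.

```python
from dataclasses import dataclass


@dataclass
class AgentAnswer:
    text: str
    confidence: float  # the agent's own estimate that its answer is correct (0.0 to 1.0)


CONFIDENCE_THRESHOLD = 0.85  # below this, the agent asks for help instead of guessing


def respond_or_escalate(answer: AgentAnswer) -> str:
    """Return the agent's answer only when it is confident; otherwise hand off to a human."""
    if answer.confidence >= CONFIDENCE_THRESHOLD:
        return answer.text
    # The handoff surfaces what the agent does and does not know, so the human
    # expert can pick up the case with full context instead of starting over.
    return (
        "I'm not confident enough to resolve this on my own "
        f"(confidence={answer.confidence:.2f}); escalating to a human expert."
    )


print(respond_or_escalate(AgentAnswer("Refund approved under the 30-day policy.", confidence=0.93)))
print(respond_or_escalate(AgentAnswer("Unsure whether the warranty applies.", confidence=0.41)))
```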
For multi-agent systems, we will also need engagement protocols that are globally accepted and adopted. Think of it like this: banks have global protocols, or rules that systematize the transfer of funds between individuals, businesses, and countries. Road traffic has protocols that keep drivers coordinated, governed by our universal traffic light color system. The Internet has “IP” – our global Internet Protocol that routes and addresses packets of data so they travel across networks and arrive at the correct destination.
So too will agents of the future need protocols that are agreed upon and implemented universally, so that orchestrator agents can communicate, negotiate, and collaborate with other businesses’ agents safely, ethically, and for the mutual benefit of both parties. This “ensemble” level of engagement will need to be fast, efficient, and fair. Without such protocols in place, we’re at risk of agent-to-agent “spam” at best, and fraud and other dangers at worst.
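By analogy to IP packets, one can imagine a minimal message envelope that every agent-to-agent exchange would carry. The field names, the agent:// addressing scheme, and the draft version string below are purely hypothetical; an actual standard would also need authentication, signatures, and rate limiting to prevent the spam and fraud risks noted above.

```python
import json
import uuid
from datetime import datetime, timezone


def make_a2a_message(sender: str, recipient: str, intent: str, payload: dict) -> str:
    """Wrap a request in a common envelope so any compliant agent can parse it."""
    envelope = {
        "message_id": str(uuid.uuid4()),       # unique, so replies and audit trails can reference it
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "sender": sender,                       # would be a verifiable, org-signed identity in practice
        "recipient": recipient,
        "intent": intent,                       # e.g. "quote_request", "counter_offer", "confirm"
        "payload": payload,
        "protocol_version": "0.1-draft",        # placeholder; a real standard would be formally versioned
    }
    return json.dumps(envelope, indent=2)


print(make_a2a_message(
    sender="agent://consumer.example/personal-assistant",
    recipient="agent://rentals.example/booking-agent",
    intent="quote_request",
    payload={"vehicle_class": "compact", "pickup_date": "2025-07-01", "days": 3},
))
```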
Finally, as our AI agent workforce grows, so must our security measures. As with any technology, humans with malicious intentions can also wield AI, designing and training “AI Worms” for the purposes of data breaches or to attempt to hijack other AI agents to disclose private customer data. Enhanced protection, privacy controls, and continuous monitoring mustn’t be seen as mere technical requirements—they’re essential to maintaining the trust that transforms AI from a tool we use into a partner our businesses will grow with.
Ensuring Accountability
As organizations deploy AI agents that make thousands of decisions per second, we must establish clear frameworks for responsibility and oversight to ensure we have a plan for if and when things go wrong. This requires a comprehensive approach. Below is a starting point for C-Suite teams overseeing an agent implementation effort.
- Clear chains of responsibility for agent decisions. When an AI agent makes a consequential decision, there should be no ambiguity about who’s accountable. This may even mean establishing new roles like “AI Operations Officers” who have both the authority to oversee agent deployments and the responsibility when issues arise.
- Robust systems for detecting and correcting incomplete information, biases, hallucinations, or toxic outputs—before they impact your business. This goes beyond basic safety checks to include continuous monitoring of agent decisions, real-time intervention capabilities, and systematic audit trails. Just one example of this is our research team’s recent advancements in retrieval-augmented generation (RAG), dramatically improving how our AI systems access and verify information. These innovations enable rapid evaluation and course-correction—ensuring that AI systems deliver accurate, reliable results that humans and businesses can trust (a simplified sketch of this kind of retrieval check follows this list).
- Defined processes for human oversight and intervention that balance autonomy with control. We need to move past the simple notion of “human in the loop” to develop sophisticated frameworks for when and how humans should intervene in agent decisions. As my colleague Paula Goldman says, it’s more about “human-at-the-helm.” This means creating guidelines and org-wide standard ways of communicating with agents as well as clear escalation pathways that maximize agent autonomy for routine tasks while keeping human judgment central to high-stakes decisions.
- Structured approaches for making things right when mistakes occur. This includes not just technical rollback procedures, but also clear protocols for customer communication, remediation, and systematic improvements to prevent similar issues.
- New legal and compliance frameworks that explicitly address AI agent accountability. The current regulatory landscape wasn’t designed for autonomous AI agents making business decisions. We need to work proactively with regulators to develop appropriate governance structures.
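As referenced in the second bullet above, here is a deliberately simplified sketch of a retrieval-based check: before a draft answer ships, each claim is compared against a trusted document store and anything unsupported is flagged for human review. The keyword-overlap retrieval and thresholds are stand-ins for real vector search and entailment checks; this is not Salesforce’s RAG implementation.

```python
def retrieve(query: str, documents: list[str], top_k: int = 3) -> list[str]:
    """Naive keyword-overlap retrieval standing in for a real vector search."""
    query_terms = set(query.lower().split())
    scored = sorted(documents, key=lambda d: len(query_terms & set(d.lower().split())), reverse=True)
    return scored[:top_k]


def is_supported(claim: str, documents: list[str], min_overlap: int = 3) -> bool:
    """Treat a claim as supported only if some retrieved passage shares enough terms."""
    claim_terms = set(claim.lower().split())
    return any(
        len(claim_terms & set(doc.lower().split())) >= min_overlap
        for doc in retrieve(claim, documents)
    )


knowledge_base = [
    "Orders placed in the last 30 days are eligible for free size exchanges",
    "Exchanges for off-season items ship from the central warehouse",
]

draft_claims = [
    "Your order qualifies for a free size exchange within 30 days",
    "We will also refund your original shipping fee",  # unsupported: flag for human review
]

for claim in draft_claims:
    status = "supported" if is_supported(claim, knowledge_base) else "NEEDS HUMAN REVIEW"
    print(f"{status}: {claim}")
```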
Looking Ahead: The Scientific Method Meets Enterprise Innovation
The path to deploying truly interactive AI systems demands executive foresight: we must apply the same stringent scientific standards that produced these advances to their real-world implementation. Success will be decided not just by the number of AI agents deployed or the speed of implementation, but by how thoughtfully enterprise leaders and technologists orchestrate their integration with existing workforce protocols, processes, and preferences.
As we advance our understanding of agent collaboration, shared learning, and human-AI interaction, we’re discovering principles backed by reproducible research and empirical evidence. Drawing on Salesforce’s decades of enterprise CRM success and expertise in business logic and optimization, we’ve infused Agentforce with deployment strategies that ensure our systems are not just powerful, but trustworthy and accountable in meeting the needs of our customers’ businesses—and the humans who run them. The future isn’t about humans versus AI – it’s about humans with AI working in concert, each using their unique strengths. Agents will become—and with the launch of Agentforce, indeed already are—a true workforce multiplier, enabling teams to tackle previously impossible tasks. The time to begin this transformation is now, and the scientific method will light our way forward: through careful hypothesis testing, meticulous measurement, and continuous refinement based on evidence. Just as every breakthrough experiment begins with a hypothesis, every successful AI transformation begins with a vision—and ends with validated truth.
Resources
- Salesforce AI Research website
- Follow us on X: @SFResearch