Workforces are overwhelmed. Employees struggle to juggle low-value tasks, leaving little time for strategic, high-impact work. Sales teams can’t pursue all leads, and service teams either leave customers on hold too long or rely on outdated phone systems that frustrate rather than help. Marketers also struggle to scale personalized communications effectively.
Autonomous AI agents offer a solution. These advanced systems can handle complex workflows, make decisions, and interact with humans naturally — without needing constant oversight. Unlike traditional chatbots, they’re equipped to tackle more sophisticated tasks in areas like customer service and business operations.
At Dreamforce 2024, Salesforce CEO Marc Benioff called autonomous AI agents the “third wave” of AI, following predictive AI (the first wave) and generative AI (the second wave).
“This is what AI was meant to be,” he said while announcing the company’s new Agentforce suite of autonomous AI agents for augmenting employees and handling tasks in service, sales, marketing, and commerce.
To dive deeper on AI agents, we spoke with IDC General Manager and Group VP Ritu Jyoti, an expert on AI’s rise and its impact on businesses. Here’s her take on this technology.
Q. Autonomous AI agents have arrived. How will they benefit companies?
Three benefits come to mind. First, they offer significant improvements in efficiency and productivity by streamlining workflows and automating repetitive tasks, allowing employees to focus on more strategic and creative aspects of their work. We have been talking about the benefits of AI for a long time, but this whole idea of AI acting autonomously to augment work takes everything to a whole new level.

Second, by automating tasks and processes, these agents can really help businesses reduce labor and operational expenses. Knowledge workers no longer have to do those same old, monotonous, repetitive tasks. Businesses’ decision-making is also enhanced because these agents can analyze vast amounts of data much more rapidly and provide insights that lead to better decisions – at scale.

Finally, agents help with competitive advantage. Companies that adopt agents can stay ahead of the curve by using them to optimize operations and deliver better customer and employee experiences.
Companies that adopt agents can stay ahead of the curve by using them to optimize operations and deliver better customer and employee experiences.
Ritu Jyoti, IDC Analyst
Q. What needs to happen for autonomous AI agents to be widely adopted?
There are a lot of steps that need to occur for agents to achieve mass adoption, including organizations getting their data in order to power AI effectively as well as technology readiness. We are making significant advancements, and the technology is getting better day by day. But we have to make sure that it is mature, reliable, and secure before fully deploying it. Responsible AI adoption will also be needed for people to fully trust it. Cost is another issue. Today, generative AI is very costly to train, and those costs will need to come down. In addition, workforce transformation will be needed to help offset concerns about potential AI-related job loss. And finally, there are regulatory and ethical considerations to address. Every organization is going to need a responsible AI framework. All of these things must come together for agents to achieve mass adoption. In my opinion, this will be at least a five-year journey. But that journey has begun.
Q. When talking about Gen AI, most execs think of chatbots. What should they know about the sophistication difference between bots and emerging AI agents?
There is a lot of confusion in the market about this right now, so it is helpful to understand the difference between the two. Chatbots are not trained on large datasets. Their decision trees are based on sets of predetermined rules and questions that help guide conversations between users and a chatbot. Fully autonomous AI agents, on the other hand, can independently take action. They can adapt and go beyond simple conversations. They can reason and ground their actions in relevant knowledge. And they can engage in multi-agent collaboration, where overarching tasks are broken down into pieces that are delegated to individual specialist agents, which is similar to the way teams of human beings work.
Keep in mind, generative AI chatbots or assistants don’t necessarily go away. They will continue to be useful for simple customer interactions. But for more complex scenarios, fully autonomous AI agents will be the better choice.
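The architectural difference Jyoti describes can be sketched in code. Everything below is illustrative — the rules, the actions, and the trivial planner are made up for this example and are not any real product’s API. The chatbot matches fixed, predetermined rules; the agent breaks a goal into steps and takes action independently.

```python
# --- Rule-based chatbot: a decision tree of predetermined rules ---
CHATBOT_RULES = {
    "refund": "Refunds are processed within 5 business days.",
    "hours": "We are open 9am-5pm, Monday to Friday.",
}

def chatbot_reply(message: str) -> str:
    """Match the message against fixed rules; anything else gets a default."""
    for keyword, reply in CHATBOT_RULES.items():
        if keyword in message.lower():
            return reply
    return "Sorry, I didn't understand that."

# --- Autonomous agent: decomposes a goal and acts without prompting ---
def lookup_order(order_id: str) -> str:
    return f"Order {order_id} shipped yesterday."   # stand-in for a real system call

def issue_refund(order_id: str) -> str:
    return f"Refund issued for order {order_id}."   # stand-in for a real system call

ACTIONS = {"lookup": lookup_order, "refund": issue_refund}

def agent_handle(goal: str, order_id: str) -> list[str]:
    """Break the goal into steps, execute each action, and report the results."""
    # Trivial stand-in for the reasoning/planning step a real agent performs.
    steps = ["lookup", "refund"] if "refund" in goal else ["lookup"]
    return [ACTIONS[step](order_id) for step in steps]
```

The chatbot can only answer what its rules anticipate; the agent chains actions toward a goal, which is the qualitative jump described above.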
Q. You’ve talked about AI agents maturing from keeping humans in the loop to having them on the loop. What do you mean by that?
The concept of moving from in the loop to on the loop in AI refers to the evolving role of humans in oversight and interaction. Think about a very simple example: With in the loop, the human is very actively involved in the decision-making process. They provide inputs, they review the outputs, and they make final decisions. So think about an AI copilot. You ask it a question and you’re in the loop because it is giving you recommendations. You have control over its actions, which is crucial for tasks requiring high levels of judgment. Conversely, with on the loop, we take a backseat and empower the AI agents to do the work. We sit there and monitor. We do the checks and balances. On the loop provides minimal levels of human oversight. The AI handles routine and complex tasks independently, freeing the human to focus on more strategic activities.
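The two oversight modes can be sketched as control flow. This is a hypothetical illustration (the function names and callbacks are assumptions, not any real agent framework): in the loop, a human gates every decision; on the loop, the agent acts on its own and the human only intervenes on anomalies.

```python
def run_in_the_loop(task, recommend, human_approves):
    """In the loop: the AI only recommends; a human makes the final decision."""
    recommendation = recommend(task)
    if human_approves(recommendation):    # human reviews every output
        return recommendation
    return "escalated to human"

def run_on_the_loop(tasks, execute, looks_anomalous):
    """On the loop: the AI acts independently; the human just monitors."""
    results = []
    for task in tasks:
        outcome = execute(task)           # no approval step before acting
        if looks_anomalous(outcome):      # human steps in only on exceptions
            outcome = "flagged for human review"
        results.append(outcome)
    return results
```

The structural difference is where the human sits: inside the per-decision path, or outside it as a monitor with checks and balances.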
Q. What will it take for humans to trust the autonomous AI agents working beside them?
I wish there was a secret sauce, but there isn’t. I think it will happen gradually, and there will be two aspects driving it.
First, tech vendors must give customers tools and technologies that provide transparency. By this I mean mechanisms for knowing how AI decisions were made and proof that the AI is operating within the context of a company’s ethical guidelines. So for example, if there is a company that cares about the European Union AI Act, their legal and compliance people will want to know that the AI isn’t violating those regulations or the company’s ethical principles and policies.
The second aspect has to do with the end users themselves. They need to know there’s a level of human oversight involved with AI agents to feel comfortable with them. There also has to be continuous improvement in the policies and principles that organizations implement to ensure employees the AI is being used responsibly. Two years back, that wasn’t the case. Clients would tell me responsible AI was more of an afterthought for them. Now, it’s different. Organizations are becoming more proactive. Everyone I speak to mentions their AI Governance Committee and how they’re implementing responsible AI frameworks and policies. I believe organizations and workers will increasingly trust and hand over more decision-making authority to AI agents as these kinds of issues are addressed.
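The transparency mechanism described above — knowing how an AI decision was made and whether it stayed within policy — often takes the shape of a decision audit trail. The sketch below is a minimal assumption of what such a record might contain; the field names and policy flag are illustrative, not drawn from any specific compliance framework.

```python
import json
from datetime import datetime, timezone

AUDIT_LOG: list = []

def record_decision(agent: str, action: str, rationale: str, policy_ok: bool) -> dict:
    """Log how and why an AI decision was made, for later compliance review."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent": agent,
        "action": action,
        "rationale": rationale,           # transparency: why the agent acted
        "policy_compliant": policy_ok,    # checked against company guidelines
    }
    AUDIT_LOG.append(entry)
    return entry

entry = record_decision("refund-agent", "issue_refund",
                        "order lost in transit", policy_ok=True)
print(json.dumps(entry, indent=2))
```

A legal or compliance team could then query this log to verify the AI operated within the company’s ethical principles and applicable regulations.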
Q. At some point, AI agents will need to collaborate with one another to get work done. What will that look like?
I’m dreaming of that day so I can get all my work done.
The first thing is, we’ll need structure and coordination around these agents. Take software development lifecycles as an example. There would have to be a division of tasks, with complex tasks being broken down into smaller, more manageable tasks. Each agent would then be assigned a specific role, like product manager, software engineer, or QA engineer. Then a platform would organize the agents to streamline their interactions and ensure they are executing their individual tasks efficiently. There would also need to be some dialog-based interaction between them to share information, provide feedback, and refine their solutions. It’s going to be an iterative process.
You’ll initially have a couple of agents collaborating with one another to get work done. Longer term, that number will grow as the quality of agent platforms and solutions improves.
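The coordination pattern described above — dividing a complex task, assigning roles, and relaying feedback between agents — can be sketched as follows. The roles, messages, and orchestration logic here are illustrative assumptions, not a real multi-agent platform.

```python
def product_manager(feature: str) -> list:
    """Division of tasks: break the feature into smaller, manageable pieces."""
    return [f"spec {feature}", f"implement {feature}", f"test {feature}"]

def software_engineer(task: str) -> str:
    """Specialist agent: produce an artifact for its assigned task."""
    return f"code for '{task}'"

def qa_engineer(artifact: str) -> str:
    """Dialog-based feedback: review the artifact; failures would trigger rework."""
    return "approved" if "code" in artifact else "needs rework"

def orchestrate(feature: str) -> dict:
    """Platform layer: assign each subtask to a role and relay results between agents."""
    results = {}
    for task in product_manager(feature):      # PM decomposes the work
        artifact = software_engineer(task)     # engineer executes a subtask
        results[task] = qa_engineer(artifact)  # QA provides feedback on the result
    return results
```

In a real system each role would be backed by a model rather than a stub, and the feedback step would loop until the QA agent approves — the iterative refinement described above.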