
Autonomous AI Agents Are Coming: Why Trust and Training Hold the Keys to Their Success

Autonomous AI agents are coming to workplaces sooner than you might think. 

Count on them to automate routine and complex tasks alike: delivering personalized service, support, recommendations, and resolutions for customers based on detailed analysis of their purchase histories, preferences, and concerns. Look forward to them sifting through a company's historical records and public information to find and line up top sales prospects in a region, then crafting talking points a rep can use when getting in the room with the customer. And expect them to recognize significant market opportunities and devise creative marketing campaigns aimed at specific demographics at particular points in time.

All without human intervention, and with tasks executed in milliseconds.

The possibilities for these fully autonomous AI agents — an advanced form of generative AI that can continuously improve its own performance through self-learning — are virtually endless. But before humans start handing over important tasks to these technological marvels, organizations will need to get their employees ready.


That’s because, as with any new hire, autonomous AI agents won’t be immediately accepted. They’ll have to prove themselves before being allowed to handle truly important jobs. Moreover, they’ll need to gain the trust of human beings who haven’t worked with the technology. 

Today, just 7% of desk workers consider AI results trustworthy enough for job-related tasks, according to a Slack survey. But a new Salesforce study found 77% of global workers are optimistic they'll eventually trust AI, and 63% believe human involvement will be key to building that trust. That's why Salesforce is putting considerable energy into building trust and training around autonomous AI.

“Humans adopt technologies that are generally valuable to them, but it rarely happens overnight,” said Jayesh Govindarajan, SVP of Salesforce AI. “For autonomous AI to take root in the enterprise, businesses and employees will need to overcome the trust gap and go through considerable training to effectively understand, manage, and make the most of this important technology.”

“Just as we saw with digital transformation, AI transformation is a journey that’s different for each company,” agreed Mick Costigan, VP of Salesforce Futures and editor of the Salesforce Futures digital magazine. “Companies are at different starting points and stages with AI infrastructure, tools, and talent. Especially for those in the earlier stage, companies need to start their AI journey now, with trust and training being foundational.”

Building a trust foundation

Last February, Salesforce introduced Einstein Copilot, a conversational AI assistant for the enterprise. Natively embedded across Salesforce applications, Einstein Copilot is grounded in a company’s unique data and metadata, enabling it to answer questions, generate content, and dynamically automate actions — all in the service of providing better productivity, deeper customer relationships, and higher margins. Einstein Copilot includes a library of actions, which is a set of jobs the copilot can do. For example, if a user asks a copilot for help with writing an email, the copilot launches an action that drafts and revises the email and grounds it in relevant Salesforce data. For more complex processes, Einstein Copilot could orchestrate numerous actions to autonomously complete a task.
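As a rough illustration of that pattern, here is a minimal sketch in Python of a named-action library and an orchestration loop that chains actions into a plan. The registry, action names, and context dictionary are hypothetical stand-ins for illustration, not Einstein Copilot's actual API.

```python
# Minimal sketch of a named-action library and orchestrator
# (hypothetical illustration, not Einstein Copilot's real API).
from typing import Callable, Dict

ACTIONS: Dict[str, Callable[[dict], dict]] = {}

def action(name: str):
    """Register a function as a named action an agent can invoke."""
    def register(fn: Callable[[dict], dict]):
        ACTIONS[name] = fn
        return fn
    return register

@action("draft_email")
def draft_email(ctx: dict) -> dict:
    # A real system would call an LLM grounded in the company's CRM data.
    ctx["email"] = f"Dear {ctx['customer']}, following up on {ctx['topic']}..."
    return ctx

@action("log_activity")
def log_activity(ctx: dict) -> dict:
    ctx.setdefault("log", []).append("email drafted and logged")
    return ctx

def orchestrate(plan: list[str], ctx: dict) -> dict:
    """Complete a multi-step task by chaining registered actions."""
    for step in plan:
        ctx = ACTIONS[step](ctx)
    return ctx

result = orchestrate(["draft_email", "log_activity"],
                     {"customer": "Ada", "topic": "your renewal"})
print(result["email"])
```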

For example, if a customer has a problem with a product, they could start a conversation with an Einstein Service Agent that would be able to look at their purchase history and, using a company’s own knowledge articles, automatically suggest a few troubleshooting techniques. If that doesn’t work, it could ask the customer to upload a picture of the error code they’re seeing, analyze the problem, then determine if the item needs to be exchanged. The agent could proactively suggest replacements or upsell the customer to another model, offer a discount for the inconvenience, and connect the customer to a store near them to pick up the item. 
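The escalation logic in a scenario like that can be pictured as a simple decision flow. In this toy sketch, each helper is a stub standing in for a model call or backend system; the function names and return values are invented for the example.

```python
# Toy decision flow for the service scenario above. Each helper is a
# stub standing in for a model call or backend system.
from dataclasses import dataclass

@dataclass
class Diagnosis:
    needs_exchange: bool

def suggest_troubleshooting(issue: str) -> list[str]:
    # Would be grounded in the company's own knowledge articles.
    return ["power-cycle the device", "reinstall the firmware"]

def analyze_error_photo(photo: bytes) -> Diagnosis:
    # Stand-in for a vision-model call that reads the uploaded error code.
    return Diagnosis(needs_exchange=True)

def handle_product_issue(issue: str, fixed_by_tips: bool, photo: bytes) -> str:
    tips = suggest_troubleshooting(issue)
    if fixed_by_tips:
        return "resolved with: " + "; ".join(tips)
    if analyze_error_photo(photo).needs_exchange:
        # Offer a replacement, a goodwill discount, and local pickup.
        return "exchange approved: 10% discount, pickup at nearest store"
    return "escalated to a human agent"

print(handle_product_issue("error E42", fixed_by_tips=False, photo=b""))
```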


As such technology continues to evolve, moving from generative AI to autonomous AI, Salesforce is focused on making sure that the AI is trained for use in responsible and ethical ways, and that the right balance is struck between AI autonomy and human involvement.

Those efforts begin with trusted data, meaning accurate and reliable digital information that can interact with the large language models (LLMs) behind generative AI. Autonomous AI agents need a steady diet of trustworthy data to operate efficiently and deliver accurate output. But most organizations struggle to access all of the relevant data AI needs because it's often trapped in silos. In fact, 81% of IT leaders say data silos hinder digital transformation and the implementation of AI solutions.


That’s why Salesforce developed Data Cloud, a data platform deeply integrated into Salesforce’s Einstein 1 Platform that provides a foundation for trusted AI. Data Cloud integrates, unifies, and harmonizes data from across an enterprise to power AI with the most accurate and reliable information. Data Cloud’s zero-copy data integration makes it possible to access data sitting in data lakes and data warehouses, without having to move, copy, or reformat it.

Making all of this data trustworthy for autonomous AI consumption: The Einstein Trust Layer, a secure AI architecture that performs functions like masking personally identifiable information (PII), scoring outputs for toxicity, and helping to protect information from unauthorized access and data breaches through zero-data retention from Salesforce’s LLM partners. 
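To make those functions concrete, here is a toy sketch of PII masking and a crude toxicity check in Python. The regexes and word-list scorer are simplistic illustrations only; the actual Einstein Trust Layer is a far more sophisticated production system.

```python
# Toy sketch of trust-layer-style checks (illustrative only, not the
# actual Einstein Trust Layer).
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def mask_pii(text: str) -> str:
    """Replace detected PII with placeholders before a prompt leaves."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text

def toxicity_score(text: str) -> float:
    """Crude word-list scorer; a real system uses a trained classifier."""
    flagged = {"idiot", "hate"}
    words = text.lower().split()
    return sum(w.strip(".,!?") in flagged for w in words) / max(len(words), 1)

prompt = "Email jane.doe@example.com or call 555-123-4567 about the refund."
print(mask_pii(prompt))  # Email <EMAIL> or call <PHONE> about the refund.
print(toxicity_score("We hate to see a refund delayed."))  # crudely flags "hate"
```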

Trust but verify

In addition to its work around trusted data, Salesforce is also thinking through ways to apply more human controls to these autonomous AI agents, both to ensure they're doing what they're supposed to be doing and to enable users to take corrective action if anything goes sideways.

With that in mind, Salesforce is actively designing robust, system-wide controls or “patterns” to ensure that autonomous AI agents hold true to their objectives and that their activities can be inspected and overseen by humans. 

Paula Goldman, Chief Ethical and Humane Use Officer at Salesforce, said initial work is focused on five key patterns:

  • Mindful Friction, where pauses are injected into AI processes at critical junctures to ensure human engagement (a minimal sketch follows this list)
  • Awareness of AI to make sure people know they’re dealing with an autonomous agent
  • Bias and Toxicity Safeguards that minimize harmful or malicious AI outputs
  • Explainability and Accuracy to clearly explain what drove an AI agent’s conclusions or actions 
  • Hallucination Reduction to limit the possibility of false or misleading results 
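
Mindful Friction is the most mechanical of the five and the easiest to sketch. In the hypothetical gate below, any action tagged as consequential pauses for explicit human approval before it runs; the action names and approval prompt are invented for the example.

```python
# Hypothetical "mindful friction" gate: consequential actions pause for
# explicit human approval before they run.
CONSEQUENTIAL = {"issue_refund", "delete_record", "send_bulk_email"}

def human_approves(action: str, detail: str) -> bool:
    """The injected pause: surface the pending action to a person."""
    answer = input(f"Agent wants to run '{action}' ({detail}). Approve? [y/N] ")
    return answer.strip().lower() == "y"

def execute(action: str, detail: str) -> str:
    if action in CONSEQUENTIAL and not human_approves(action, detail):
        return f"{action}: blocked pending human review"
    return f"{action}: executed"

print(execute("draft_reply", "routine status update"))  # no friction needed
print(execute("issue_refund", "order #1042, $120"))     # pauses for approval
```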

“People will need more sophisticated and context-specific pop-ups and warnings in these AI products if they’re going to be used in business settings,” Goldman said. “That’s the kind of stuff we’re working on.”

Audit trails will also be necessary to hold autonomous AI agents accountable, just as employers might establish key performance indicators (KPIs) to keep human workers marching toward their expected goals, said Govindarajan. 

For instance, tracking how an AI service agent interacts with customers and surveying how they feel about those experiences will be essential. If common feedback suggests the AI is regularly running into specific issues, such as being too regimented about a return policy, changes might need to be made, Govindarajan said. 

In addition, he said an AI should be able to accurately report how and where it gathered the data that informed its insights, recommendations, and actions. Cited sources should also stand up to scrutiny, meaning the autonomous AI agent can explain which customer data it used to reach its findings and trigger its behavior, Govindarajan added.
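
One lightweight way to picture such an audit trail: every agent step is appended to a log with its inputs, cited sources, and outcome, so a reviewer can later reconstruct what drove a decision and how often a pattern recurs. The schema below is a hypothetical illustration, not a Salesforce format.

```python
# Hypothetical audit-trail entry for an agent action; the fields are an
# illustrative schema, not a Salesforce format.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AuditEntry:
    agent: str
    action: str
    inputs: dict
    cited_sources: list   # e.g., IDs of knowledge articles the agent relied on
    outcome: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

AUDIT_LOG: list[AuditEntry] = []

AUDIT_LOG.append(AuditEntry(
    agent="service-agent-7",
    action="deny_return",
    inputs={"order_id": "1042", "days_since_purchase": 45},
    cited_sources=["KB-311: 30-day return policy"],
    outcome="return denied; customer offered store credit",
))

# A reviewer can later ask: which policy drove this outcome, and how often?
denials = [e for e in AUDIT_LOG if e.action == "deny_return"]
print(len(denials), denials[0].cited_sources)
```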

The importance of training

Beyond pointing out potential problems with the technology, executives say such audits might also reveal a need for better employee training on AI systems. A 2023 Boston Consulting Group survey found that while 86% of workers believe they'll need AI training to sharpen their skills, only 14% of frontline employees had gone through upskilling classes, compared to 44% of business leaders.

Often, employees don't fully understand how AI works (beyond their limited experiences with consumer AI chatbots) and don't know how to create and refine prompts that will get quality outputs from an LLM. Moreover, there's considerable concern about AI displacing workers or becoming too powerful and working at odds with humans.
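
A small before-and-after makes that prompt-skills gap concrete. The refined prompt below supplies the role, context, constraints, and output format that a bare request leaves to chance; the company and order details are invented for the example.

```python
# A vague prompt vs. a refined one; the refined version supplies role,
# context, constraints, and an output format. (All details invented.)
vague_prompt = "Write an email about the shipping delay."

refined_prompt = """You are a support rep for Acme Outfitters.
Context: order #1042 shipped three days late; the customer is a loyalty member.
Task: write a two-paragraph apology email in a warm, direct tone.
Constraints: offer the discount code SORRY10; do not promise new delivery dates.
Output: plain text, subject line first."""
```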

Staying ahead of competitors

Govindarajan advises organizations to invest in such training because AI workers are certainly coming, and businesses will need their human employees to be prepared.

“People will need training to work with autonomous AI agents because, let’s face it, not everybody is a born manager,” he said. “You might have an army of people working for you, but if they’re all human and you suddenly have an autonomous AI agent on staff, you won’t know what to do with it immediately. You won’t know how to ask the right questions, elicit the right responses, and instruct it on how to do better. That’s where training becomes vital. AI workers need context and onboarding much like new employees. Setting direction, clear goals, and giving feedback helps AI train itself better.”


Any employer that wants to provide an accessible pathway for teams to overcome such trepidation and gain in-demand AI skills can tap Trailhead, Salesforce’s free online learning platform. To date, millions of people have skilled up on Trailhead, which has a catalog of AI learning so anyone, from seasoned technology veterans to those getting started, can learn about AI at their own pace. 

Costigan said companies should be thinking about AI trust and training now because autonomous AI agents will become standard for future business.

“It’s easy to get caught up in day-to-day issues and not recognize that, ultimately, AI is going to increase competitive advantage for businesses,” he said. “It’s what happened during COVID when those who were ahead in their digital transformation journey could adapt quickly. Similarly, autonomous AI is going to happen quicker than you think, so don’t waste time. Companies need to figure it out now as the impact of AI agents will be fast and transformative across industries.”
