
AI Agents Are a Door to Economic Growth; Policymakers Hold the Key


A new Salesforce white paper outlines key considerations for designing and using AI agents, and how global policymakers can help unlock AI’s full potential


Agentic AI is here — and systems like Salesforce’s Agentforce are proving to be powerful tools for driving economic growth and empowering human workers. For customers like Wiley, Agentforce helped increase case resolutions by 40%, outperforming its previous chatbot and freeing up employees to focus on more complex cases. But increasing productivity and building trust is not as simple as deploying AI agents immediately, according to a new Salesforce white paper.

For autonomous enterprise AI agents to be accepted in government and industry workspaces, they must operate within defined guardrails that ensure a smooth transfer of tasks to humans, be grounded in trusted enterprise data, and adhere to the highest standards of data privacy, security, and accuracy.

“AI itself has the potential to build trust, efficiency, and effectiveness in our institutions, with Salesforce research showing 90% of constituents surveyed are open to using AI agents for government interactions, attracted by benefits like 24/7 access, faster responses, and simplified processes,” said Eric Loeb, EVP of Global Government Affairs at Salesforce.

Key considerations for policymakers in an agent-first era

To balance the risks and opportunities of AI agents, the white paper lays out key design considerations for policymakers to keep in mind, including: 

  1. Humans working with agentic AI: Employees will need new skills to configure, task, manage, and oversee AI agents. Agents will need to be easy to program and use in a variety of contexts.  
  2. Reliability: AI agents must be carefully designed and equipped with guardrails to ensure the clear and smooth handoff of projects and tasks to humans, as well as to minimize, flag, and correct hallucinations. Careful engineering and robust testing are required to ensure the accuracy and reliability of agents.
  3. Fluency across domains: AI agents will interact with users and third parties inside and outside of organizations, retrieving, interpreting, and acting on different types of information across these domains. This requires advanced programming and thoughtful integration of business processes and data systems.
  4. Transparency and explainability: Users need to know when they are interacting with an AI agent instead of a human. Regulators and the public will also want to know how AI agents went about doing their jobs to ensure they did so accurately and in a trustworthy way. Systems ensuring full transparency and explainability, therefore, will be required for any agentic AI deployment.
  5. Accountability: It is important to clearly define who is responsible for making sure the agent functions properly and delivers trusted outputs.
  6. Data governance and privacy: AI agents may require access to personal or other sensitive data to complete their assigned tasks. For users and enterprises to trust AI agents, they will have to operate with high standards of privacy and data security.
  7. Security: Like other AI applications, agents may be vulnerable to adversarial attacks, where malicious inputs are designed to deceive the AI into producing bad outputs. As AI agents take on increasingly complex tasks, adhering to best practices for AI safety and quality control will be essential.
  8. Ethics: Companies that use AI agents should establish and follow ethical use guidelines. This will require developing new protocols and norms for autonomous AI systems, fostering effective human-AI collaboration while building consensus and confidence in decision-making processes.
  9. Agent-to-agent interactions: Common protocols and standards will be important to instill trust and help ensure controlled, predictable, and accountable agentic behavior. Fundamental to this is a secure information exchange environment and, when relevant, audit trails of agent-to-agent interactions.

Policies to foster an agent-ready ecosystem

Although AI agents are the latest technology breakthrough, the fundamental principles of sound AI public policy that protects people and fosters innovation remain unchanged: risk-based approaches, with clear delineation of the different roles in the ecosystem, supported by robust privacy, transparency, and safety guardrails. 

As policymakers look to a future of wide adoption of trusted agentic AI across industries and geographies, it’s time to think beyond regulating how AI is built. They must also equip the workforce with the necessary skills to harness the potential of AI agents.

“It’s no longer a question of whether AI agents should be integrated into workforces – but how best to optimize human and digital labor working together to reach desired goals,” said Loeb. “Governments must adopt policies and procedures that pave the way for trusted, responsible agentic rollouts that lead to more meaningful and productive work.”
