Business leaders today are looking to AI to unlock new innovations and gain a strategic advantage. Bridging the trust gap between this cutting-edge technology and the people implementing it is an important piece of the change management puzzle. A recent AI study showed more than half of workers don’t trust the data used to train AI, and 82% of workers recognize that secure data is critical to building trust in AI.
We sat down with Salesforce’s President & Chief Legal Officer, Sabastian Niles, to learn more about trusted data and how Salesforce is serving its customers ethically.
Q. Why is it important to trust AI?
The AI revolution is a trust revolution. AI, including generative AI and AI agents, is one of the most transformative technologies of our time — on the scale of mobile and the internet. It has the potential to drive organizational success, elevate creativity, amplify productivity, reshape industries, and enhance the human experience. When done right, it’s about amplifying our potential, increasing efficiencies, and accelerating the velocity of wise decision-making.
AI can now help us do everything from writing emails to hailing a rideshare. And the platforms we are building for customers and partners expand AI use cases even further. But the transformative power of AI extends far beyond efficiency and convenience gains.
However, in order to have the AI future we want, we must prioritize trust today. This means establishing a trust-first culture and the right regulations and public policies that balance innovation and safety.
Sabastian Niles, President & Chief Legal Officer, Salesforce
Amid an evolving regulatory landscape, it may seem like AI is the Wild West — and some organizations are treating it that way. However, in order to have the AI future we want, we must prioritize trust and impact today. This means accelerating responsible innovation, establishing a trust-first culture, and having the right frameworks, regulations, and public and internal policies that drive innovation and safety.
Q. What are the major hurdles businesses face with trusting AI?
Trusting AI starts with trusting the source, which is the data. First and foremost, we need to ensure the quality, privacy, and protection of customer data. But it doesn’t stop there. Ensuring data accuracy is essential. We must also eliminate bias and break down data silos.
This is where the nuance between enterprise and consumer AI applications is crucial. Enterprise AI systems rely mostly on curated data, with customer-specific use cases deployed in more controlled environments, limiting the risk of hallucinations and increasing accuracy. Meanwhile, consumer AI data can come from a broad range of unverified sources, which can reinforce concerns around toxicity, bias, and accuracy.
You can have the fastest and smartest car on the market, but if no one trusts it, no one will drive it.
Trust in AI must also be inclusive and accessible. To achieve this, it’s critical to have representation across stakeholder groups and geographies so we can understand the unique needs of diverse industries and user groups, respect business life cycles and cultural nuances, and dismantle biases. We must also embrace the upskilling, reskilling, and talent expansion potential ahead of us.
Q. What legal considerations should be top of mind for companies deploying AI solutions?
When companies choose to adopt AI solutions for their organizations, there are three areas that should stand out as key priorities.
First, trust and ethical use. Companies should have ethical guidelines from the start of development and consider all stakeholders — including customers, partners, and employees — at every stage.
Second, data governance and customer privacy. This involves strict compliance with data protection regulations, obtaining proper customer consent for how their data is used, and implementing strong security measures to protect data used in AI systems.
And finally, accountability and transparency. Companies should be prepared to explain how their AI systems make decisions. This is crucial not only for regulatory compliance but also for building trust with customers and stakeholders. Providing clear, understandable explanations of AI decision-making processes is becoming increasingly important, especially in regulated industries.
Salesforce, for example, addresses these legal considerations through the Einstein Trust Layer, the Office of Ethical and Humane Use, and clear AI use guidelines. By prioritizing these legal and ethical considerations, companies can harness the power of AI while mitigating risk and building trust with their stakeholders.
Q. What are the primary ethical concerns around AI utilization?
Well, we have to navigate the tradeoffs between technological capabilities and ethical responsibilities. Enhancing efficiency through AI should never come at the cost of compromising customer trust or personal information. There’s also a critical need for governance and ethical guidelines. Business leaders can take charge by championing AI governance councils and ethical usage policies, steering organizations toward trust and accountability while driving innovation.
Business leaders can take charge by championing AI governance councils and ethical usage policies, steering organizations toward trust and accountability.
Sabastian Niles, President & Chief Legal Officer, Salesforce
Lastly, we must consider the broader impact of AI — including its environmental footprint. At Salesforce, we’re developing energy-efficient and cost-effective AI solutions and ensuring access to AI benefits across society.
Q. What are the current regulatory frameworks governing the use of AI and how does Salesforce integrate this into its processes?
AI regulation continues to evolve. For global enterprises, the EU AI Act, for example, is the first comprehensive legal framework for AI governance, and it sets the stage for responsible AI adoption, with the goal of fostering trust and transparency in the technology everywhere.
AI regulation should not be one-size-fits-all. It should differentiate based on context, control, and use, while emphasizing transparency and compliance with existing laws. For example, a chatbot providing recipes and cooking tips should not be required to follow the same regulatory requirements as an AI application being used to determine a patient’s diagnosis and medical care plan.
That said, privacy laws are foundational to responsible AI regulation, and there’s a pressing need for a commonsense federal privacy law in the U.S. to govern relevant data that powers AI. Salesforce integrates these legal considerations into our processes by anticipating where public policy is heading and aligning our practices with expected baseline requirements.
Q. Our discussion has mostly focused on how to prevent what could go wrong with AI, but there are also many unprecedented benefits that come with this technology. What are you most excited about in this AI revolution?
If we get the trust piece right, there’s a lot to be excited about. We’re seeing a glimpse of how this powerful technology can provide life-changing — possibly even humanity-changing — benefits. It’s not just about doing things differently; it’s about reimagining the very essence of how humans and businesses operate.
It’s not just about doing things differently; it’s about reimagining the very essence of how humans and businesses operate.
Sabastian Niles, President & Chief Legal Officer, Salesforce
Take AI agents, for example. We all know how frustrating it can be to interact with chatbots that have a finite script to answer customer questions. But with the advent of AI agents, the customer experience is reaching a new level. I could not be more excited about how we are designing AI agents at the technical and platform level and embracing the opportunity for seamless collaboration between humans and AI. As a business executive, I find it exhilarating to see that we’re on the cusp of a fundamental shift in how businesses interact with customers and elevate their employees. Making decisions faster, with more integrated data across previously disconnected silos, can only sharpen our judgment and drive better outcomes with AI.
As a consumer myself, I’m thrilled at the idea of a more seamless and faster way to have my questions answered about the birthday gift I just ordered for my daughter. And that’s barely scratching the surface.
The opportunities for government and the public sector to deploy AI to improve constituent services are also enormous — imagine how much more seamless, and dare I say pleasant, an experience it could be to renew your driver’s license or apply for a passport. And beyond that, AI will revolutionize everything from accessible healthcare diagnostics and financial planning to manufacturing and sustainability measures. The seismic shift has only just begun.
More information:
- Watch the full interview from World Tour D.C. on Salesforce+
- Learn more about the Einstein Trust Layer and how it can support your business
- Learn more about the Einstein 1 Platform and why it’s integral to creating an AI enterprise