
Salesforce Ethical AI Architect on Safety Culture in the Era of Artificial Intelligence

Each innovation cycle comes with excitement — but also fear. We’re seeing it unfold right now with the rapid adoption of AI. Today, only 21% of Americans trust businesses to use AI responsibly, which means the vast majority have serious reservations regarding the technology.

While trust in new technologies can be hard-won, the auto industry offers a valuable lesson for any company’s AI roadmap: safety sells. For example, although there was initial resistance to seat belts because they were viewed as uncomfortable and restrictive, federal laws mandating the inclusion of seat belts and state laws requiring their use changed public expectations. Today, people can’t imagine buying a car without seat belts and airbags.

We see this same lesson playing out with AI-enabled auto safety features versus self-driving cars. According to a recent survey, only 47% of consumers would consider riding in and purchasing a car with self-driving technology. However, more than 80% value AI-powered safety features like radar cameras, real-time sensors, and warning systems.

For over 100 years, the automotive industry has tapped into “safety culture” as new technology comes to the forefront. So how can we in the tech industry learn from and apply this culture to build trust in the era of AI?

What is a safety culture?

Patrick Hudson, an international safety expert, began exploring the concept of a safety culture with his culture ladder in the 1980s. His research found that organizational culture and safety are inextricably linked.

Since then, investigations into many tragic accidents have identified a lack of safety culture as a root cause, including the Chernobyl meltdown (1986), the Challenger (1986) and Columbia (2003) space shuttle disasters, and the Deepwater Horizon oil rig explosion (2010). More recently, an inadequate safety culture was linked to a fatal self-driving car accident in Arizona (2018).

It’s not just physical systems like oil rigs and cars that require a strong safety culture. AI implementation carries serious safety concerns as well. We’ve seen facial recognition systems misidentify people of color and generative AI models spread disinformation and hate speech — and new risks are still evolving.

5 tenets of a safety culture

If AI is going to be successful, it needs to be safe and trustworthy. Hudson identified five aspects of a safety culture that must also underpin an ethical technology culture. 

  1. Leadership: Leadership is critical to building and maintaining a safe and ethical culture. It sets the tone for the organization and can enforce trustworthy AI through guidelines, guardrails, and incentives. 
  2. Respect: For a safety culture to succeed, all individuals within an organization must feel empowered to call out errors and raise concerns — regardless of hierarchy. At Salesforce, we provide anonymous channels where employees can raise ethical use concerns. 
  3. Mindful: Complacency leads to accidents. To build a culture of safety, everyone must be alert and ready for the unexpected. For AI, this means considering its unintended consequences and building checks and reviews into the development lifecycle to mitigate risk.  
  4. Just and Fair: Ensuring safety means establishing clear rules about what is and is not acceptable — and applying them consistently. At Salesforce, for example, our general Acceptable Use Policy and AI Policy clearly outline how our technologies are to be used (and not used). 
  5. Learning: A strong safety culture adapts and implements necessary reforms based on lessons learned. This will be particularly important in the era of AI, given the rapid pace of innovation. Last year, we created guidelines for responsible generative AI to provide our employees with updated guidance.

Building an AI safety culture 

Hudson’s tenets form a solid foundation for an AI safety culture, but given the powerful and rapidly evolving capabilities of AI, I believe we need to implement three additional principles: 

1. It Takes a Village and a Final Attestor: When AI is added to human systems, philosopher Andreas Matthias argues, a “responsibility gap” emerges. Who is not just legally but morally responsible when something goes wrong? Is it the end user accepting an AI recommendation, the developer who built the model, or someone else? Safety, like ethics, is a team sport that requires everyone to be responsible for their contribution. But to bridge that responsibility gap, an accountable individual is also needed to attest that an AI system has been robustly tested and is safe.
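To make the idea of a final attestor concrete, here is a minimal sketch (with hypothetical check names, field names, and email address; not a Salesforce system) of how a team might record that one accountable individual signed off on a release only after every required review passed:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical pre-release checks a team might require before sign-off.
REQUIRED_CHECKS = {"bias_review", "robustness_tests", "red_team_review"}

@dataclass
class ReleaseAttestation:
    """Records which checks a model release passed and who attested to it."""
    model_name: str
    passed_checks: set = field(default_factory=set)
    attestor: str | None = None
    attested_at: datetime | None = None

    def attest(self, attestor: str) -> None:
        """A single accountable person signs off only once every check has passed."""
        missing = REQUIRED_CHECKS - self.passed_checks
        if missing:
            raise ValueError(f"Cannot attest: missing checks {sorted(missing)}")
        self.attestor = attestor
        self.attested_at = datetime.now(timezone.utc)

# Usage: attestation fails loudly until every required check is recorded.
release = ReleaseAttestation(model_name="support-triage-v2")
release.passed_checks.update({"bias_review", "robustness_tests", "red_team_review"})
release.attest("jane.doe@example.com")
print(release.attestor, release.attested_at)
```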

2. Empowering a Human at the Helm: If we want to empower a “human at the helm” to provide a secure backstop, we must give people the time, information, and controls to effectively oversee AI systems. This means providing reasons and citations for AI recommendations, along with controls that let people override a system when necessary. By doing this, we ensure that AI is a safe and powerful assistant — working with us rather than without us.
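As an illustration, the sketch below assumes a hypothetical recommendation structure that carries reasons and citations alongside the suggested action, plus a gate where an explicit human decision always wins; the names and fields are illustrative, not an actual product API:

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    """An AI suggestion packaged with the context a human needs to judge it."""
    action: str
    reasons: list[str]      # why the system suggests this action
    citations: list[str]    # sources the reviewer can inspect

def apply_with_human_at_the_helm(rec: Recommendation, reviewer_decision: str | None) -> str:
    """The AI proposes; the human disposes. An explicit override always wins."""
    if reviewer_decision is not None:
        return reviewer_decision          # human override takes precedence
    if not rec.reasons or not rec.citations:
        raise ValueError("Recommendation lacks reasons or citations; hold for review.")
    return rec.action                     # accepted, with full context available

# Usage: the reviewer checks the citations and overrides the suggested action.
rec = Recommendation(
    action="issue_refund",
    reasons=["Order reported damaged in the linked support ticket"],
    citations=["ticket:1234", "policy:refunds-v3"],
)
print(apply_with_human_at_the_helm(rec, reviewer_decision="escalate_to_manager"))
```

The design point is simply that the system never acts on a recommendation that lacks the context a human would need to review it.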

3. Establishing Clear Definitions and Standards: Many people working in the AI field are familiar with concepts of accuracy, bias, and robustness. However, we do not have agreed-upon definitions or standards for determining when an AI model or system is “safe enough,” which makes it exceedingly difficult to build trust or ensure safety in these systems. In the last year, we’ve seen significant AI regulatory momentum around the world, but the industry is still taking only its first steps toward a shared definition of safety.
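In the absence of shared standards, each team currently has to choose its own bar. The sketch below shows one hypothetical “safe enough” gate over accuracy, bias, and robustness metrics; every threshold value is an assumption picked for illustration, not an industry standard:

```python
# Hypothetical release gate: each team must define what "safe enough" means for
# its own use case, because no industry-wide standard exists yet.
SAFETY_THRESHOLDS = {
    "accuracy": 0.95,         # minimum acceptable accuracy on a held-out test set
    "bias_gap": 0.02,         # maximum allowed accuracy gap between demographic groups
    "robustness_drop": 0.05,  # maximum accuracy drop under perturbed inputs
}

def is_safe_enough(metrics: dict[str, float]) -> tuple[bool, list[str]]:
    """Compare measured metrics against the team's chosen thresholds."""
    failures = []
    if metrics["accuracy"] < SAFETY_THRESHOLDS["accuracy"]:
        failures.append("accuracy below threshold")
    if metrics["bias_gap"] > SAFETY_THRESHOLDS["bias_gap"]:
        failures.append("group accuracy gap too large")
    if metrics["robustness_drop"] > SAFETY_THRESHOLDS["robustness_drop"]:
        failures.append("accuracy degrades too much under perturbation")
    return (not failures, failures)

# Usage: a model that is accurate overall can still fail the bias check.
ok, reasons = is_safe_enough({"accuracy": 0.97, "bias_gap": 0.04, "robustness_drop": 0.03})
print(ok, reasons)  # False ['group accuracy gap too large']
```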

Deepening trust in AI

With each innovation cycle, there is always great debate about setting safety standards and guardrails. But time and again, we’ve seen that consumers are willing to pay a bit more if they feel they can trust a product’s safety. If we want to build trust in AI, we must wholeheartedly adopt a culture that celebrates and embraces safety as central to AI development and deployment.

Learn more about Salesforce’s approach to technology and AI ethics with these articles.
