Are You Ready For An AI Audit?

An AI audit is a check-up on your AI systems to ensure they’re operating ethically, transparently, and within regulatory guidelines. [Studio Science]

Regulators, academics, and politicians are calling for independent AI audits as a way to protect the public from AI’s potential harms. Here’s how to prepare.

Is your business ready for an audit? No, not the kind involving accountants and IRS agents. We’re talking about an AI audit, a kind of check-up on your AI systems to ensure they’re operating ethically, transparently, and within regulatory guidelines, with an eye toward understanding how they make decisions. 

As generative artificial intelligence (AI) has proliferated, ethics and transparency have quickly emerged as key considerations among regulators, consumers, academics, and even private companies with skin in the game. Salesforce, for example, supports a strong but nuanced approach to AI regulation. Both the White House and members of Congress have called for independent AI audits as a way to protect the public from AI’s potential harms. But no national standard or baseline for doing so exists yet.

In the absence of national regulation, individual states have stepped up. According to the National Conference of State Legislatures, 40 states have proposed or enacted dozens of bills focused on regulating the design, development, and use of AI. The New York State Legislature has proposed two bills that would require employers to conduct bias audits if they use AI for hiring, giving applicants the right to sue not only the employer but also the tech companies creating the AI products. 

At the same time, the field of professional AI auditing has emerged. In late 2023, a new group, the International Association of Algorithmic Auditors (IAAA), was formed to create an AI auditor code of conduct, along with training curricula and a certification program for AI auditors. 

According to the Federation of American Scientists, “an algorithmic audit examines automated decision-making systems to ensure they’re fair, transparent, and accountable.” The audit looks at data inputs, the decision-making process, model training, and the outcomes, to identify biases or errors. These audits can be done by an independent third party or a dedicated internal team. 
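The FAS definition doesn’t prescribe a single method, but one widely used fairness check, particularly in hiring audits, is comparing selection rates across demographic groups against a “four-fifths” threshold. Below is a minimal sketch of that check in Python; the decision data is invented for illustration, not drawn from any real audit.

```python
# Minimal sketch of a selection-rate bias check (the "four-fifths rule").
# The (group, selected) decisions below are made up for illustration.
from collections import defaultdict

decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals, selected = defaultdict(int), defaultdict(int)
for group, was_selected in decisions:
    totals[group] += 1
    selected[group] += was_selected  # True counts as 1

rates = {g: selected[g] / totals[g] for g in totals}
baseline = max(rates.values())  # highest selection rate as the reference

for group, rate in sorted(rates.items()):
    ratio = rate / baseline
    flag = "REVIEW" if ratio < 0.8 else "ok"  # four-fifths threshold
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} [{flag}]")
```

A real audit would run checks like this across many metrics and data slices; the point is that the inputs, process, and outcomes the FAS describes all become concrete, testable artifacts.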

By scrutinizing the inner workings of AI systems, organizations can proactively identify and address potential vulnerabilities. 

This scrutiny “will become pretty dominant within the next year,” said William Dressler, regional vice president of AI and data architecture, and the head of innovation in the global AI practice at Salesforce. “Having those [safety] mechanisms in place now is going to be a no-brainer, just like we all have antivirus software on our computers.” 

Now that you understand the landscape around AI audits, you’re probably wondering what safeguards you can put in place to ensure your AI systems are safe, ethical, transparent, and trustworthy.

Create an audit trail

You can track the use of generative AI in your Salesforce organization to ensure that usage complies with your security, privacy, regulatory, and AI governance policies. Generative AI audit data includes information about the Einstein Trust Layer, a series of safeguards designed to protect your data and your organization from potential harm.

Some of these safeguards include: data masking, which replaces sensitive data with anonymized data; toxicity detection, which flags toxic content such as hate speech; zero retention, which ensures no proprietary data is stored within an app or its supporting platforms; and secure data retrieval, which lets users securely access the data used to ground generative AI prompts while maintaining permissions and data access controls.
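To make those safeguards concrete, here is a minimal sketch of what a masking-and-toxicity gate might look like in principle. This is hypothetical illustration code, not the Einstein Trust Layer’s actual API; the regex patterns and keyword list are placeholders standing in for production-grade detectors.

```python
import re

# Hypothetical PII patterns and toxic-term list; placeholders only.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}
TOXIC_TERMS = {"hateful_term_1", "hateful_term_2"}

def mask_pii(text: str) -> str:
    """Data masking: replace sensitive values before the prompt leaves
    your environment."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"<{label.upper()}_MASKED>", text)
    return text

def is_toxic(text: str) -> bool:
    """Toxicity detection: a naive keyword check standing in for a real
    classifier."""
    return any(term in text.lower() for term in TOXIC_TERMS)

def audited_prompt(text: str) -> str:
    masked = mask_pii(text)
    if is_toxic(masked):
        raise ValueError("Prompt flagged by toxicity check")
    # Zero retention: nothing is persisted here; only the masked prompt
    # is forwarded to the model.
    return masked

print(audited_prompt("Contact jane.doe@example.com about the renewal."))
# -> Contact <EMAIL_MASKED> about the renewal.
```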

Next, let’s dive deeper into AI audits.  

What are the elements of an AI audit?

An AI audit involves stakeholders across the organization, including senior leaders, legal, developers, security, compliance, and AI practitioners. 

Some of the key aspects of an AI audit include: 

Security and privacy: This examines the security measures a company uses to protect its AI systems from outside threats, including whether data is managed in a way that protects privacy. 

Ethical considerations: This analyzes AI systems to identify and mitigate biases that may result in unfair or discriminatory outcomes, including assessing the impact of AI systems on different demographic groups and society at large. 

Transparency and explainability: This assesses AI systems to understand how transparent they are in their decision-making processes and inner workings. It may also include an explainability assessment, algorithm analysis, data transparency review, and an examination of the AI model. This all helps explain why AI does what it does; the first sketch after this list shows one simple form of this.  

Accuracy: An accuracy assessment evaluates the performance, reliability, and consistency of the AI model’s predictions or decisions. This may include error and accuracy analysis, and validation, where you’d compare the AI system’s outputs against what you know is true; the second sketch after this list gives a minimal example. 

Compliance: This determines whether you’re following legal, industry, and internal guidelines and regulations. 
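For the transparency and explainability element, one simple form of review is decomposing a model’s score into per-feature contributions so a reviewer can see what drove a decision. A minimal sketch, assuming a hypothetical linear hiring-score model (the weights and applicant values are invented):

```python
# Hypothetical linear scoring model; weights are invented for illustration.
WEIGHTS = {"years_experience": 0.6, "skills_match": 1.2, "referral": 0.4}

def explain_score(applicant: dict) -> None:
    """Print the total score and each feature's contribution to it."""
    contributions = {
        feature: WEIGHTS[feature] * applicant.get(feature, 0.0)
        for feature in WEIGHTS
    }
    print(f"Score: {sum(contributions.values()):.2f}")
    for feature, value in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
        print(f"  {feature}: {value:+.2f}")

explain_score({"years_experience": 5, "skills_match": 0.8, "referral": 1})
```

Real models are rarely this transparent, which is why audits also lean on techniques like feature-importance analysis; the goal is the same, though: a human-readable account of why the AI did what it did.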
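And for the accuracy element, a validation pass can be as simple as comparing the system’s outputs against known-correct labels and tallying where, and in which direction, the model errs. A minimal sketch with invented labels and predictions:

```python
# Invented ground truth and predictions for illustration.
from collections import Counter

ground_truth = ["approve", "deny", "approve", "approve", "deny", "deny"]
predictions  = ["approve", "deny", "deny",    "approve", "deny", "approve"]
assert len(ground_truth) == len(predictions)

correct = sum(gt == p for gt, p in zip(ground_truth, predictions))
print(f"Overall accuracy: {correct / len(ground_truth):.2f}")

# Error analysis: which true labels get misclassified, and as what?
errors = Counter((gt, p) for gt, p in zip(ground_truth, predictions) if gt != p)
for (gt, p), count in errors.items():
    print(f"True '{gt}' predicted as '{p}': {count} case(s)")
```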

Audit AI for trusted AI

As Paula Goldman, chief ethical and humane use officer at Salesforce, noted, “It’s not enough to deliver the technological capabilities of generative AI. We must prioritize responsible innovation to help guide how this transformative technology can and should be used — and ensure that employees, partners, and customers have the tools they need to develop and use these technologies safely, accurately, and ethically.”

To get there, Salesforce has developed five guidelines for the development of trusted generative AI – covering safety, accuracy, sustainability, honesty, and empowerment – that can serve as a guidepost for your own AI practice. 
