Artificial intelligence (AI) is transforming businesses across India. From automating processes and supporting decision making to predicting customer behaviour, AI is changing the way we work.
According to a Salesforce-commissioned YouGov survey in India, 96% of managers believe embracing AI is important to their organisation’s ability to survive and stay competitive.
However, only 65% of managers who are currently using AI in business processes, or planning to implement it within the next 12 months, are very confident that their organisations understand the potential ethical risks of AI.
Importantly, the survey found that 92% of managers currently using AI believe that organisations using AI should have a designated person responsible for ethical AI. And 80% report having someone in such a role.
Only 67% of managers are very confident in their organisations’ ability to implement AI processes and systems responsibly, taking into account the privacy and safety of consumers. This demonstrates that while AI can deliver an enormous range of benefits, there are key ethical issues that must be addressed.
For example, data bias in AI algorithms can lead to discrimination. User safety can be compromised by malicious actors. And customer data privacy needs to be protected.
That’s why infusing ethics into AI policy is vital now, and as the technology evolves.
Creating an ethical framework
Edward Santow, Australian Human Rights Commissioner, makes a clear distinction between ethics in AI and the rule of law. Speaking in the Salesforce/Observer Research Foundation Infusing Ethics into AI Policy webinar, he said:
“AI is enabling us to do things we’ve always done, but in powerful new ways. As such, there are already human rights, anti-discrimination, and privacy laws in place that should be the first port of call in determining what you can and can’t do with AI.
“Ethical rules are secondary to the law. Ethics can help us uphold the law and fill the gaps where the law is silent.”
To fill those gaps, organisations must set ethical parameters that govern how they develop and use AI-based technologies. Speaking in the same webinar, Kathy Baxter, Architect, Ethical AI Practice at Salesforce, explained what those parameters look like at Salesforce:
“We need to empower our users. To do so, our AI needs to be inclusive and respect the rights of everyone it impacts. So we created an AI charter that lays out what our AI principles are as a company.
“We believe we must safeguard all the data we are entrusted with and ensure what we are building protects and respects human rights. It must be accountable, and we seek and leverage feedback from our customers and civil society groups. Transparency is also important. We must be clear about how we build our models and explain to our users how our AI makes predictions or recommendations.”
Designing ethical AI
This clear ethical framework must be built into the DNA of the AI design and development process. Baxter explained how this is achieved at Salesforce:
“Salesforce works on the agile development methodology. During the very early design stages, we do an assessment with the teams to identify all the intended and unintended consequences of the AI application. We do an analysis of the likelihood and seriousness of the impact, and ask ‘should this application even exist in the first place?’. If the answer is ‘yes’, we identify the strategies we need to put in place to ensure those unintended consequences are mitigated as much as possible.”
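Salesforce has not published the internals of this assessment, but as a minimal sketch of the idea, a likelihood-and-seriousness review can be captured in a simple risk register. Every name, scale, and threshold below is a hypothetical illustration, not the company’s actual process:

```python
from dataclasses import dataclass

# Hypothetical likelihood x severity risk register, sketching the kind of
# consequence assessment described above. Scales and names are illustrative
# assumptions only.
LIKELIHOOD = {"rare": 1, "possible": 2, "likely": 3}
SEVERITY = {"minor": 1, "moderate": 2, "severe": 3}

@dataclass
class Consequence:
    description: str
    likelihood: str   # key into LIKELIHOOD
    severity: str     # key into SEVERITY

    @property
    def risk_score(self) -> int:
        return LIKELIHOOD[self.likelihood] * SEVERITY[self.severity]

def needs_mitigation(consequences: list[Consequence], threshold: int = 6) -> list[Consequence]:
    """Return consequences scoring high enough to demand mitigation before build."""
    return sorted(
        (c for c in consequences if c.risk_score >= threshold),
        key=lambda c: c.risk_score,
        reverse=True,
    )

risks = [
    Consequence("Model denies loans disproportionately by postcode", "possible", "severe"),
    Consequence("Recommendations feel repetitive to users", "likely", "minor"),
]
for c in needs_mitigation(risks):
    print(f"score={c.risk_score}: {c.description}")
```

Scoring each consequence before any code is written keeps the ‘should this application even exist?’ question concrete rather than rhetorical.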
However, infusing an ethical framework into the design and development of AI-based technologies may not always be practical. David Hardoon, Senior Advisor on Data & AI, UnionBank of the Philippines, explained during the webinar:
“We need to be careful of the term ‘by design’. If an AI methodology or solution algorithm is applied within a specific context or application, then you can hard code the ethics in. But if you have something that needs to be applied more generally, from east to west, you have to deliberately allow for a certain flexibility. In these cases, we need a second line of defence.”
That’s why Salesforce also builds user education and guidance into the company’s AI-based applications. Baxter said:
“Salesforce is a platform, so what our customers do with our product may not be directly within our control. But, in the vast majority of cases, harm occurs not through malice but through lack of understanding of the context. We build in guidance and education to make our customers aware of when they are using sensitive fields, how to understand training data, and how to identify if there is a disparate impact occurring.”
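The source does not describe how Salesforce implements that disparate impact check, but as one hedged illustration of what such a check can look like, the widely used “four-fifths rule” compares rates of favourable outcomes across groups:

```python
# Illustrative sketch only: flag possible disparate impact by comparing
# favourable-outcome rates across two groups (the "four-fifths rule").
# The data and threshold here are assumptions, not Salesforce's implementation.

def selection_rate(outcomes: list[int]) -> float:
    """Share of favourable outcomes (1 = favourable, 0 = unfavourable)."""
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

def disparate_impact_ratio(group_a: list[int], group_b: list[int]) -> float:
    """Ratio of the lower selection rate to the higher one."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    high, low = max(rate_a, rate_b), min(rate_a, rate_b)
    return low / high if high else 1.0

# Example: loan approvals for two demographic groups.
approved_group_a = [1, 1, 1, 0, 1, 1, 0, 1]   # 75% approved
approved_group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # 37.5% approved

ratio = disparate_impact_ratio(approved_group_a, approved_group_b)
if ratio < 0.8:  # the conventional four-fifths threshold
    print(f"Possible disparate impact: ratio = {ratio:.2f}")
```

A ratio below 0.8 does not prove discrimination, but it is a common signal that the training data or model behaviour warrants closer review.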
Building a second line of defence
Rahul Panicker, Chief Innovation Officer at Wadhwani AI, India, added that this second line of defence can be reinforced in a four-step process:
“When it comes to AI, it’s processes and systems that can save us, not anticipation of unintended consequences. And this is not specific to AI. There is a long history of how to establish processes that ensure safety in technology development.
“Start with controlled testing for safety. Then you move on to a controlled pilot, and into an uncontrolled pilot that tests whether it works in the real world but still with safeguards in place. Finally, the most important step is post-deployment monitoring to catch the consequences we could not anticipate.”
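The webinar does not prescribe tooling for that monitoring step, but as a minimal sketch of one common approach, the population stability index (PSI) compares the distribution of live model scores against a pilot baseline to flag drift. The bins, data, and threshold below are illustrative assumptions:

```python
import math

# Hypothetical sketch of post-deployment monitoring: compare live model score
# distributions against the pilot baseline using the population stability
# index (PSI). Bin count and alert threshold are illustrative assumptions.

def histogram(scores: list[float], bins: int = 10) -> list[float]:
    """Bucket scores in [0, 1] into equal-width bins, returning proportions."""
    counts = [0] * bins
    for s in scores:
        counts[min(int(s * bins), bins - 1)] += 1
    total = len(scores)
    # Small floor avoids log(0) for empty bins.
    return [max(c / total, 1e-6) for c in counts]

def psi(baseline: list[float], live: list[float], bins: int = 10) -> float:
    expected, actual = histogram(baseline, bins), histogram(live, bins)
    return sum((a - e) * math.log(a / e) for e, a in zip(expected, actual))

# A PSI above ~0.25 is often read as significant drift worth investigating.
pilot_scores = [0.1, 0.2, 0.25, 0.3, 0.35, 0.4, 0.5, 0.6]
live_scores = [0.5, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9]
drift = psi(pilot_scores, live_scores)
print(f"PSI = {drift:.2f}" + (" -- investigate" if drift > 0.25 else ""))
```

The point of the sketch is Panicker’s: a routine, automated check running after deployment catches the consequences no pre-launch review anticipated.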
Building on this, Panicker argued that government regulation of AI should be application-focused:
“Self-driving cars need to be regulated differently than healthcare AI, which needs to be regulated differently than AI used in banking. It’s the application domain that identifies the use case, the potential risks, and the stakeholder ecosystem.”
Reaping the full benefits of AI
When developing and using AI-based technologies, organisations must adhere to relevant human rights, anti-discrimination, and privacy laws. Organisations should also create an ethical framework to govern AI development and use the framework where the law is silent.
This ethical framework should be built into the AI design and development process, and where flexibility is required, there should be additional focus on post-deployment monitoring. This will mitigate unintended consequences and ensure that organisations — and the people they serve — will experience the full benefits of AI-based technologies.
Watch the full ‘Infusing Ethics into AI Policy’ webinar here.
Learn more about how Salesforce infuses ethics into AI here.
This post originally appeared on the A.P. version of the Salesforce blog.