Global spending on Artificial Intelligence (AI) is expected to double by 2024, reaching $110 billion. But a lack of governance, transparency, and involvement from underrepresented communities in the development of AI can lead to bias and misuse. As AI evolves and seeps into more and more aspects of human life, focusing on its ethical use becomes critical to promoting equality and protecting basic human rights such as privacy.
Ethical risks surrounding AI today
The most common ethical risks associated with the use of AI include:
1. Bias: Today’s AI systems are brilliant, yet incapable of making ethical or moral decisions independently. Without proper laws and guidelines around AI development and algorithmic training, AI systems can lead to discrimination. Consider facial recognition systems that fail to accurately recognise people from certain communities because of their distinct features or skin colour. Another example is loan assessment systems that may unfairly reject certain borrowers based simply on their race or gender, making it harder for them to access essential financial services.
2. Liability: Biases and incorrect data in AI systems can lead to misinformed decisions and actions. We lack, and urgently need, more robust governance and policies to determine who should be held responsible for the impact such systems have on humans and the environment. For example, who should be liable for damages caused by AI-powered autonomous vehicles?
3. Ownership: Some advanced AI systems can produce artefacts that may qualify as intellectual property, such as poetry, paintings, or even other computer applications. We do not yet have adequate laws to determine who ‘owns’ these creations.
4. Autonomy: AI-driven automation helps businesses streamline complex processes and consumer experiences. But how much autonomy should AI systems have? The question becomes serious once we move beyond simple personalisation tactics: we need stricter checks and balances to ensure that AI systems in critical areas such as healthcare and finance remain controlled and error-free.
5. Lack of transparency: People’s understanding of technology’s inner workings and legalities varies. It is the responsibility of the entities that develop and use AI to uphold the rights of everyone the technology impacts. Without transparent organisational policies around design, data sources, and purpose of use, AI can enable unethical practices such as:
- collecting consumer data without consent, or even conducting surveillance
- crushing the competition to create monopolies in markets
- selling restricted data or feeding skewed information to influence people’s decision-making
6. Balancing unemployment and the creation of new jobs: By some estimates, about 30% of jobs could be lost to automation by 2030. At the same time, AI is expected to add $15 trillion to global GDP. While AI will eliminate low-paying mechanical jobs, it can enhance human capacity to perform high-value tasks that machines cannot, potentially creating thousands of new high-paying jobs worldwide.
7. Threat to human existence: Despite what Hollywood tells us, most researchers say the possibility of AI (with human-like intellectual capabilities) taking over the world is slim. But research in this direction still needs to be regulated. More important, from a psychosocial perspective, is to carefully address such fears among the general public.
8. Environmental impact: AI can be used to regulate energy production, distribution, and usage, making the sector more sustainable. But we cannot ignore the emissions from the massive amounts of energy consumed by the data centres and other systems that support AI, which create environmental problems of their own.
How can we make the use of AI more ethical?
Few regulatory bodies or frameworks exist to enforce the ethical use of AI across the world. Some guiding principles are provided by institutions like the Organisation for Economic Co-operation and Development (OECD) and the Partnership on AI. But most businesses are left to self-regulate, making standardisation and governance a challenge.
The situation is similar in India. NITI Aayog, the Indian government’s public policy think tank, has proposed a draft framework for the responsible use of AI in the country. It provides broad principles for organisations that develop and use AI-based solutions.
According to a YouGov survey commissioned by Salesforce, the government in India is expected to create regulations around AI that businesses must comply with. This is a cause of concern for 93% of managers from organisations using or planning to use AI systems, as it would make these organisations legally responsible for the impact the technology creates.
Only two-thirds of managers are confident in their understanding of the ethical risks surrounding AI and in their organisation’s ability to implement AI processes and systems responsibly. And while 92% of them believe organisations using AI should appoint someone to ensure its ethical use, only 80% of these organisations have filled such a role.
To derive the most benefit from AI and avoid unintentionally harming our communities, all stakeholders – governments, developers, users, researchers, activists – need to come together and put ethical AI principles into action.
Salesforce drives business success with the ethical use of AI
Salesforce believes in the transformative power of technology while remaining mindful of its risks. We identify the following five principles as the foundation for ethical AI:
- Responsibility – for safeguarding human rights and customer data through stricter compliance
- Accountability – that comes from seeking and implementing feedback to continuously improve technology and the policies surrounding it
- Transparency – in the use of data and how machine-driven, personalised user experiences are provided
- Empowerment – of our customers, their employees and end customers, and society through the responsible use of technology for faster economic growth
- Inclusivity – achieved by respecting the societal values (around privacy, communication, and more) of developers and all those the technology impacts
We strive to build trust among customers through products that are safe for, inclusive of, and accessible to all. We established the Office of Ethical and Humane Use of Technology to implement our AI principles and build trust between AI developers and the people the technology impacts. We also partner with various organisations to share best practices, create higher standards, and increase inclusivity in technology.
Click here to learn more about Salesforce’s efforts towards the responsible and humane use of technology.