Last updated: September 15, 2022
Originally created: January 14, 2019
[If you’ve visited this page before, refresh the window to ensure you’re seeing the most recent version.]
Although it may appear that the topic of ethics in AI is brand new, tech ethicists have been around for decades, mostly in academia and non-profits. As a result, dozens of ethical tools have been created. In fact, doteveryone has a 39-page alphabetized directory of resources. If you are thinking about incorporating ethics into your company’s culture or product development cycle, check these out before you try reinventing the wheel.
Frameworks
- OECD Framework for the Classification of AI Systems: a tool for effective AI policies by OECD
“To help policymakers, regulators, legislators, and others characterise AI systems deployed in specific contexts, the OECD has developed a user-friendly tool to evaluate AI systems from a policy perspective. It can be applied to the widest range of AI systems across the following dimensions: People & Planet; Economic Context; Data & Input; AI model; and Task & Output. Each of the framework’s dimensions has a subset of properties and attributes to define and assess policy implications and to guide an innovative and trustworthy approach to AI as outlined in the OECD AI Principles.”
- Securing Machine Learning Algorithms by European Union Agency for Cybersecurity (ENISA)
“Based on a systematic review of relevant literature on machine learning, in this report we provide a taxonomy for machine learning algorithms, highlighting core functionalities and critical stages. The report also presents a detailed analysis of threats targeting machine learning systems. Finally, we propose concrete and actionable security controls described in relevant literature and security frameworks and standards.”
- Framework for Building AI Systems Responsibly by Microsoft
“The Responsible AI Standard sets out our best thinking on how we will build AI systems to uphold these values and earn society’s trust. It provides specific, actionable guidance for our teams that goes beyond the high-level principles that have dominated the AI landscape to date.”
- Responsible Tech Playbook by Thoughtworks
“A guide to the tools and practices that help businesses make better technology decisions”
- Ethical OS Framework by IFTF and Omidyar Network
“The Ethical Operating System can help makers of tech, product managers, engineers, and others get out in front of problems before they happen. It’s been designed to facilitate better product development, faster deployment, and more impactful innovation. All while striving to minimize technical and reputational risks. This toolkit can help inform your design process today and manage risks around existing technologies in the future.”
- Responsible AI in Consumer Enterprise by Integrate.ai
A framework to help organizations operationalize ethics, privacy, and security as they apply machine learning and artificial intelligence
- UK Data Ethics Framework by the UK Government
Includes principles, guidance, and a workbook to record decisions made
- An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations by AI4People
“We introduce the core opportunities and risks of AI for society; present a synthesis of five ethical principles that should undergird its development and adoption; and offer 20 concrete recommendations to assess, to develop, to incentivize, and to support good AI, which in some cases may be undertaken directly by national or supranational policy makers, while in others may be led by other stakeholders. If adopted, these recommendations would serve as a firm foundation for the establishment of a Good AI Society”
- Ethics Canvas by ADAPT Centre
“The Ethics Canvas helps you structure ideas about the ethical implications of the projects you are working on, to visualise them and to resolve them. The Ethics Canvas has been developed to encourage educators, entrepreneurs, engineers and designers to engage with ethics in their research and innovation projects. Research and innovation foster great benefits for society, but also raise important ethical concerns.”
- A Proposed Model Artificial Intelligence Governance Framework v2 by Singapore Personal Data Protection Commission
“Singapore is proud to launch the second edition of the Model Framework. This edition incorporates the experiences of organisations that have adopted AI, and feedback from our participation in leading international platforms, such as the European Commission’s High-Level Expert Group and the OECD Expert Group on AI. Such input has enabled us to provide clearer and effective guidance for organisations to implement AI responsibly.”
Additional Resources at SGDigital
- Australia’s Artificial Intelligence Ethics Framework by Australian Government Department of Industry, Science, Energy, & Resources
“The Artificial Intelligence (AI) Ethics Framework guides businesses and governments to responsibly design, develop and implement AI.”
- A Moral Framework for Understanding Fair ML through Economic Models of Equality of Opportunity – paper by ETH Zurich
- The Aletheia Framework by Rolls-Royce
“a toolkit that we believe creates a new global standard for the practical application of ethical AI. Follow the checks and balances within it, and organisations can be sure that their AI project is fair, trustworthy and ethical. We are applying it in our business to accelerate our progress to industry 5.0.”
- WEFE: The Word Embeddings Fairness Evaluation Framework by Pablo Badilla, Felipe Bravo-Marquez, Jorge Pérez
“Word Embedding Fairness Evaluation (WEFE) is an open source library for measuring bias in word embedding models. It generalizes many existing fairness metrics into a unified framework and provides a standard interface…”
- The PiE (puzzle-solving in ethics) Model by AI Ethics Lab
“Ethics is about answering one crucial question: ‘What is the right thing to do?’ In practice, we often seem to forget this main purpose of ethics and get lost in rules, regulations, and approvals. Real ethical questions are like puzzles: We do not know the right answer in these complex situations. The PiE (puzzle-solving in ethics) Model focuses on this main purpose of ethics in a systematic manner, integrating ethical puzzle-solving into the whole process of innovation to ensure that ethical issues are handled in the right way and at the right time.”
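Frameworks like WEFE generalize bias metrics for word embeddings into a common interface. As a rough, self-contained illustration of the kind of metric they unify, the sketch below computes a WEAT-style association score over toy 3-dimensional vectors using cosine similarity. The vectors and "word sets" are invented for the example; this is not the WEFE API, which operates on real trained embedding models.

```python
import math

def cosine(u, v):
    # Cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(a * a for a in v))
    return dot / (nu * nv)

def association(word_vec, attr_a, attr_b):
    # Mean similarity to attribute set A minus mean similarity to set B.
    mean_a = sum(cosine(word_vec, a) for a in attr_a) / len(attr_a)
    mean_b = sum(cosine(word_vec, b) for b in attr_b) / len(attr_b)
    return mean_a - mean_b

def weat_score(targets_x, targets_y, attr_a, attr_b):
    # WEAT-style test statistic: differential association of two target
    # word sets with two attribute word sets. Values near 0 suggest the
    # embedding treats the two target sets symmetrically.
    sx = sum(association(x, attr_a, attr_b) for x in targets_x)
    sy = sum(association(y, attr_a, attr_b) for y in targets_y)
    return sx - sy

# Toy embeddings (invented): "career" vs. "family" attribute vectors,
# and two target sets that each lean toward one attribute direction.
career = [[0.9, 0.1, 0.0], [0.8, 0.2, 0.1]]
family = [[0.1, 0.9, 0.0], [0.2, 0.8, 0.1]]
names_x = [[0.85, 0.15, 0.05]]
names_y = [[0.15, 0.85, 0.05]]

print(round(weat_score(names_x, names_y, career, family), 3))
```

A positive score here indicates that `names_x` associates more strongly with the "career" vectors than `names_y` does; swapping the two target sets negates the score.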
Tools and Toolkits
- Algorithmic Impact Assessment Tool by Canadian Government
“The tool is a questionnaire that determines the impact level of an automated decision-system. It is composed of 48 risk and 33 mitigation questions. Assessment scores are based on many factors including systems design, algorithm, decision type, impact and data.”
- NASSCOM Responsible AI Resource Kit by Indian Government
“The Responsible AI Resource Kit is the culmination of a joint collaboration between NASSCOM and leading industry partners to foster a responsible self-regulatory regime for AI-led enterprise in India. The Resource Kit comprises sector-agnostic technology and management tools and guidance for AI-led enterprises to grow and scale while prioritising user trust and safety above all.”
- AI and data protection risk toolkit by UK Government
“Our AI toolkit is designed to provide further practical support to organisations to reduce the risks to individuals’ rights and freedoms caused by their own AI systems.”
- ML Privacy Meter by Data Privacy and Trustworthy Machine Learning Research Lab
“Machine Learning Privacy Meter: A tool to quantify the privacy risks of machine learning models with respect to inference attacks, notably membership inference attacks”
- People + AI Research Guidebook by Google
“A friendly, practical guide that lays out some best practices for creating useful, responsible AI applications.”
- Model Card Toolkit by Google
“The Model Card Toolkit (MCT) streamlines and automates generation of Model Cards, machine learning documents that provide context and transparency into a model’s development and performance. Integrating the MCT into your ML pipeline enables the sharing of model metadata and metrics with researchers, developers, reporters, and more.”
- System Cards, a new resource for understanding how AI systems work by Facebook
“This inaugural AI System Card outlines the AI models that comprise an AI system and can help enable a better understanding of how these systems operate based on an individual’s history, preferences, settings, and more.”
- Playing with AI Fairness: What-if Tool by Google
“Google’s new machine learning diagnostic tool lets users try on five different types of fairness”
- Ethics in Tech Toolkit for engineering and design practice by Santa Clara Univ. Markkula Center
“Each tool performs a different ethical function, and can be further customized for specific applications. Team/project leaders should reflect carefully on how each tool can best be used in their team or project settings.”
- Responsible Innovation: A Best Practices Toolkit by Microsoft
“This toolkit provides developers with a set of practices in development, for anticipating and addressing the potential negative impacts of technology on people.”
- Responsible AI Toolbox by Microsoft
“A suite of tools for a customized, end-to-end responsible AI experience.”
- Human-AI eXperience (HAX) Toolkit by Microsoft
“The Guidelines for Human-AI Interaction provide best practices for how an AI system should interact with people. The HAX Workbook drives team alignment when planning for Guideline implementation. The HAX design patterns save you time by describing how to apply established solutions when implementing the Guidelines. The HAX Playbook helps you identify and plan for common interaction failure scenarios. You can browse Guidelines, design patterns, and many examples in the HAX Design Library.”
- Microsoft Interpretable ML
“InterpretML is an open-source python package for training interpretable machine learning models and explaining blackbox systems.”
- Microsoft Fairlearn
“The fairlearn project seeks to enable anyone involved in the development of artificial intelligence (AI) systems to assess their system’s fairness and mitigate the observed unfairness. The fairlearn repository contains a Python package and Jupyter notebooks with the examples of usage.”
- Ethics and Algorithms Toolkit by City and County of San Francisco Data Science Team
A risk management framework for governments (and other people too!)
- AI Fairness & Explainability 360 by IBM
Open source with case studies, code, anti-bias algorithms, tutorials, demos, and state-of-the-art explainability algorithms (white paper)
- Aequitas by University of Chicago Center for Data Science and Public Policy
“The Bias Report is powered by Aequitas, an open-source bias audit toolkit for machine learning developers, analysts, and policymakers to audit machine learning models for discrimination and bias, and make informed and equitable decisions around developing and deploying predictive risk-assessment tools.”
- Design Ethically Toolkit by Kat Zhou (IBM)
A library of exercises and resources to integrate ethical design into your practice.
- Algorithmic Accountability Policy Toolkit by AI Now Institute
“The following toolkit is intended to provide legal and policy advocates with a basic understanding of government use of algorithms including, a breakdown of key concepts and questions that may come up when engaging with this issue, an overview of existing research, and summaries of algorithmic systems currently used in government. This toolkit also includes resources for advocates interested in or currently engaged in work to uncover where algorithms are being used and to create transparency and accountability mechanisms.”
- Lime by the University of Washington
Open source toolkit for “explaining the predictions of any machine learning classifier.”
- PWC Responsible AI Toolkit
“Our Responsible AI Toolkit is a suite of customizable frameworks, tools and processes designed to help you harness the power of AI in an ethical and responsible manner – from strategy through execution. With the Responsible AI toolkit, we’ll tailor our solutions to address your organisation’s unique business requirements and AI maturity.”
- Algorithmic Equity Toolkit (AEKit) by ACLU Washington
“The Algorithmic Equity Toolkit (AEKit for short) is a collection of four components designed to identify surveillance and decision-making technologies used by governments; make sense of how those technologies work; and pose questions about their impacts, effectiveness, and oversight.”
- The MSW@USC Diversity Toolkit: A Guide to Discussing Identity, Power and Privilege
“This toolkit is meant for anyone who feels there is a lack of productive discourse around issues of diversity and the role of identity in social relationships, both on a micro (individual) and macro (communal) level.”
- Pymetrics Audit AI
“audit-AI is a tool to measure and mitigate the effects of discriminatory patterns in training data and the predictions made by machine learning algorithms trained for the purposes of socially sensitive decision processes.”
- World Economic Forum’s AI Board Toolkit
“Empowering AI Leadership: An Oversight Toolkit for Boards of Directors. This resource for boards of directors consists of: an introduction; 12 modules intended to align with traditional board committees, working groups and oversight concerns; and a glossary of artificial intelligence (AI) terms.”
- PROBAST: A tool to assess the quality, risk of bias and applicability of prediction models
- Dynamics of AI Principles by AI Ethics Lab
“We decided to create the AI Principles Map to help understand the trends, common threads, and differences among numerous sets of principles published.”
- The Box by AI Ethics Lab
“The Box is designed to help you visualize the ethical strengths and weaknesses of a technology. Once the weaknesses are identified, solutions can be created!” - BLM Privacy Bot: Anonymize photos of BLM protesters by Stanford ML Group
Checklists
- EU High-Level Expert Group on Artificial Intelligence: Trustworthy AI Assessment List
- Deon
“deon is a command line tool that allows you to easily add an ethics checklist to your data science projects. We support creating a new, standalone checklist file or appending a checklist to an existing analysis in many common formats.”
- O’Reilly: Data Ethics Checklist
- Ten Simple Rules for Responsible Big Data Research
“This is an open access article, free of all copyright, and may be freely reproduced, distributed, transmitted, modified, built upon, or otherwise used by anyone for any lawful purpose. The work is made available under the Creative Commons CC0 public domain dedication.”
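What tools like deon automate is mechanical: rendering a structured checklist into a markdown file that lives alongside your project. The toy renderer below illustrates that idea in plain Python; the sections and questions are invented for the example, and deon itself ships a maintained default checklist plus support for many output formats.

```python
# Toy ethics-checklist renderer, loosely inspired by what deon automates.
# The sections and questions here are invented placeholders.
CHECKLIST = {
    "Data Collection": [
        "Have we obtained informed consent for data collection?",
        "Could the collection process disadvantage any group?",
    ],
    "Modeling": [
        "Have we tested for disparate error rates across groups?",
        "Can we explain individual predictions if asked?",
    ],
}

def render_markdown(checklist, title="Ethics Checklist"):
    # Emit a markdown document: one heading per project phase and an
    # unchecked task box per question.
    lines = [f"# {title}", ""]
    for section, items in checklist.items():
        lines.append(f"## {section}")
        lines += [f"- [ ] {item}" for item in items]
        lines.append("")
    return "\n".join(lines)

# Write the checklist next to the analysis, as deon does with ETHICS.md.
with open("ETHICS.md", "w") as f:
    f.write(render_markdown(CHECKLIST))
```

The value of the real tools is less the rendering than the curated questions; the point of committing the file is that the checklist gets reviewed with the code.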
Principles
- OECD AI Principles
“The OECD AI Principles promote use of AI that is innovative and trustworthy and that respects human rights and democratic values. Adopted in May 2019, they set standards for AI that are practical and flexible enough to stand the test of time.”
- Linking Artificial Intelligence Principles by Research Center for Brain-inspired Intelligence, Institute of Automation, Chinese Academy of Sciences
“The following table presents an analysis of different AI Principles world wide (currently 50 proposals) from the perspective of coarser topics, which shows mainly on the consensus of various proposals. The current LAIP engine enables us to list and compare between different AI principle proposals at keywords, topic and paragraph levels. Here we have the paper “Linking Artificial Intelligence Principles” that gives more details on the design philosophy and initial observations.”
- Visualization of AI and Human Rights by Berkman Klein Center
“Our data visualization presents thirty-two sets of principles side by side, enabling comparison between efforts from governments, companies, advocacy groups, and multi-stakeholder initiatives.”
- AI Ethics Guidelines Global Inventory by Algorithm Watch
“With our AlgorithmWatch AI Ethics Guidelines Global Inventory, we started to map the landscape of these frameworks.”
- Using Artificial Intelligence and Algorithms by the Federal Trade Commission
“The FTC’s law enforcement actions, studies, and guidance emphasize that the use of AI tools should be transparent, explainable, fair, and empirically sound, while fostering accountability. We believe that our experience, as well as existing laws, can offer important lessons about how companies can manage the consumer protection risks of AI and algorithms.”
- UN Guiding Principles on Business and Human Rights by UN Human Rights Office of the High Commissioner
Implementing the United Nations “Protect, Respect and Remedy” Framework
- Draft AI Ethics Guidelines For Trustworthy AI by European Commission’s High-Level Expert Group on Artificial Intelligence (AI HLEG)
Includes a framework for Trustworthy AI, principles and values, methods, and questions for assessing trustworthy AI.
- Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-based Approaches to Principles for AI by Harvard
“Our desire for a way to compare these documents – and the individual principles they contain – side by side, to assess them and identify trends, and to uncover the hidden momentum in a fractured, global conversation around the future of AI, resulted in this white paper and the associated data visualization.”
- Ethical by Design: Principles for Good Technology by The Ethics Centre
- FAT/ML Principles for Accountable Algorithms and a Social Impact Statement for Algorithms
- RANZCR Ethical Principles for AI in Medicine – Consultation by the Royal Australian and New Zealand College of Radiologists
- Asilomar AI Principles
- Principles for Digital Development
- Partnership on AI Tenets
AI Ethics Courses and Certifications
- Responsible AI Governance Badge Program by EqualAI
“The EqualAI Badge© Program, in collaboration with the World Economic Forum, prepares senior executives at companies developing or using AI for critical functions to ensure their brand is known for its responsible and inclusive practices. By the end of the program, senior executives will learn the ‘How Tos’ of developing and maintaining responsible AI governance, will join an emerging community and network of like-minded senior executives, and will earn the EqualAI badge of certification for learning best practices for AI governance.”
- The Ethics of AI by University of Helsinki
“The Ethics of AI is a free online course created by the University of Helsinki. The course is for anyone who is interested in the ethical aspects of AI – we want to encourage people to learn what AI ethics means, what can and can’t be done to develop AI in an ethically sustainable way, and how to start thinking about AI from an ethical point of view.”
- Ethics of AI: Safeguarding Humanity by MIT
“Learn to navigate the ethical challenges inherent to AI development and implementation. Led by MIT thought leaders, this course will deepen your understanding of AI as you examine machine bias and other ethical risks, and assess your individual and corporate responsibilities. Over the course of three days, you’ll address the ethical aspects of AI deployment in your workplace—and gain a greater understanding of how to utilize AI in ways that benefit mankind.”
- Certified Ethical Emerging Technologist (also on Coursera)
“Over five courses, our AI founders, ethicists, and researchers will lead you through foundational ethical principles; industry standard ethical frameworks; ethical risk identification and mitigation; effective communication about ethical challenges; and the organizational governance required to create ethical, trusted, and inclusive data-driven technologies. When students complete all five courses, they will be ready to act as ethical leaders, prepared to bridge the gap between theory and practice.”
- Artificial Intelligence Ethics in Action by LearnQuest (via Coursera)
“AI Ethics research is an emerging field, and to prove our skills, we need to demonstrate our critical thinking and analytical ability. Since it’s not reasonable to jump into a full research paper with our newly founded skills, we will instead work on 3 projects that will demonstrate your ability to analyze ethical AI across a variety of topics and situations. These projects include all the skills you’ve learned in this AI Ethics Specialization.”
- AI Ethics: Global Perspectives
“A Collection of Lectures on the Ethical Implications of Data and Artificial Intelligence from Different Perspectives. New course modules will be released on a monthly basis.”
- Ethics in AI for Policymakers in Latin America by Inter-American Development Bank
“The course is aimed at public officials, the private sector, technicians, researchers, teachers, students and all those related to the processes of conceptualization, design, and implementation of an artificial intelligence system for decision-making in the public sector.”
“The course is aimed at public officials, the private sector, technicians, researchers, teachers, students and all those related to the processes of conceptualization, design, and implementation of an artificial intelligence system for decision-making in the public sector.”
Note that this course is offered only in Spanish.
Oaths, Manifestoes, and Codes of Conduct
- ACM Code of Ethics & Professional Conduct
- IEEE Code of Ethics
- Data Science Oath by The National Academies of Science, Engineering, & Medicine
- Ethical Codex for Data-Based Value Creation by Swiss Alliance of Data-intensive Services
- Ethical Design Manifesto
- Manifesto for Data Practices
- NeverAgain.Tech Pledge
- The Programmer’s Oath
Policy Papers, White Papers, Statements, Reports
- Bridging AI’s trust gaps: Aligning policymakers and companies (2020 report) by The Future Society
- Ethically Aligned Design for AI by IEEE
- Classical Ethics in Autonomous & Intelligent Systems by IEEE
- Multistakeholder AI development: 10 building blocks for inclusive policy design by UNESCO
- The Toronto Declaration by Access Now/RightsCon
- Montreal Declaration for Responsible AI by University of Montreal
- Responsible Use of Technology (white paper) by WEF
- Reimagining Regulation for the Age of AI: New Zealand Pilot Project by WEF
- AI Government Procurement Guidelines by WEF
- European Group on Ethics in Science and New Technologies Statement on Artificial Intelligence, Robotics and ‘Autonomous’ Systems
- Advancing racial literacy in tech report by Data & Society
- Dynamics of AI Principles: The Big Picture by AI Ethics Lab
“We decided to create the AI Principles Map to help understand the trends, common threads, and differences among numerous sets of principles published.”
- Intel AI policies white paper by Intel
- Privacy, Data and Technology: Human Rights Challenges in the Digital Age by New Zealand Human Rights Commission
- Governing Artificial Intelligence: Upholding human rights and dignity by Mark Latonero (Data & Society)
- State of AI Ethics Report by Montreal AI Ethics Institute
- Advancing AI ethics beyond compliance: From principles to practice 2019 report by IBM
- “Operationalizing AI Ethics Principles” by AI Ethics Lab
Newsletters/eMagazines
- AI4All
- AI Essentials
- AI Ethics Brief, Montreal AI Ethics Institute
- AI Ethics Weekly
- AI News Weekly
- AI Now Institute
- AI Weekly, VentureBeat
- The Algorithm, MIT Technology Review
- Algorithm Watch
- Axios: Login
- Becoming Human.ai: Artificial Intelligence Magazine
- Berkman Klein Center
- Data & AI, O’Reilly
- Data & Society Points
- DeepMind Ethics & Society
- Emerging Tech Brew
- Eye on AI, Fortune
- Fast Company
- Feminist.ai
- Good AI
- Inclusive Design, Microsoft
- MIT Media Lab
- Omidyar Network
- One Zero
- People + AI Research, Google
- The Relay, Alix Dunn
- Salesforce AI Ethics
- Stanford HAI
- This Week in AI, DeepAI
- Towards Data Science
- Wall Street Journal Pro Artificial Intelligence
- Women in AI (WAI)
- World Economic Forum (WEF)
Other Resources
- Data for Society: Explore open datasets and learn how other researchers have used them to solve societal challenges by Microsoft
- Ethical AI Maturity Model by Salesforce
- RAII white paper, certification guidebook, and certification program scheme document sample by Responsible AI Institute
- Learning from the past to create Responsible AI: A collection of controversial, and often unethical AI use cases by Roman Lutz
- Awful AI: A curated list to track current scary usages of AI – hoping to raise awareness by David Dao
- The IEEE AI Impact Use Cases Initiative
- Indigenous Protocol and AI – Reading List
- Readings in AI Ethics by Markkula Center for Applied Ethics
- Ethics in Tech Practice Workshop by Markkula Center for Applied Ethics
- Algorithmic Impact Assessment by Canadian Government
- An introduction to software engineering ethics by Santa Clara Univ. Markkula Center
- The Foundation for Best Practices in Machine Learning: Championing ethical and responsible machine learning through open-source best practices.
- Ethical AI Resources by AIArtists.org
- Design, Ethics & AI: Practical activities for data scientists and other designers by IDEO
- AI Ethical Compass by IDEO
- Fast.ai resources including 11 Short Videos About AI Ethics by Rachel Thomas
- Unpacking “Ethical AI”: A curated reading list by Emanuel (Manny) Moss
- Atlas of Surveillance by EFF
“The Atlas of Surveillance is a database of the surveillance technologies deployed by law enforcement in communities across the United States. This includes drones, body-worn cameras, automated license plate readers, facial recognition, and more.”
- AI Myths by AIMyths.org