
Ethical AI frameworks, tool kits, principles, and certifications – Oh my!

If you are thinking about incorporating ethics into your company's culture or product development cycle, check out this list of dozens of ethical tools before you try reinventing the wheel.

Last updated: September 15, 2022
Originally created: January 14, 2019


Although it may appear that the topic of ethics in AI is brand new, tech ethicists have been around for decades, mostly in academia and non-profits. As a result, dozens of ethical tools have been created. In fact, doteveryone has a 39-page alphabetized directory of resources. If you are thinking about incorporating ethics into your company’s culture or product development cycle, check these out before you try reinventing the wheel.

Frameworks

  • OECD Framework for the Classification of AI Systems: a tool for effective AI policies by OECD
    “To help policymakers, regulators, legislators, and others characterise AI systems deployed in specific contexts, the OECD has developed a user-friendly tool to evaluate AI systems from a policy perspective. It can be applied to the widest range of AI systems across the following dimensions: People & Planet; Economic Context; Data & Input; AI model; and Task & Output. Each of the framework’s dimensions has a subset of properties and attributes to define and assess policy implications and to guide an innovative and trustworthy approach to AI as outlined in the OECD AI Principles.”
  • Securing Machine Learning Algorithms by European Union Agency for Cybersecurity (ENISA)
    “Based on a systematic review of relevant literature on machine learning, in this report we provide a taxonomy for machine learning algorithms, highlighting core functionalities and critical stages. The report also presents a detailed analysis of threats targeting machine learning systems. Finally, we propose concrete and actionable security controls described in relevant literature and security frameworks and standards.”
  • Framework for Building AI Systems Responsibly by Microsoft
    “The Responsible AI Standard sets out our best thinking on how we will build AI systems to uphold these values and earn society’s trust. It provides specific, actionable guidance for our teams that goes beyond the high-level principles that have dominated the AI landscape to date.”
  • Responsible Tech Playbook by Thoughtworks
    “A guide to the tools and practices that help businesses make better technology decisions”
  • Ethical OS Framework by IFTF and Omidyar Network
    “The Ethical Operating System can help makers of tech, product managers, engineers, and others get out in front of problems before they happen. It’s been designed to facilitate better product development, faster deployment, and more impactful innovation. All while striving to minimize technical and reputational risks. This toolkit can help inform your design process today and manage risks around existing technologies in the future.”
  • Responsible AI in Consumer Enterprise by Integrate.ai
    A framework to help organizations operationalize ethics, privacy, and security as they apply machine learning and artificial intelligence
  • UK Data Ethics Framework by UK gov
    Includes principles, guidance, and a workbook to record decisions made
  • An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations by AI4People
    “We introduce the core opportunities and risks of AI for society; present a synthesis of five ethical principles that should undergird its development and adoption; and offer 20 concrete recommendations to assess, to develop, to incentivize, and to support good AI, which in some cases may be undertaken directly by national or supranational policy makers, while in others may be led by other stakeholders. If adopted, these recommendations would serve as a firm foundation for the establishment of a Good AI Society”
  • Ethics Canvas by ADAPT Centre
    “The Ethics Canvas helps you structure ideas about the ethical implications of the projects you are working on, to visualise them and to resolve them. The Ethics Canvas has been developed to encourage educators, entrepreneurs, engineers and designers to engage with ethics in their research and innovation projects. Research and innovation foster great benefits for society, but also raise important ethical concerns.”
  • A Proposed Model Artificial Intelligence Governance Framework v2 by Singapore Personal Data Protection Commission
    “Singapore is proud to launch the second edition of the Model Framework. This edition incorporates the experiences of organisations that have adopted AI, and feedback from our participation in leading international platforms, such as the European Commission’s High-Level Expert Group and the OECD Expert Group on AI. Such input has enabled us to provide clearer and effective guidance for organisations to implement AI responsibly.”
    Additional Resources at SGDigital
  • Australia’s Artificial Intelligence Ethics Framework by Australian Government Department of Industry, Science, Energy, & Resources
    “The Artificial Intelligence (AI) Ethics Framework guides businesses and governments to responsibly design, develop and implement AI.”
  • A Moral Framework for Understanding of Fair ML through Economic Models of Equality of Opportunity – paper by ETH Zurich
  • The Aletheia Framework by Rolls-Royce
    “a toolkit that we believe creates a new global standard for the practical application of ethical AI. Follow the checks and balances within it, and organisations can be sure that their AI project is fair, trustworthy and ethical. We are applying it in our business to accelerate our progress to industry 5.0.”
  • WEFE: The Word Embeddings Fairness Evaluation Framework by Pablo Badilla, Felipe Bravo-Marquez, Jorge Pérez
    “Word Embedding Fairness Evaluation (WEFE) is an open source library for measuring bias in word embedding models. It generalizes many existing fairness metrics into a unified framework and provides a standard interface…”
  • The PiE (puzzle-solving in ethics) Model by AI Ethics Lab
    “Ethics is about answering one crucial question: ‘What is the right thing to do?’ In practice, we often seem to forget this main purpose of ethics and get lost in rules, regulations, and approvals. Real ethical questions are like puzzles: We do not know the right answer in these complex situations. The PiE (puzzle-solving in ethics) Model focuses on this main purpose of ethics in a systematic manner, integrating ethical puzzle-solving into the whole process of innovation to ensure that ethical issues are handled in the right way and at the right time.”
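WEFE generalizes metrics such as WEAT (the Word Embedding Association Test) into one framework. As a rough illustration of the kind of computation those metrics perform — not WEFE's actual API — here is a minimal WEAT-style association score over toy vectors; the word sets and 2-d "embeddings" are invented for illustration, and real use would load trained embeddings through the library:

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity between two embedding vectors
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(w, A, B):
    # s(w, A, B): mean similarity of word w to attribute set A minus set B
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

def weat_score(X, Y, A, B):
    # WEAT test statistic: differential association of target sets X and Y
    # with attribute sets A and B (0 means no measured bias)
    return sum(association(x, A, B) for x in X) - sum(association(y, A, B) for y in Y)

# Toy 2-d "embeddings" (invented for illustration only)
career = [np.array([1.0, 0.1]), np.array([0.9, 0.2])]  # target set X
family = [np.array([0.1, 1.0]), np.array([0.2, 0.9])]  # target set Y
male   = [np.array([1.0, 0.0])]                        # attribute set A
female = [np.array([0.0, 1.0])]                        # attribute set B

print(round(weat_score(career, family, male, female), 3))
```

A positive score here indicates the "career" targets sit closer to the "male" attribute direction than the "family" targets do; swapping the two target sets negates the score.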

Tools and Toolkits

  • Algorithmic Impact Assessment Tool by Canadian Government
    “The tool is a questionnaire that determines the impact level of an automated decision system. It is composed of 48 risk and 33 mitigation questions. Assessment scores are based on many factors including systems design, algorithm, decision type, impact and data.”
  • NASSCOM Responsible AI Resource Kit by NASSCOM
    “The Responsible AI Resource Kit is the culmination of a joint collaboration between NASSCOM and leading industry partners to foster a responsible self-regulatory regime for AI-led enterprise in India. The Resource Kit comprises sector-agnostic technology and management tools and guidance for AI-led enterprises to grow and scale while prioritising user trust and safety above all.”
  • AI and data protection risk toolkit by the UK Information Commissioner’s Office (ICO)
    “Our AI toolkit is designed to provide further practical support to organisations to reduce the risks to individuals’ rights and freedoms caused by their own AI systems.”
  • ML Privacy Meter by Data Privacy and Trustworthy Machine Learning Research Lab
    “Machine Learning Privacy Meter: A tool to quantify the privacy risks of machine learning models with respect to inference attacks, notably membership inference attacks”
  • People + AI Research Guidebook by Google
    “A friendly, practical guide that lays out some best practices for creating useful, responsible AI applications.”
  • Model Card Toolkit by Google
    “The Model Card Toolkit (MCT) streamlines and automates generation of Model Cards [1], machine learning documents that provide context and transparency into a model’s development and performance. Integrating the MCT into your ML pipeline enables sharing model metadata and metrics with researchers, developers, reporters, and more.”
  • System Cards, a new resource for understanding how AI systems work by Facebook
    “This inaugural AI System Card outlines the AI models that comprise an AI system and can help enable a better understanding of how these systems operate based on an individual’s history, preferences, settings, and more.”
  • Playing with AI Fairness: What-if Tool by Google
    “Google’s new machine learning diagnostic tool lets users try on five different types of fairness”
  • Ethics in Tech Toolkit for engineering and design practice by Santa Clara Univ. Markkula Center
    “Each tool performs a different ethical function, and can be further customized for specific applications. Team/project leaders should reflect carefully on how each tool can best be used in their team or project settings.”
  • Responsible Innovation: A Best Practices Toolkit by Microsoft
    “This toolkit provides developers with a set of practices in development, for anticipating and addressing the potential negative impacts of technology on people.”
  • Responsible AI Toolbox by Microsoft
    “A suite of tools for a customized, end-to-end responsible AI experience.”
  • Human-AI eXperience (HAX) Toolkit by Microsoft
    “The Guidelines for Human-AI Interaction provide best practices for how an AI system should interact with people. The HAX Workbook drives team alignment when planning for Guideline implementation. The HAX design patterns save you time by describing how to apply established solutions when implementing the Guidelines. The HAX Playbook helps you identify and plan for common interaction failure scenarios. You can browse Guidelines, design patterns, and many examples in the HAX Design Library.”
  • Microsoft Interpretable ML
    “InterpretML is an open-source python package for training interpretable machine learning models and explaining blackbox systems.”
  • Microsoft Fairlearn
    “The fairlearn project seeks to enable anyone involved in the development of artificial intelligence (AI) systems to assess their system’s fairness and mitigate the observed unfairness. The fairlearn repository contains a Python package and Jupyter notebooks with examples of usage.”
  • Ethics and Algorithms Toolkit by City and County of San Francisco Data Science Team
    A risk management framework for governments (and other people too!)
  • AI Fairness & Explainability 360 by IBM
    Open-source toolkits with case studies, code, anti-bias algorithms, tutorials, demos, and state-of-the-art explainability algorithms (white paper)
  • Aequitas by University of Chicago Center for Data Science and Public Policy
    “The Bias Report is powered by Aequitas, an open-source bias audit toolkit for machine learning developers, analysts, and policymakers to audit machine learning models for discrimination and bias, and make informed and equitable decisions around developing and deploying predictive risk-assessment tools.”
  • Design Ethically Toolkit by Kat Zhou (IBM)
    A library of exercises and resources to integrate ethical design into your practice.
  • Algorithmic Accountability Policy Toolkit by AI Now Institute
    “The following toolkit is intended to provide legal and policy advocates with a basic understanding of government use of algorithms, including a breakdown of key concepts and questions that may come up when engaging with this issue, an overview of existing research, and summaries of algorithmic systems currently used in government. This toolkit also includes resources for advocates interested in or currently engaged in work to uncover where algorithms are being used and to create transparency and accountability mechanisms.”
  • Lime by the University of Washington
    Open source toolkit “explaining the predictions of any machine learning classifier.”
  • PWC Responsible AI Toolkit
    “Our Responsible AI Toolkit is a suite of customizable frameworks, tools and processes designed to help you harness the power of AI in an ethical and responsible manner – from strategy through execution. With the Responsible AI toolkit, we’ll tailor our solutions to address your organisation’s unique business requirements and AI maturity.”
  • Algorithmic Equity Toolkit (AEKit) by ACLU Washington
    “The Algorithmic Equity Toolkit (AEKit for short) is a collection of four components designed to identify surveillance and decision-making technologies used by governments; make sense of how those technologies work; and pose questions about their impacts, effectiveness, and oversight.”
  • The MSW@USC Diversity Toolkit: A Guide to Discussing Identity, Power and Privilege
    “This toolkit is meant for anyone who feels there is a lack of productive discourse around issues of diversity and the role of identity in social relationships, both on a micro (individual) and macro (communal) level.”
  • Pymetrics Audit AI
    “audit-AI is a tool to measure and mitigate the effects of discriminatory patterns in training data and the predictions made by machine learning algorithms trained for the purposes of socially sensitive decision processes.”
  • World Economic Forum’s AI Board Toolkit
    “Empowering AI Leadership: An Oversight Toolkit for Boards of Directors. This resource for boards of directors consists of: an introduction; 12 modules intended to align with traditional board committees, working groups and oversight concerns; and a glossary of artificial intelligence (AI) terms.”
  • PROBAST: A tool to assess the quality, risk of bias, and applicability of prediction model studies
  • Dynamics of AI Principles by AI Ethics Lab
    “We decided to create the AI Principles Map to help understand the trends, common threads, and differences among numerous sets of principles published.”
  • The Box by AI Ethics Lab
    “The Box is designed to help you visualize the ethical strengths and weaknesses of a technology. Once the weaknesses are identified, solutions can be created!”
  • BLM Privacy Bot: Anonymize photos of BLM protesters by Stanford ML Group
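Several of the toolkits above (Fairlearn, Aequitas, AI Fairness 360) are built around group fairness metrics. As a rough sketch of the simplest of these — not the libraries' actual APIs — here is a plain-numpy demographic parity check; the predictions, group labels, and the informal "four-fifths" threshold mentioned in the comment are illustrative assumptions:

```python
import numpy as np

def selection_rates(y_pred, groups):
    """Fraction of positive predictions per demographic group."""
    return {g: float(np.mean(y_pred[groups == g])) for g in np.unique(groups)}

def demographic_parity_difference(y_pred, groups):
    """Largest gap in selection rates across groups (0 = parity)."""
    rates = selection_rates(y_pred, groups).values()
    return max(rates) - min(rates)

def disparate_impact_ratio(y_pred, groups):
    """Min/max ratio of selection rates; the informal 'four-fifths rule'
    flags ratios below 0.8 as potentially discriminatory."""
    rates = selection_rates(y_pred, groups).values()
    return min(rates) / max(rates)

# Toy binary predictions for two demographic groups (invented data)
y_pred = np.array([1, 1, 0, 1, 0, 0, 1, 0])
groups = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

print(demographic_parity_difference(y_pred, groups))  # 0.75 - 0.25 = 0.5
print(disparate_impact_ratio(y_pred, groups))         # 0.25 / 0.75 ≈ 0.333
```

The real toolkits go well beyond this — conditioning on ground truth (equalized odds), mitigating unfairness during training, and generating audit reports — but each of them computes per-group rates of this kind under the hood.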

Checklists

Principles

AI Ethics Courses and Certifications

  • Responsible AI Governance Badge Program by EqualAI
    “The EqualAI Badge© Program, in collaboration with the World Economic Forum, prepares senior executives at companies developing or using AI for critical functions to ensure their brand is known for its responsible and inclusive practices. By the end of the program, senior executives will learn the ‘How Tos’ of developing and maintaining responsible AI governance, will join an emerging community and network of like-minded senior executives, and will earn the EqualAI badge of certification for learning best practices for AI governance.”
  • The Ethics of AI by University of Helsinki
    “The Ethics of AI is a free online course created by the University of Helsinki. The course is for anyone who is interested in the ethical aspects of AI – we want to encourage people to learn what AI ethics means, what can and can’t be done to develop AI in an ethically sustainable way, and how to start thinking about AI from an ethical point of view.”
  • Ethics of AI: Safeguarding Humanity by MIT
    “Learn to navigate the ethical challenges inherent to AI development and implementation. Led by MIT thought leaders, this course will deepen your understanding of AI as you examine machine bias and other ethical risks, and assess your individual and corporate responsibilities. Over the course of three days, you’ll address the ethical aspects of AI deployment in your workplace—and gain a greater understanding of how to utilize AI in ways that benefit mankind.”
  • Certified Ethical Emerging Technologist (also on Coursera)
    “Over five courses, our AI founders, ethicists, and researchers will lead you through foundational ethical principles; industry standard ethical frameworks; ethical risk identification and mitigation; effective communication about ethical challenges; and the organizational governance required to create ethical, trusted, and inclusive data-driven technologies. When students complete all five courses, they will be ready to act as ethical leaders, prepared to bridge the gap between theory and practice.”
  • Artificial Intelligence Ethics in Action by LearnQuest (via Coursera)
    “AI Ethics research is an emerging field, and to prove our skills, we need to demonstrate our critical thinking and analytical ability. Since it’s not reasonable to jump into a full research paper with our newly founded skills, we will instead work on 3 projects that will demonstrate your ability to analyze ethical AI across a variety of topics and situations. These projects include all the skills you’ve learned in this AI Ethics Specialization.”
  • AI Ethics: Global Perspectives
    “A Collection of Lectures on the Ethical Implications of Data and Artificial Intelligence from Different Perspectives. New course modules will be released on a monthly basis.”
  • Ethics in AI for Policymakers in Latin America by Inter-American Development Bank
    “The course is aimed at public officials, the private sector, technicians, researchers, teachers, students and all those related to the processes of conceptualization, design, and implementation of an artificial intelligence system for decision-making in the public sector.”
    Note that this course is offered only in Spanish.

Oaths, Manifestoes, and Codes of Conduct

Policy Papers, White Papers, Statements, Reports

Newsletters/eMagazines

Other Resources
