On December 19th, 2018, two dozen women working on AI ethics at tech companies, non-profits, and industry analyst firms came together for a day to share their experiences and insights, and to brainstorm solutions to the big challenges we are facing.
This post is for those who are responsible for implementing AI systems but do not have a background in data science, statistics, or probability. The intention is to provide an approachable introduction to the key concepts needed to identify potential bias (error) in training data.
What kind of world do we want for ourselves, for those we love, and for future society? How do we, as organizations, employees, customers, and members of society, ensure that technology is in service of society and not the other way around?
In this article, I will focus on mechanisms for removing exclusion from your data and algorithms. Of all the interventions and actions we can take, this is the area advancing most rapidly.
This is part one of a two-part series on how to build ethics into AI. Part one focuses on cultivating an ethical culture in your company and on your team, as well as being transparent both within your company and externally.