Inaugural Women in AI Ethics Summit: What happens when you bring together 30 women fighting for human rights in AI?
On December 19th, 2018, two dozen women working on AI ethics at tech companies, non-profits, and industry analyst firms came together for a day to share their experiences and insights and to brainstorm solutions to the big challenges we are facing.
We’ve seen far too many examples of artificial intelligence (AI) systems making harmful decisions or recommendations based on race, gender, or other protected classes of data (e.g., in interest rates, predictive policing, lending apps, ride-hailing services, AI assistants). Increasingly, we are seeing governments in the US, Denmark, China, and elsewhere use AI to make decisions about criminal sentencing, who gets access to medical benefits, which children are most at risk of abuse, and who is eligible for schools, jobs, and government contracts, slowly moving toward “algocracy” (rule by algorithms). The causes of bias and harm are complex and multifaceted, but there is a large group of women fighting to ensure that AI is created and implemented to help society rather than harm it.
On December 19th, 2018, 25 of those women from tech companies (e.g., Salesforce, Workday, Intel, IBM, Google, Amazon, Socos Labs), non-profits (e.g., Markkula Center for Applied Ethics, Omidyar Network, AI4All, Stanford’s Global Digital Policy Incubator, BSR), and analyst firms (Altimeter) came together for a day to share their experiences and insights and to brainstorm solutions to the big challenges we are facing.
Standing on the shoulders of giants
The day began with a series of lightning talks sharing practical experiences that others can leverage in their own work. I gave a presentation on my lessons learned standing up a new position, Architect of Ethical AI Practice, at Salesforce.
Vivienne Ming, Socos Labs (@neuraltheory)
Dr. Ming spoke on many of the themes highlighted in her recent interview in The Guardian. She shared how she and others work on developing AI for good (e.g., treating diabetes, predicting bipolar depression), but that this work always comes with frightening and complex ethical questions. In her words: “Technology is only a tool. It is an amazing tool, and one that has had, on balance, a profoundly positive impact on the world. But it can only ever reflect our values back at us. …seemingly innocent technologies can have surprisingly negative effects, such as inequality, capture effects, and instability in social networks. In the end, technology should never simply make us feel good or ease us through our day; it must always challenge us. When we turn technology off we should be better people than when we turned it on.”
Tess Posner, AI4All (@tessposner)
Tess gave us a personalized version of the talk she delivered at the Unintended Consequences of Technology event. She shared some disappointing statistics on the diversity crisis in AI, drawn from multiple sources including Element AI (2018), NSF Science & Engineering Indicators (2018), Kapor, ASU, Pivotal Ventures, and AI Index:
- Only 12% of AI researchers around the world are women
- 71% of applicants for AI jobs are male
- Just 7% of CS bachelor’s degrees in the U.S. go to women of color
- Nearly 80% of AI faculty in higher-ed institutions are male
- 60% of K-12 schools in the US don’t offer computer science at all
AI4All wants to change this by expanding the diversity pipeline, increasing awareness of and access to AI education, and conducting research in AI for Good applications.
Irina Raicu, Markkula Center for Applied Ethics, Santa Clara University (@iEthics)
I saw a lightning talk that Irina gave at the Partnership on AI (PAI) All Partners Meeting last month and loved it, but it went by too quickly and we didn’t have the opportunity to ask questions. That wasn’t the case at the Summit! She talked about the need for “AI-Free Zones”: “The public conversation is full of hype and misinformation about what algorithms can do ‘better’ than humans can. Are there problems or areas of human life in which automated decision-making will not help, and might, in fact, cause more harm? If so, what might those be, and how should we improve the conversation?” Potential areas include parenting, relationships, and religion/faith.
Susan Etlinger, Altimeter Group (@setlinger)
Susan, who recently published an AI Maturity Playbook, shared with us the Five AI Trends to Watch for in 2019:
- How we interact: from screens to senses
- How we decide: from business rules to probabilities
- How we innovate: from data analytics to data science to data engineering
- How we lead: from expertise-driven to data-driven
- How we behave: from “Move fast and break things” to “Ethical AI”
She also made predictions about three possible outcomes for 2019:
- Jumping the Shark: “Inspired by Microsoft, Salesforce, IBM and others, more companies issue AI ethics principles–but stop there. Virtue-signaling in place of actual progress breeds industry and media cynicism and inevitable backlash.”
- Half measures: “Fig-Leaf. A few companies, nonprofits and academic institutions do the heavy lifting on AI ethics, and companies adopt bits and pieces of their frameworks and tools. Not much happens, but it feels like progress. Everybody declares victory and nothing changes.”
- “The Big Dig”: “2018 turns out to have been the turning-point for real progress toward ethical AI. Companies like Microsoft, IBM, and Intel scale AI efforts and provide accessible, useful frameworks that other businesses adopt, along with frameworks from AI Now, MIT and others.”
Priya Vijayarajendran, IBM (@vcPriya)
Vocabulary is always a topic of discussion in the AI Ethics world. Words like “fair,” “bias,” “transparent,” and “ethical” can mean different things to different people. Priya highlighted the difference between two concepts that are important to distinguish when discussing AI fairness or bias checking tools:
- Fairness metrics: These can be used to check for bias in machine learning workflows.
- Bias mitigators: These overcome bias in the workflow once it is discovered, to produce a fairer outcome.
IBM has an impressive set of open source AI fairness resources (the AI Fairness 360 Open Source Toolkit), including tools to check for bias and to overcome it. However, tools alone are not enough. “AI is not only a technical problem.” We must also change the incentive structures within our organizations. Most for-profit companies incentivize employees based on revenue, clicks, user adoption, etc., which can run counter to making difficult but ethical decisions. As a group we discussed what better incentives might look like (e.g., rewarding a team when a project is canceled due to concerns about societal impact, or rewarding a sales rep who identifies a potential customer that would violate the company’s values).
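To make the metric/mitigator distinction concrete, here is a minimal sketch using IBM’s open source AI Fairness 360 toolkit (pip install aif360). The tiny dataset, the choice of ‘sex’ as the protected attribute, and the Reweighing mitigator are illustrative assumptions, not a prescription:

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Hypothetical toy data: 'sex' is the protected attribute (1 = privileged
# group) and 'label' is the favorable outcome (e.g., 1 = loan approved).
df = pd.DataFrame({
    "sex":   [1, 1, 1, 1, 0, 0, 0, 0],
    "label": [1, 1, 1, 0, 1, 0, 0, 0],
})
dataset = BinaryLabelDataset(
    df=df,
    label_names=["label"],
    protected_attribute_names=["sex"],
    favorable_label=1,
    unfavorable_label=0,
)
privileged = [{"sex": 1}]
unprivileged = [{"sex": 0}]

# Fairness metric: measure bias before doing anything about it.
# Disparate impact = P(favorable | unprivileged) / P(favorable | privileged);
# 1.0 is perfectly fair, and values below ~0.8 are a common red flag.
before = BinaryLabelDatasetMetric(
    dataset, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("Disparate impact before mitigation:", before.disparate_impact())

# Bias mitigator: Reweighing assigns instance weights so that the favorable
# outcome becomes statistically independent of the protected attribute.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
mitigated = rw.fit_transform(dataset)

after = BinaryLabelDatasetMetric(
    mitigated, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("Disparate impact after mitigation:", after.disparate_impact())
```

Reweighing is just one pre-processing mitigator; AIF360 also includes in-processing and post-processing algorithms, along with many more fairness metrics than the one shown here.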
Chloe Autio (@ChloeAutio), Heather Patterson (@h2pi), and Iman Saleh (@iman_saleh), Intel
Chloe presented on behalf of her two colleagues, Heather and Iman, who were sadly out sick. They shared best practices for integrating AI ethics into the product life cycle, as well as some of the tools and capabilities to ensure AI fairness by detecting and fixing bias:
- Generate interest and attention from leadership across organizations
- Situate human stories within a broader context
- Personalize it: Make a case for prioritizing a particular set of ethical problems
- Decide on a vision and map out a path to success
- Test with trusted colleagues first and then iterate
Intel offers many courses on AI ethics for data scientists and non-data scientists alike. A group discussion followed about how to make training courses engaging and accessible given differences in learners’ existing knowledge, learning styles, time, and location. How do you know whether someone retained the training and is applying it (i.e., what’s the impact of the course)?
Driving change in your organization
At the end of the day, we brainstormed potential ways one might drive change in one’s organization. Not all of them make sense for every organization, and they shouldn’t all be attempted at once, but this is a great list to draw ideas from!
- Confidential employee surveys to identify teams/areas where things are going well and those where they are not. What is working or not working on these teams?
- Measure actions in terms of company values or ethical principles, not “ethics,” and be clear in your vocabulary so everyone is using the same measuring stick.
- Identify how addressing ethical concerns impacts the bottom line
- Write a press release and FAQ at the start of a project to imagine good and bad scenarios. Help teams understand the HUMAN and societal impact of what they are building.
- Conduct ethical pre- and post-mortems to identify potential unintended consequences and use cases, and how to mitigate them
- Advocate for an opportunity for individual contributors to present to board members
- Create Ethical Red Teams to identify unintended consequences or use cases that team members too close to a project/product might not be able to see. Treat ethical holes with the same priority as security holes. If there aren’t enough resources to create a dedicated team, rotate the advisory role among team members each release.
- Get an executive to sponsor ethical efforts (e.g., training, Red Teams) and communicate their support to the entire company
I couldn’t be happier with how the first Women in AI Ethics Summit went and I look forward to many more to come! If you want to participate in future AI Ethics Summits, please let me know!
Acknowledgements
A special thank you to Mia Dand at Lighthouse for raising everyone’s awareness of the brilliant women in AI Ethics, and to Danielle Cass, Director of Ethical AI at Workday, for suggesting this event and recruiting many of the brilliant women in the room! And thank you to all of the women who joined us to share their ideas, experience, energy, and light!
- Michelle Carney, Amazon Music
- Hannah Darnton, BSR (Business for Social Responsibility)
- Kana Hammon, Omidyar Network
- Emily Witt, Salesforce
- Bulbul Gupta, Socos Labs
- Shannon Vallor, Markkula Center for Applied Ethics, Santa Clara University
- Roya Pakzad, Stanford’s Global Digital Policy Incubator
- Allison Woodruff, Google
- Barbara Cosgrove, Workday
- Katharine Bierce, Salesforce.org
- Yakaira Nunez, Salesforce
- Susan Etlinger, Altimeter Group
- Chloe Autio, Intel
- Iman Saleh, Intel
- Heather M. Patterson, Intel
- Tess Posner, AI4All
- Priya Vijayarajendran, IBM
- Irina Raicu, Markkula Center for Applied Ethics, Santa Clara University
- Vivienne Ming, Socos Labs