Did you know that artificial intelligence can lock in bias and discrimination? This is a particular problem for government agencies entrusted to serve populations fairly. On June 20th, REI Systems and Johns Hopkins University (JHU) hosted a Government Analytics Breakfast (GAB) Forum to explore bias in artificial intelligence, and what to do about it.
The GAB Forum events bring together professionals from academia, government, and industry to discuss successes, problems, techniques, and lessons learned around data-driven decision making in the public sector. REI is pleased to partner with JHU’s Program in Government Analytics to organize the GAB Forum series.
At the June GAB Forum, speakers Miriam McKinney and Andrew Nicklin, both from the JHU Centers for Civic Impact (formerly known as the Center for Government Excellence, or GovEx), discussed “Artificial Intelligence: How Algorithms Manifest Our Own Biases – Reducing Harm by Mitigating Risks.”
McKinney presented examples of instances in which algorithms have hurt, more than helped, the agencies that use them. Adverse media attention highlights the problem, as illustrated by headlines the two presenters shared, such as “Amazon scraps secret AI recruiting tool that shows bias against women” and “Google photo tags two African-Americans as gorillas through facial recognition software.” McKinney asserted that all people have bias; therefore all data have bias, and therefore all algorithms have bias. She then posed the question: how do you recognize bias, and minimize or avoid it, to ensure that your algorithm won’t land your agency in the headlines?
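The presenters did not walk through code, but one common way to start recognizing bias in an algorithm’s output is to compare outcome rates across demographic groups. The sketch below is illustrative only, using made-up screening data and a simple “disparate impact ratio” (the lowest group’s selection rate divided by the highest, with ratios below 0.8 often treated as a warning sign under the so-called four-fifths rule); the function names and data are hypothetical, not part of the toolkit discussed at the Forum.

```python
from collections import Counter

def selection_rates(outcomes):
    """Compute the positive-outcome rate for each group.

    `outcomes` is a list of (group, selected) pairs, where `selected`
    is True if the algorithm recommended the candidate.
    """
    totals, selected = Counter(), Counter()
    for group, picked in outcomes:
        totals[group] += 1
        if picked:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest group selection rate to the highest.

    Ratios below 0.8 are commonly flagged as potential adverse impact.
    """
    return min(rates.values()) / max(rates.values())

# Hypothetical screening results: 6 of 10 candidates from group A
# were selected, but only 3 of 10 from group B.
data = ([("A", True)] * 6 + [("A", False)] * 4
        + [("B", True)] * 3 + [("B", False)] * 7)

rates = selection_rates(data)          # {"A": 0.6, "B": 0.3}
ratio = disparate_impact_ratio(rates)  # 0.3 / 0.6 = 0.5, below 0.8
```

A check like this will not catch every kind of bias, but it makes one concrete question measurable: does the algorithm treat comparable groups comparably?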
McKinney and Nicklin of GovEx, alongside the City and County of San Francisco, the Harvard Kennedy School’s DataSmart initiative, and Data Community DC, collaborated on a practical toolkit to help organizations understand the implications of using an algorithm, clearly articulate the potential risks, and identify ways to mitigate those risks – the Ethics & Algorithms Toolkit. The toolkit is broken into two main parts: in Part 1, Assessing Algorithm Risk, agencies answer a series of questions that lead them to Part 2, Strategies.
The team believes that by using risk management, a tool with which many governments are already familiar, agencies can identify and quantify levels of risk and pinpoint specific actions to take, keeping an agency in the headlines for only the right reasons. For more information, or to try out the toolkit, visit the Ethics & Algorithms Toolkit website.
REI Systems is proud to be at the forefront of the government’s forays into artificial intelligence, as both a facilitator and an IT specialist. If you have questions about REI Systems, or believe you have a topic of interest for the next GAB Forum, please email firstname.lastname@example.org.