A win with Amazon for AI ethics watchdogs

Amazon has faced a lot of flak over AI bias: its internal recruitment tool was scrapped because of its sexist outputs, and Rekognition, the company’s facial recognition tool, has been criticised for the bias inherent in its results. As people become more aware of AI bias, technology companies have developed tools to help data scientists debias datasets, monitor for bias that strays outside preset thresholds, and increase explainability and transparency. In a short period of time, debiasing has become coding “table stakes”, according to John C. Havens, lead for the IEEE Global Initiative on AI Ethics.

Debiasing, and the broader goal of “fairness”, is not only a technical issue. While there are algorithmic ways to measure fairness, certain combinations of fairness criteria cannot be achieved simultaneously: for example, unless two groups have identical base rates, a classifier generally cannot equalise false positive and false negative rates across them while also giving positive predictions that are equally reliable for both. So it’s left to humans to decide which fairness measure is appropriate given the goals of the system. As Michael Kearns and Aaron Roth say in The Ethical Algorithm: “These are the subjective, normative decisions that cannot (fruitfully) be made scientific, and in many ways they are the most important decisions.”
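A small worked example makes the tension concrete. In the sketch below, which uses invented counts purely for illustration, two groups are scored by a classifier with identical true positive and false positive rates, yet because the groups’ base rates differ, a positive prediction is noticeably less reliable for one group than for the other, so the two notions of fairness cannot both be satisfied.

```python
# Two groups judged by the same classifier with identical error rates
# (TPR = 0.8, FPR = 0.1) but different base rates of the positive class.
groups = {
    # name: (true positives, false negatives, false positives, true negatives)
    "group_a": (400, 100, 50, 450),   # base rate 50%
    "group_b": (160, 40, 80, 720),    # base rate 20%
}

for name, (tp, fn, fp, tn) in groups.items():
    tpr = tp / (tp + fn)   # 0.8 for both groups
    fpr = fp / (fp + tn)   # 0.1 for both groups
    ppv = tp / (tp + fp)   # how often a positive prediction is correct
    print(f"{name}: TPR={tpr:.2f}  FPR={fpr:.2f}  PPV={ppv:.2f}")

# PPV comes out around 0.89 for group_a but 0.67 for group_b: with unequal
# base rates, equal error rates and equally reliable positive predictions
# cannot hold at the same time.
```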

This view is echoed by Sandra Wachter of the Oxford Internet Institute, an expert on European law and AI discrimination. Her most recent work, ‘Why Fairness Cannot Be Automated: Bridging the Gap Between EU Non-Discrimination Law and AI’, puts forward important ideas on the role of humans versus machines in determining fairness, and specifically on how AI disrupts people’s ability to rely on intuition. Judges use intuition and common sense when assessing “contextual equality” to decide whether someone has been treated unfairly under the law; Wachter describes this agility of the EU legal system as a “feature, not a bug.” Courts generally “don’t like statistics,” because statistics can easily mislead and tend to skew the “equality of weapons,” handing the advantage to those who are better resourced. Common sense is part of the deal, but when discrimination is caused by algorithms that process data in multi-dimensional space, common sense can fall apart. Experts need technical measurements that help them navigate new and emergent grey areas.

Wachter et al. proposed a new test for ensuring fairness in algorithmic modelling and data-driven decisions, called ‘Conditional Demographic Disparity’ (CDD). The test is significant because it aligns with the approach courts across Europe take in applying non-discrimination law, and it can be used to help define fairness algorithmically. It helps users look for bias in their AI systems and is particularly relevant for those seeking to detect unintuitive and unintended biases, as well as heterogeneous, minority-based and intersectional discrimination.
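For a concrete sense of how the test works: demographic disparity compares a group’s share of rejections with its share of acceptances, and the conditional version repeats that calculation inside each subgroup of a conditioning attribute (the classic example being university department, as in the Berkeley admissions case) before averaging, weighted by subgroup size. The sketch below is a minimal pandas illustration of that idea; the column names and toy data are invented for the example, and it is not AWS’s or the authors’ implementation.

```python
import pandas as pd

def demographic_disparity(df, facet_col, facet_value, outcome_col, positive=1):
    """DD for one facet: the facet's share of rejections minus its share
    of acceptances. Positive values mean the facet is disadvantaged."""
    accepted = df[df[outcome_col] == positive]
    rejected = df[df[outcome_col] != positive]
    if len(accepted) == 0 or len(rejected) == 0:
        return 0.0
    share_of_rejected = (rejected[facet_col] == facet_value).mean()
    share_of_accepted = (accepted[facet_col] == facet_value).mean()
    return share_of_rejected - share_of_accepted

def conditional_demographic_disparity(df, facet_col, facet_value,
                                      outcome_col, group_col):
    """CDD: demographic disparity computed inside each stratum of group_col,
    then averaged with weights proportional to stratum size."""
    n = len(df)
    return sum(
        len(stratum) / n
        * demographic_disparity(stratum, facet_col, facet_value, outcome_col)
        for _, stratum in df.groupby(group_col)
    )

# Toy admissions-style data: does the disparity against women persist
# once we condition on the department applied to?
df = pd.DataFrame({
    "gender":     ["F", "F", "F", "F", "M", "M", "M", "M"],
    "department": ["A", "A", "B", "B", "A", "A", "B", "B"],
    "admitted":   [1, 0, 0, 1, 1, 1, 0, 1],
})
print(demographic_disparity(df, "gender", "F", "admitted"))
print(conditional_demographic_disparity(df, "gender", "F", "admitted", "department"))
```

Weighting by stratum size makes the metric robust to Simpson’s paradox: an overall disparity can shrink, or even reverse, once a legitimate conditioning attribute such as department is taken into account.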

It’s positive news to find that this work has been rapidly adopted by Amazon and included in the AI bias detection features of AWS’s SageMaker platform.
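For teams who want to try it, the sketch below shows roughly what requesting the conditional demographic disparity in labels (“CDDL”) metric looks like with the SageMaker Python SDK’s clarify module, where SageMaker’s bias detection lives. The bucket paths, column names and the choice of “department” as the conditioning attribute are placeholders, and parameter names may vary between SDK versions, so treat this as a sketch rather than a definitive recipe.

```python
import sagemaker
from sagemaker import clarify

session = sagemaker.Session()
role = sagemaker.get_execution_role()

# Processor that runs the bias analysis as a processing job.
processor = clarify.SageMakerClarifyProcessor(
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    sagemaker_session=session,
)

# Where the training data lives and where the bias report should be written.
data_config = clarify.DataConfig(
    s3_data_input_path="s3://my-bucket/admissions/train.csv",   # placeholder
    s3_output_path="s3://my-bucket/admissions/bias-report/",    # placeholder
    label="admitted",
    headers=["admitted", "gender", "department", "score"],
    dataset_type="text/csv",
)

# The facet of interest and the conditioning attribute used by CDD.
bias_config = clarify.BiasConfig(
    label_values_or_threshold=[1],
    facet_name="gender",
    facet_values_or_threshold=["F"],
    group_name="department",
)

# Request pre-training bias metrics, including CDDL, which needs group_name.
processor.run_pre_training_bias(
    data_config=data_config,
    data_bias_config=bias_config,
    methods=["CI", "DPL", "CDDL"],
)
```

The resulting report is written to the output path alongside the other requested metrics, so the conditional figure can be read next to the unconditional ones.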

Wachter herself seems pleasantly surprised by the win for champions of AI that doesn’t discriminate. “I’m incredibly excited to see our work being implemented by Amazon Web Services as part of their cloud computing offering. I’m particularly proud of the way our anti-bias test can be used to detect evidence of intersectional discrimination, which is an area that is often overlooked by developers of AI and machine learning systems.

“It’s less than a year since we published our paper, ‘Why Fairness Cannot Be Automated: Bridging the Gap Between EU Non-Discrimination Law and AI’, and it’s very rewarding to see our work having such impact.”
