
Unmasking AI: My Mission to Protect What Is Human in a World of Machines (Summary)

by Joy Buolamwini

As an MIT graduate student, Joy Buolamwini was working with facial analysis software. But the AI couldn't detect her face. After trying everything, she did something absurd: she put on a plain white mask. Instantly, the software recognized a face. This shocking moment revealed a dangerous truth: the algorithms shaping our world are often blind to Black women, and she was the person to prove it.

AI Has a 'Coded Gaze' Problem

Many AI systems are trained on datasets that are overwhelmingly white and male. This creates a 'coded gaze,' where the technology is significantly less accurate for women and people of color, rendering them invisible or misidentified.

Buolamwini's groundbreaking 'Gender Shades' study tested commercial AI systems from Microsoft, IBM, and Face++. The audit revealed error rates of less than 1% for lighter-skinned men, but for darker-skinned women, the error rates soared to as high as 34.7%.
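To make the idea of a disaggregated audit concrete, here is a minimal sketch (not the actual Gender Shades pipeline; the column names and toy data are hypothetical) showing how an overall error rate can look acceptable while hiding large gaps between intersectional subgroups.

```python
# Illustrative sketch only: disaggregating a classifier's error rate
# by gender and skin type, the core idea behind an audit like Gender Shades.
# The column names and the toy data below are hypothetical placeholders.
import pandas as pd

# One row per test image: the demographic labels used for disaggregation
# and whether the system's prediction was correct.
results = pd.DataFrame({
    "gender":    ["male", "male", "female", "female", "female", "male"],
    "skin_type": ["lighter", "lighter", "darker", "darker", "lighter", "darker"],
    "correct":   [True, True, False, True, True, True],
})

# The aggregate number alone hides the gap...
print("overall error rate:", 1 - results["correct"].mean())

# ...so break it down by the intersection of gender and skin type.
by_group = 1 - results.groupby(["gender", "skin_type"])["correct"].mean()
print(by_group.rename("error_rate"))
```

On a real benchmark, this kind of breakdown is what exposes the pattern the study reported: near-perfect performance for lighter-skinned men alongside dramatically higher error rates for darker-skinned women.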

Algorithmic Harm Isn't Hypothetical—It's Happening Now

The consequences of biased AI are not a future problem. Flawed algorithms are being used today in high-stakes decisions for hiring, loan applications, and criminal justice, leading to wrongful arrests and systemic discrimination.

The book highlights the case of Robert Williams, a Black man from Michigan who was wrongfully arrested in front of his wife and two young daughters. The sole evidence was a false match from a facial recognition system used by Detroit police, a real-world consequence of the inaccuracies Buolamwini had warned about.

We Can't Trust What We Can't Audit

Tech companies often treat their algorithms as proprietary black boxes, resisting independent scrutiny. Buolamwini argues that accountability requires external, independent audits to ensure AI systems are safe, fair, and effective before they are deployed on the public.

After Buolamwini published her research, IBM and Microsoft were forced to acknowledge the flaws and work to improve their systems. However, Amazon initially dismissed the findings, demonstrating the corporate resistance that makes independent auditing and regulation essential for public safety.

Inclusivity Isn't Just a Feature, It's the Foundation

Fixing biased AI requires more than adding diverse data. It demands a fundamental shift toward building technology with empathy, foresight, and a focus on potential harms, centered on the people Buolamwini calls the 'excoded': those who are harmed or excluded by AI systems.

Instead of asking only 'What can this technology do?', developers should ask 'How could this technology be used to harm people?'. For example, when building a facial recognition system, they should anticipate who could be excoded by its use in authoritarian surveillance or racial profiling, and build in safeguards from the start.
