Technology · Artificial Intelligence · Critical Thinking

AI Snake Oil: What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference (Summary)

by Arvind Narayanan & Sayash Kapoor

An AI model was trained to detect pneumonia from chest X-rays with stunning accuracy. Researchers were thrilled until they discovered its secret: the AI wasn't looking at lungs at all. It had learned to identify the metal token that one hospital placed on patients' chests during scans. Because that hospital treated sicker patients, the AI simply learned to associate the token with a higher probability of pneumonia. It wasn't practicing medicine; it was exploiting a clever, and useless, shortcut.

AI Doesn't Eliminate Bias, It Automates It

Contrary to the belief that AI offers objective decision-making, it often just learns from and amplifies existing human biases found in historical data, applying them with ruthless, large-scale efficiency.

Amazon created an experimental recruiting tool that analyzed resumes to find top candidates. Since it was trained on a decade of company data where most applicants were male, the AI taught itself to penalize resumes containing the word 'women's,' such as 'captain of the women's chess club.'
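To see the mechanism concretely, here is a minimal sketch with synthetic data (not Amazon's actual system): a text classifier trained on historically skewed hiring decisions ends up assigning a negative weight to a token like 'women', even though gender is never an explicit input.

```python
# Toy illustration of bias amplification (invented data, not Amazon's system).
# A classifier trained on historically skewed outcomes learns to penalize a
# token like "women's" purely because of the pattern in the past labels.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical resumes and past decisions (1 = advanced, 0 = rejected).
# The skew mirrors a decade of mostly male hires, not candidate quality.
resumes = [
    "software engineer, chess club captain",
    "software engineer, rugby team",
    "software engineer, captain of the women's chess club",
    "data analyst, women's coding society",
    "data analyst, debate team",
    "software engineer, robotics club",
]
past_decisions = [1, 1, 0, 0, 1, 1]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, past_decisions)

# Inspect learned weights: the token "women" picks up a negative coefficient
# simply because it only appears in historically rejected resumes.
weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0]))
print(sorted(weights.items(), key=lambda kv: kv[1])[:3])
```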

AI Fakes Understanding with Clever Shortcuts

Many impressive AI feats are not the result of genuine understanding but of the 'Clever Hans' effect, where the AI finds a simple, unintended correlation in the data to 'cheat' on a task without learning the actual skill.

An AI system tasked with distinguishing wolves from huskies in pictures achieved near-perfect accuracy. It was later revealed that the AI wasn't looking at the animals at all. It had simply learned that all the pictures of wolves in its training data had snow in the background, so it became a highly effective 'snow detector.'
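The same shortcut dynamic can be reproduced on toy data. The sketch below (synthetic features, not the original husky/wolf study) gives a classifier a spurious 'snow in background' cue that happens to match the label during training; the model leans on the cue and collapses the moment the correlation breaks.

```python
# Toy demonstration of the 'Clever Hans' shortcut on synthetic features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200

# Feature 0: a weak, noisy "animal appearance" signal.
# Feature 1: "snow in background" -- spuriously identical to the label in training.
labels_train = rng.integers(0, 2, n)             # 1 = wolf, 0 = husky
appearance = labels_train + rng.normal(0, 2.0, n)
snow = labels_train.astype(float)                # snow iff wolf, by accident of data collection
X_train = np.column_stack([appearance, snow])

model = LogisticRegression().fit(X_train, labels_train)
print("learned weights [appearance, snow]:", model.coef_[0])

# Test set where the shortcut is broken: huskies in snow, wolves on grass.
labels_test = rng.integers(0, 2, n)
appearance_t = labels_test + rng.normal(0, 2.0, n)
snow_t = 1.0 - labels_test                       # correlation reversed
X_test = np.column_stack([appearance_t, snow_t])
print("accuracy once the shortcut breaks:", model.score(X_test, labels_test))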

Most 'AI' Is Just Prediction, Not Intelligence

The authors argue that nearly all modern AI is essentially a prediction machine. It uses data to fill in missing information, but it doesn't possess creativity, consciousness, or true reasoning. Understanding this distinction is key to demystifying its capabilities.

When Netflix recommends a movie, it isn't 'thinking' about your cinematic tastes. It's executing a prediction task: 'Users who watched Movie A and B also tended to watch Movie C. This user watched A and B, so we predict they will want to watch C.' It's sophisticated pattern-matching, not human-like cognition.
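A minimal sketch of that prediction framing (hypothetical viewing data, not Netflix's actual algorithm): count which titles are watched together, then rank unseen titles by how often they co-occur with what a user has already watched.

```python
# Item-co-occurrence recommender sketch: "users who watched A and B also
# watched C" reduced to counting and ranking. Data and titles are invented.
from collections import Counter
from itertools import combinations

watch_history = {
    "user1": {"Movie A", "Movie B", "Movie C"},
    "user2": {"Movie A", "Movie B", "Movie C"},
    "user3": {"Movie A", "Movie C"},
    "user4": {"Movie B", "Movie D"},
}

# Count how often each ordered pair of titles appears in the same history.
co_counts = Counter()
for titles in watch_history.values():
    for pair in combinations(sorted(titles), 2):
        co_counts[pair] += 1
        co_counts[pair[::-1]] += 1

def recommend(seen, k=1):
    """Score each unseen title by its co-occurrence with the user's history."""
    scores = Counter()
    for title in seen:
        for (a, b), count in co_counts.items():
            if a == title and b not in seen:
                scores[b] += count
    return [title for title, _ in scores.most_common(k)]

# A new user who watched A and B is predicted to want C, purely from counting.
print(recommend({"Movie A", "Movie B"}))   # -> ['Movie C']
```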

The Hype Is Fueled by a 'Narrative-to-Demo' Pipeline

AI companies generate excitement by creating a powerful narrative (e.g., 'our AI can read minds'), then producing a highly polished and constrained demo that appears to validate it. This demo is often brittle and fails outside of perfect conditions, but it's enough to drive media coverage and investment.

Demos of AI 'mind-reading' from fMRI scans have wowed the public. However, researchers found many of these systems were just picking up on tiny, unconscious head movements that correlated with certain thoughts, rather than actually decoding brain activity.
