The Metric Is Not the User
How Goodhart's Law loses the user in the metric

TL;DR:
When a metric becomes a target, it stops measuring what it was meant to. Teams optimize the number and lose the person behind it. Facebook's own researchers documented this in 2018 and were overruled. The engagement chart went up. The users were worse off.
At some point your team decided that a metric would tell you whether the product was working. Maybe it was daily active users. Maybe it was session length, or retention, or NPS, or engagement rate. It was a reasonable choice. The metric correlated with something real: people finding value, coming back, telling friends. You set it as the target. The team started working toward it. And somewhere in that process, without anyone noticing, you stopped designing for people and started designing for the number.
That is not a dramatic failure. It happens without announcement, in sprint reviews and roadmap prioritizations and A/B test readouts. The metric looks healthy. The users are not.
The number stops meaning what it meant
There is a principle in economics and social science that describes this exactly. When a measurement becomes a target, it stops being a good measurement. The economist Charles Goodhart identified this in 1975 in the context of monetary policy, but it applies to any system where people are rewarded for hitting a number. They hit the number. The underlying reality the number was supposed to track drifts away from it. The correlation breaks.
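The mechanism can be sketched in a few lines of Python. This is a toy simulation, not data from any real product: `value` stands for the real benefit users get, `metric` for the number on the dashboard, and `gaming` for metric-inflating behavior (nag notifications, slowed task flows) that adds engagement without adding value. Before the metric becomes a target, the two track each other; afterward, the correlation collapses.

```python
import random

random.seed(0)

def pearson(xs, ys):
    """Pearson correlation, computed by hand to stay dependency-free."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Phase 1: the metric is a passive measurement.
# Engagement is real value plus a little noise, so it tracks value well.
value = [random.gauss(10, 2) for _ in range(500)]
metric = [v + random.gauss(0, 1) for v in value]
before = pearson(value, metric)

# Phase 2: the metric becomes a target. The team ships changes that
# inflate the metric without adding value, and that slightly hurt users.
gaming = [random.gauss(5, 2) for _ in range(500)]
value_after = [v - 0.3 * g for v, g in zip(value, gaming)]
metric_after = [m + g for m, g in zip(metric, gaming)]
after = pearson(value_after, metric_after)

print(f"correlation while measuring: {before:.2f}")
print(f"correlation while targeting: {after:.2f}")
```

The numbers are arbitrary; the shape is not. Any behavior that moves the metric independently of the underlying value weakens the correlation the metric was chosen for, which is exactly the drift Goodhart described.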
The social scientist Donald Campbell put it more bluntly. He wrote that
“The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures.”
Donald T. Campbell, 1979
Campbell was writing about education policy and government programs, but the mechanism is identical in product teams. The pressure does not have to be malicious. Nobody needs to be cheating. The corruption happens through normal, well-intentioned behavior. Designers optimize screens for clicks because clicks are what gets measured. Engineers ship features that inflate session length because session length is on the dashboard. Product managers deprioritize work that helps users but doesn’t move the metric because the metric is how they get judged. Everyone is doing their job. The team is removing the user from the equation, one sprint at a time.
In his book The Tyranny of Metrics, Jerry Muller documents how this plays out across industries: hospitals gaming readmission rates, schools narrowing curriculum to what gets tested, police departments manipulating crime statistics. The pattern is always the same: the proxy replaces the thing it was supposed to represent, and the people responsible often don’t notice until the gap between the number and the reality becomes impossible to ignore.
What Facebook’s own researchers found
In 2018, Facebook’s internal research team produced a presentation that warned the company’s News Feed algorithm was exploiting what they called the human brain’s attraction to divisiveness. Facebook optimized the algorithm for engagement: reactions, comments, reshares. That was the metric. It worked. Engagement went up. So did outrage-driven content, because outrage drives reactions. So did misinformation, because misinformation spreads through reshares. The internal team proposed changes. Executives shelved the proposals, because implementing them would mean taking a hit on the engagement numbers.
This came out in 2021 when Frances Haugen, a former Facebook product manager, shared internal documents with the Wall Street Journal as part of the investigation known as the Facebook Files. The documents showed that Facebook’s own researchers had understood the problem, had documented it in internal memos, and had watched the company choose the metric over the user. Not out of ignorance. Out of the logic that the metric was the goal.
The researchers were not naive. They knew engagement and value were not the same thing. But the org had built its decisions around engagement, and changing that would mean questioning the framework that the whole business ran on. Goodhart’s Law does not just corrupt the number. It corrupts the judgment of the people watching the number, because they have spent months or years treating that number as evidence that the product is working.
Ask what the metric rewards that you didn’t intend
The problem is not that metrics are bad. You need something to measure. The problem is treating the metric as the user, rather than as one imperfect signal about the user. The moment a number becomes a target, it creates incentives for behavior that hits the number without serving the underlying goal. You need to find those incentives before your team does.
There is a check worth doing before any metric goes on a dashboard as a target. Ask: what is the worst way someone could hit this number? What behavior could maximize this metric while making the product worse for users in ways that are obvious to anyone not watching the dashboard? If you can answer that question, you have found the corruption that will happen. Not because your team is bad. Because that is how metrics work.
Call it the perverse incentive check. Run it out loud, in the room, with the people who will be working toward that number. If a designer could boost DAU by adding a notification that nags users into opening the app whether or not they wanted to, that is a perverse incentive. If an engineer could inflate session length by slowing down a task that users are trying to complete, that is a perverse incentive. The point is not to stop measuring. The point is to see the gap between the number and the person before the team falls into it.
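The check can be made concrete by writing it down before the metric goes on the dashboard. The sketch below is hypothetical: `MetricProposal` and its fields are illustrative names, not an existing tool. The rule it encodes is the one above: a metric is not ready to be a target until the team has named at least one perverse way to hit it, and attached a guardrail to each.

```python
from dataclasses import dataclass, field

@dataclass
class MetricProposal:
    """A worksheet for running the perverse incentive check out loud."""
    name: str
    proxy_for: str  # the real outcome the metric is supposed to stand in for
    perverse_incentives: list = field(default_factory=list)

    def add_incentive(self, behavior: str, guardrail: str = "") -> None:
        self.perverse_incentives.append((behavior, guardrail))

    def ready_to_target(self) -> bool:
        # Not ready until at least one perverse incentive is written down
        # AND every one of them has a guardrail attached.
        return bool(self.perverse_incentives) and all(
            guardrail for _, guardrail in self.perverse_incentives
        )

dau = MetricProposal("DAU", proxy_for="people choosing to come back")
dau.add_incentive("notifications that nag users into opening the app")
print(dau.ready_to_target())  # False: incentive named, no guardrail yet

dau.perverse_incentives[0] = (
    "notifications that nag users into opening the app",
    "count only sessions where the user completes a task",
)
print(dau.ready_to_target())  # True
```

The guardrail column is the point of the exercise: it forces the team to say, in advance, what they will measure alongside the target so the worst way to hit the number shows up somewhere.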
The signal is not the source
Metrics drift from reality fastest when teams stop talking to users. When the number is going up, there is less pressure to ask whether the experience is actually good. The dashboard provides a kind of false confidence: things look fine, therefore things are fine. Meanwhile the users are adapting. They are doing workarounds. They are tolerating friction. They are about to churn. None of that shows up on the engagement chart until it is already too late to be a warning.
The teams that avoid this are not the ones with better metrics. They are the ones that treat metrics as one input alongside direct observation of actual behavior. They run usability sessions even when the numbers look good. They read support tickets when retention is healthy. They ask users to walk them through a task even when completion rates are high. They do this because they know that what the number shows and what the user experiences can diverge for a long time before the divergence becomes measurable.
You stopped designing for people. You started designing for a number.

