Your Users Are Lying to You
Why stated intentions are a weak predictor of actual behavior, and what to do instead

TL;DR:
Users say they want features they'll never use. They claim they'll pay for premium but never upgrade. They're not lying intentionally—they just suck at predicting their own behavior. Watch what they do, not what they say.
The most dangerous thing your users ever told you was the truth. They sat across from you and said they would use it every day. They said they needed exactly this. They said the current solution was broken and yours would fix it. They were not lying. They meant every word. And then they did not do any of it, because meaning something and doing something are not the same problem, and you built a product for the first one.
This is where most design research fails. Not in the execution. In the assumption underneath it. When you ask someone what they want, what they need, what they will do, you get their intentions. Intentions are not behavior. They are a story people tell about who they plan to be. The product you build will meet the person they actually are. Those two people are further apart than your research will ever show you.
The Gap Has a Name and a Size
Psychologists have been studying the distance between what people say and what they do for decades. The finding is consistent across hundreds of studies, thousands of participants, and dozens of behavioral domains: people fail to do the things they intend to do.

Paschal Sheeran at the University of North Carolina ran a meta-analysis of ten previous meta-analyses, covering 422 separate studies. He found that across research on exercise, condom use, cancer screening, and other health behaviors, the median proportion of people with positive intentions who still failed to perform the behavior was 47 percent. Nearly half. People who said they would do the thing. People who meant it. Forty-seven percent of them did not.

That number comes from health research, but the mechanism is not specific to health. It shows up wherever intentions are measured and behavior is tracked afterward. Sheeran and Webb, writing in 2016, put it plainly in the abstract of their paper: “Bitter personal experience and meta-analysis converge on the conclusion that people do not always do the things that they intend to do.” They also found, reviewing experimental studies that tried to change behavior by changing intentions, that a medium-to-large change in what people intended produced only a small-to-medium change in what people did.

Think about what that means for a product built on user research. You change the intention. You do not change the behavior.

The reason this happens is not mysterious. Intentions are formed in calm conditions: a quiet room, a future-oriented mindset, no friction, no competing demands. Behavior happens in the real world, where someone is tired, distracted, running late, choosing between your product and six other things demanding attention. The person in your interview is not the person who opens your app at 10pm after a long day. Same name. Same intentions. Different person in the moment that matters.
Why Research Makes This Worse
There is a second mechanism layered on top of the first. When people answer questions in a research context, they are not just reporting their preferences. They are also managing how they appear. Social desirability bias is the well-documented tendency for people to give answers that make them look good: more health-conscious, more engaged, more capable, more diligent than they are. It is not deliberate dishonesty. It is the ordinary social pressure of being watched and evaluated, even when the evaluator says the research is anonymous.

The person sitting across from you does not want to look like someone who gives up on new habits after a week. So they do not tell you they will. They tell you the version of themselves they are working toward. You are not getting data. You are getting aspiration dressed up as data.

Put these two things together, the intention-behavior gap and social desirability, and you have a research method that overestimates how much your users will engage, how often they will return, and how deeply they will adopt the behavior your product is designed around. The bias is not random noise. It points in one direction: your users will tell you they will do more than they do.
Forty Million Fitness Trackers in a Drawer
In 2011, Jawbone launched the UP wristband. The pitch was behavior change: track your steps, track your sleep, understand your patterns, build better habits. The research supported it. People said they wanted to be more active. They said they wanted accountability. They said they wanted a product that would help them see their behavior clearly and change it. Sales were strong in the first year.

Then the behavior data arrived. Endeavour Partners, a mobile strategy firm, published research in 2014 showing that more than half of Americans who had ever owned a modern activity tracker no longer used it. A third had abandoned theirs within six months of buying it. Jawbone stopped producing fitness trackers. The company filed for liquidation in July 2017.

The users were not lying when they sat in the research sessions. They wanted the thing they described. But wanting a habit and building one are different problems, and Jawbone built a product for the first problem while the second one killed the company. The product did exactly what the research said users wanted. It tracked their steps. It showed them their sleep. It nudged them toward better behavior. And somewhere around week six, they stopped caring, stopped wearing it, stopped opening the app. The behavior they described in research sessions never arrived.

This pattern is not unique to wearables. It shows up in productivity apps downloaded and never opened, in premium tiers nobody upgrades to despite everyone saying they would, in features that took months to build and are used by three percent of the user base. The gap between stated intent and actual behavior is not a product failure. It is the baseline condition you are designing into.
What to Do Instead
The answer is not to stop doing user research. It is to treat what people tell you as a hypothesis about behavior, not a report of it. When a user says they will do something, test whether they will, before you build the product around the assumption. This does not require a full-scale experiment. A prototype, a brief trial, a low-friction commitment that costs the user something real. Signing up for a waitlist is easier than paying. Paying is easier than coming back three days in a row. Watch which ones they do.

And when you cannot run a behavioral test, run a friction audit. Before you build for a stated need, spend twenty minutes listing every real-world obstacle between the user’s intention and the behavior your product requires. What is standing between them and doing the thing right now? Time, attention, habit, competing priorities, skill gaps, setup friction. If that list is long, the stated intention is a weak predictor. The longer the list, the less you should trust what you heard in the interview room.

Ask different questions too. Instead of asking what someone wants, ask what they have tried before and what happened. Ask what got in the way. Ask what made them stop. Past behavior is not a perfect predictor, but it is a far better one than stated intention. The person who tried something similar last year and quit in three weeks is telling you something no interview about future intent ever can.

The research session gives you the aspirational user. You need the actual one.
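The escalating-commitment test can be sketched as a simple funnel over behavioral event logs: count how many users climb each rung, from the cheapest commitment to the most expensive. This is a minimal illustration, not a real analytics pipeline; the event names (joined_waitlist, paid, session_open) and the data shape are assumptions invented for the example.

```python
# Hypothetical sketch of a "commitment ladder" funnel over event logs.
# Event names and the (user_id, event_name, date) shape are assumptions.
from collections import defaultdict
from datetime import date

# Escalating commitments, cheapest first. Stated intent predicts the
# first rung far better than it predicts the last.
LADDER = ["joined_waitlist", "paid", "returned_3_days"]

def returned_n_consecutive_days(days: list[date], n: int = 3) -> bool:
    """True if the user was active on n consecutive calendar days."""
    ordinals = sorted({d.toordinal() for d in days})
    streak = 1
    for prev, cur in zip(ordinals, ordinals[1:]):
        streak = streak + 1 if cur - prev == 1 else 1
        if streak >= n:
            return True
    return n <= 1 and bool(ordinals)

def ladder_funnel(events: list[tuple[str, str, date]]) -> dict[str, int]:
    """Count distinct users who reached each rung of the ladder."""
    by_user: dict[str, dict[str, list[date]]] = defaultdict(
        lambda: defaultdict(list)
    )
    for user, name, day in events:
        by_user[user][name].append(day)

    counts = {rung: 0 for rung in LADDER}
    for user, named in by_user.items():
        if named["joined_waitlist"]:
            counts["joined_waitlist"] += 1
        if named["paid"]:
            counts["paid"] += 1
        if returned_n_consecutive_days(named["session_open"]):
            counts["returned_3_days"] += 1
    return counts
```

A steep drop-off between rungs is the intention-behavior gap made visible: everyone joins the waitlist, fewer pay, and fewer still come back three days running.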
The Person Who Shows Up
Your user told you they would open the app every morning. They meant it. The problem is that meaning something at 2pm in a research session says almost nothing about what they will do at 7am when the alarm goes off and the routine has not yet formed. Design for the person who shows up. Not the person who intends to.

