Why It’s So Easy to Overestimate How Much You Know About a Topic

Based on years and years of writing about science and studying psychology, here is my Grand Unified Theory for Why It’s So Easy to Overestimate Our Knowledge of a Topic. Let’s say you want to learn about psychology/architecture/physiology. First, that’s easy! We’re all lifelong students of behavior/buildings/bodies. So you’re starting here:

Then you start reading and learning. Maybe you’re reading lots of blogs. Some articles. Books. And after all of this, you’re starting to see the same research studies and terms being used and repeated, over and over. You know about the Asch studies on conformity. The Stanford prison experiment. The marshmallow test. You’ve seen a lot of the same cognitive biases, over and over. Over and over. OR: Maybe you’ve been consuming a lot of blogs/articles/podcasts on this topic anyway, just because the things you’re interested in bring up this sort of thing all the time.

So that’s where a lot of well-read amateurs are. This is the Dunning-Kruger effect: “an illusory superiority that comes from the inability of people to recognize their lack of ability.”

You don’t know what you don’t know. It’s easy for a well-read amateur, a weekend warrior, to be unaware of their blind spots for a few reasons. First, most people compare themselves to people who know less than they do. This is called a downward social comparison; it’s a great way to make yourself feel better, but it also prevents you from forming a more objective idea of how much you know. Second, if you’re getting by just fine with the information you have—interjecting with “there’s actually a name for that cognitive bias” at parties—then there’s little incentive to learn more. There’s also little incentive to realize the value in learning more.

So, by this point, you feel very smart and well-read.


When my first book came out, I heard a lot of people say “so, I’ll see you on TED/Oprah soon!” In fact, hundreds of thousands of books are published each year, and maybe a dozen get the TED/Oprah treatment. But if your attention is focused only on those outlets—and you eventually see the same authors and topics pop up—you might start to think that you have a good feel for how things work in publishing. Or, if you repeatedly hear Taylor Swift and Billie Eilish, you might think that’s all pop music is. Social science studies are usually just cultural biases presented with an air of objectivity—stories that sound good and reinforce the prevailing wisdom—and they spread to the public in the same way as music or books.

It starts with the creation of a study. We’ll use wine to symbolize a study, because any time a study makes it through peer review is a good time to test one’s tolerance for booze:

On its own, a single study will get published in a journal, maybe an A-list journal like the Journal of Personality and Social Psychology, the Journal of Experimental Social Psychology, the Journal of Health Psychology, or Psychological Science. If a study is lively or related to something in the news, psychology writers might help spread it out into the universe via a slightly nerdy technical website or publication, like so:

Depending on what journalists and editors see and find interesting, the media attention might stop there. But what’s in the glass is always easier to read than what’s in the bottle, so it might spread a little more. Or it might keep going, getting spread out to more places. Eventually, you start getting a “superstar” effect, where it’s everywhere. This is your marshmallow study, your Stanford prison experiment, your Asch conformity studies: they’re the Taylor Swift of psychology experiments. They’re like memes: you can’t get away from them.

Because you know so many of these and keep seeing them everywhere, you feel fine filling in your square like this:

If you’re just looking in the wine glasses (that is, if you’re not reading every single journal), you’re missing out on way more than you know. What don’t you see? All of the scientific knowledge, studies, and information that was never written up at all:

Some of this wasn’t published as a study: researchers collect tons of unused data. Labs, funding, timelines, deadlines—all of these things are immensely complicated, and lots of information gets lost in the process. Perhaps a researcher tried an experiment several times, but didn’t get a statistically significant result. There currently isn’t anywhere to publish information that would ultimately be helpful: “we did this, but nothing happened. Let this be a lesson to other labs, it’s a waste of time!” (As Yuen Yiu writes, “How long would it have taken Edison to invent the lightbulb if he and his team of workers hadn’t kept track of all the failures?”)

Sometimes a study gets published in an obscure journal or doesn’t get picked up. Sometimes there’s just too much damn information to write about. Things happen. We know from the world of music that there’s no direct correlation between popularity and quality. But it’s tempting to say “if it was really good, someone must have written about it!” This is called the inherence heuristic. We see patterns, and want to believe that there’s something underlying and essential explaining those patterns—when in fact it could just be a random or external explanation:

For example, we see girls wear pink, and we want to think that pink is fundamentally a feminine color—but the “Hints on home dress-making” section of the November 1890 issue of the Ladies’ Home Journal advised “blue for girls and pink for boys, when a color is wished.” (How the colors came to be reversed is a long story involving marketing.)

The point? The idea that “this information can’t possibly be true/useful—I haven’t seen it anywhere else” is pretty widespread when it comes to psychology. Like any other aspect of life, a lot of what ends up getting published comes down to money and ego. (More later!)


No, you don’t. Here’s a visual primer on why people without PhDs often think they know everything, while people with PhDs realize how much they don’t know.