The Dangers of Groupthink

Avoiding a Regression to the Mean

The more I use LLMs, the more I think about how easy it is to turn off our brains. All we have to do is enter a prompt about virtually anything and get a response that, for the most part, is fairly good.

Granted, if you are an expert in a particular area, you'll probably notice flaws in the model's output. However, if you're looking to learn something new or get advice on a particular question, the output that you get often seems passable.

But when looking at the responses that LLMs give me, I tend to think about the larger context. Notwithstanding the hysteria about Moltbook several weeks ago, these LLMs aren't actually thinking. While the mechanics are more complicated than that, an LLM is simply trying to predict the next token based on statistical probability.
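To make that concrete, here is a minimal sketch of next-token prediction, written against the Hugging Face transformers package. The choice of GPT-2 and the example prompt are purely illustrative assumptions; any causal language model works the same way: it assigns a probability to every token in its vocabulary, and generation is just repeatedly drawing from that distribution.

```python
# A minimal sketch of next-token prediction (assumes the "transformers"
# and "torch" packages; GPT-2 and the prompt are just illustrations).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The best way to learn something new is to"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# Turn the final position's logits into a probability distribution over
# the whole vocabulary, then inspect the likeliest next tokens.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)
for p, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: {p.item():.3f}")
```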

At the same time, their responses are quite lifelike. It's easy to think that the model is an actual being on the other side of your screen. And if you are desperate for an answer to a question (or even just mildly curious), it's shockingly easy to take its answer and run with it.

What really interests me is what happens at a more macro level. When we are all chatting with LLMs and asking them for advice on strategic decisions, it's critical to remember that we are relying on tools that are predominantly based on the same training data. In other words, we are consulting the same resources while looking to gain an edge.

As you can guess, this substantially raises the risk of groupthink. If we take the model's advice or output at face value, we really can't expect more than average outcomes. We are subtly susceptible to groupthink, even though it seems like we're getting custom-tailored feedback or advice on our specific situation.

It's why I think the long-term winners will be generalists who emphasize critical thinking. This is true even as the models get better. I'd argue the sweet spot is taking the speed and general intelligence that LLMs provide and marrying them with hard-earned skepticism. Essentially, it's pairing Kahneman's "System 2" thinking with the speed of artificial intelligence.

Most (if not all) of us are using LLMs to some extent. And most of us are using them to get some type of advice or feedback on real-world problems. But when you do so, I encourage you to keep this in mind. Think about the underlying data that is being used to generate the responses that you're getting. And from there, come to your own conclusions.

This takes work. It's much tougher than accepting the model's output at face value. However, doing so can help you avoid groupthink and get above-average results for whatever it is that you're facing.

Prompt of the Week

Notwithstanding what I said above, models aren't trained on the exact same data. There are differences. Sometimes, those differences can be material.

Because of this, I sometimes like to challenge a model with the output that another model provided me. It's especially helpful when I'm dealing with a coding challenge, but it can be helpful in virtually any domain. Try this out and see for yourself:

I was chatting with another model and received the following output. It was in response to [insert your prompt here]. What (if any) issues do you have with the output that was given to me? Where is it accurate and where is it wrong? Think very carefully before proceeding.
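If you'd rather automate the round trip, here is a minimal sketch using the openai Python package. The model name and the two placeholder strings are assumptions you'd swap in yourself; nothing here is specific to any one provider.

```python
# A minimal sketch of the cross-model check above (assumes the "openai"
# package and an OPENAI_API_KEY in the environment; "gpt-4o" and the
# placeholder strings are illustrative assumptions).
from openai import OpenAI

client = OpenAI()

original_prompt = "..."     # the prompt you gave the first model
first_model_output = "..."  # the response it gave you

critique = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": (
            "I was chatting with another model and received the following "
            f"output. It was in response to: {original_prompt}\n\n"
            f"{first_model_output}\n\n"
            "What (if any) issues do you have with the output that was "
            "given to me? Where is it accurate and where is it wrong? "
            "Think very carefully before proceeding."
        ),
    }],
)
print(critique.choices[0].message.content)
```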

Until next week,

Adam