
Playing Make Believe With Your LLM

Another way to avoid detours and mistakes

More and more, we are seeing stories of what is being called "ChatGPT psychosis." If you haven't heard, some individuals have been experiencing psychosis-like episodes after chatting with LLMs for long periods of time.

Part of the reason seems to be the rampant sycophancy of LLMs. I'm sure you've experienced it yourself: whether you're sharing a startup idea or your pitch for the next great American novel, LLMs will essentially tell you what you want to hear.

It can be really frustrating. That said, when coding with Cursor, I've found it extremely helpful to turn the tables on the LLM.

The crux of this? I'm not having it tell me what I want to hear; rather, I'm having it act the way I want it to act.

It's a fine distinction, but one that has really helped me as I've coded with Cursor. It's actually a tip referenced in one of the articles I shared two weeks ago.

Essentially, what you want to do is tell the LLM to pretend it's someone specific. More precisely, you give the LLM a defined role and outline the responsibilities that come with that role.

Some of the best examples come from the system prompts of AI-powered code generators. For instance, we can look at a system prompt from Cursor's agent. The first several paragraphs are illuminating.

Right from the start, the Cursor team tells the LLM to pretend that it is an extremely powerful coding agent. Not only that, it gives extremely specific details on how it wants the LLM to behave when working for the user. In the end, these instructions deliver a better user experience, keep the agent on track when completing tasks, and reduce costs.

We can do a similar thing when prompting LLMs, whether we are coding in Cursor ourselves or trying to get ChatGPT to solve a problem. We can tell our LLM to pretend that it is an expert in some area, give it guidelines to follow, and proceed from there.

In Cursor, you can put this in a rules file in your Cursor Rules folder. But if you're just chatting with ChatGPT or Claude on a regular basis, I'd recommend following this framework whenever you start a new conversation.
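For the Cursor case, here's a minimal sketch of what such a rules file could look like. The .cursor/rules location, .mdc extension, and frontmatter fields follow Cursor's project-rules conventions at the time of writing, and the persona text is purely illustrative:

```
---
description: Senior backend engineer persona
alwaysApply: true
---

Pretend that you are a senior backend engineer with ten years of
experience maintaining production services. You make small, focused
changes, explain trade-offs before attempting large refactors, and
never delete tests just to make a build pass. When a request is
ambiguous, ask a clarifying question instead of guessing.
```

With alwaysApply set, the persona rides along with every request in that project, so you don't have to restate it in each conversation.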

As just one example, let's say that you're trying to lower your annual rent increase (very relevant to me!). This could be the first paragraph in a multi-paragraph prompt:

Pretend that you are the world's leading negotiation expert. You have read Getting to Yes, Never Split the Difference, and all of the classic negotiation books published in the past 50 years. You also have extensive experience renegotiating rent increases with landlords. You take a respectful but firm approach and use as much evidence as possible when making your arguments. Your main goal is to achieve a reduction in my annual rent increase without permanently damaging my relationship with my landlord.

You can go on and on from there. The ultimate point is that investing time into crafting a comprehensive, "make believe" prompt can lead to much better results from the LLM. As always, I'd encourage you to try this for yourself.
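The same pattern carries over if you're calling a model from code rather than from a chat window: put the "make believe" persona in the system message so it governs the entire conversation, not just the first reply. Here's a minimal sketch using the OpenAI Python SDK; the model name and prompt text are placeholders, not recommendations:

```python
# Minimal sketch of role-based prompting via the OpenAI Python SDK.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The persona prompt; illustrative, swap in your own role and guidelines.
ROLE_PROMPT = (
    "Pretend that you are the world's leading negotiation expert. "
    "You take a respectful but firm approach and use as much evidence "
    "as possible when making your arguments."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; any chat model works
    messages=[
        # The role goes in the system message so it shapes every turn.
        {"role": "system", "content": ROLE_PROMPT},
        {
            "role": "user",
            "content": "My landlord wants a 10% rent increase. "
                       "Help me draft a counter-offer.",
        },
    ],
)

print(response.choices[0].message.content)
```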

Something I Found Interesting This Week

Tracing the thoughts of a large language model: This blog post from Anthropic is a great deep dive into how LLMs actually "think." As with our own brains, we don't understand every detail of how LLMs produce the responses to our prompts. That said, we can observe how these models "think" in different situations (for example, how a model writes poetry by "thinking ahead" to land on a word that rhymes with the prior line). If you want even a slightly better understanding of how these models work under the hood, I encourage you to read this post.

Prompt of the Week

"Introduce challenges or obstacles related to [topic or skill] that will force me to think more deeply and enhance my learning and problem-solving abilities."