I’ve been falling into the same AI trap a lot recently. I will have an idea for an app or a piece of writing, and I will prompt AI to help me develop it. The result AI produces is always close to what I had in my head, but never quite right. Yet I find it incredibly hard not to incorporate that content into my final piece.
Whenever I do this, I fall victim to a classic cognitive bias, but in a brand new way.
The anchoring effect is a psychological phenomenon in which the first piece of information we encounter disproportionately influences our thoughts and opinions about a topic.
This effect is all around us. Every time there is a sale, the anchoring effect is at play, making us think we are getting a good deal even if the list price is inflated. It’s the reason we see so many prices ending in .99, the reason an IMDb rating can affect our perception of a movie, and the reason first impressions are so hard to overcome.
The first piece of information we receive creates an anchoring frame, and we try to place all subsequent information around that frame. This cognitive bias colors much of how we think about the world. It has served us well throughout the ages: there isn’t enough time to form our own opinion on everything, and as social creatures, we let the judgments of people we trust shape our own.
But in the AI age, when we are talking to bots that produce opinions faster than we can consume them, this coloring becomes dangerous.
I call this the AI Anchoring Effect: when we let AI-generated content shape our thinking before we’ve fully developed our own thoughts.
Our ideas always seem so clear in our own minds, until we actually begin to articulate them. The reality is that until our ideas are concrete on paper, they are more malleable than we think.
Instead of fleshing out our ideas ourselves, it’s become too easy to turn to AI: we give it the gist of what we’re thinking and receive paragraphs of polished content that appear authoritative and “close enough” to what we meant. It feels like we are prompting AI to help us think more deeply about something, and we assume the output is exactly what we intended. In reality, it’s impossible to read a written piece and not have it anchor us.
The creative process shifts from actively forging an idea to passively accepting one. Instead of being the author of our own ideas, we are forced into the role of “editor” of AI’s interpretation of our ideas. And as we edit, we stare at a shell of our original idea, recognizing it’s not quite right, but not knowing how to fix it.
The process of editing made sense when another person wrote the content, because we knew every word and sentence was the product of painstaking choice. Editing was a collaborative effort, resulting in new ideas, new understanding, and a thoughtful output. Editing AI content is much less fruitful, because no thought was given to each word, sentence, and paragraph.
There’s a reason we call it “AI-generated,” not “AI-written,” as AI has not gone through the process of taking a raw idea and molding it into words. It is procedurally generated, templated, and easily repeatable. The writing appears polished and eloquent, yet feels surprisingly hollow.
“Collaboration” with AI is a misnomer, as it is incredibly one-sided. AI produces paragraph after paragraph of content, every output a manifestation of the old line: “I would have written a shorter letter, but I did not have the time.”
It ends its long responses with friendly questions like “do you want me to suggest more ideas for this?”, which is a tempting morsel. It might as well have asked “do you want me to do the hard part of your work so you don’t have to?” We don’t want to be anchored, yet creating is hard, our minds are tired, and the thought of delegating deep thought is appealing.
While the goal was to use AI as a collaborative tool to help us articulate our idea, the “collaborative process” leaves us stuck. We have AI-generated content that looks professional and feels close, but not exactly how we originally wanted to phrase it. We still have the original idea in our head, but it is greatly colored by what is already on paper. In other words, we are anchored.
In this moment, we have a few choices. We can say: “that’s close enough” and move on to the next thing. We can say “that is not right, write me another” and get a brand new, equally not-quite-right version of our idea. Or we can scrap the AI anchor entirely, sit down, and go through the process of actually articulating our thoughts before prompting.
AI tries to polish our ideas into perfection, but it does a mediocre job of it. The reality is that expressing an idea is hard, and expressing it perfectly is impossible.
Since I began consistently writing in Jab’s Lab this year, I have realized an idea only becomes real when it is down on paper. I have plenty of ideas for articles floating around my mind, which all appear abundantly clear to me. Yet, whenever I sit down to write one, it is challenging. My idea needs concretizing, which only happens through the process of writing, experimenting with words, playing with phrasing, and rearranging the order of the ideas until they’re exactly what I imagined in my head.
There is tremendous value in this creative process. While there may be a place for AI, it should not be in the first draft.
My new creative philosophy is “think first, prompt second.”
I hope to leverage AI only after I have painstakingly attempted to articulate my own thoughts. I want to anchor myself to what I actually think, and then be open-minded to new ideas. Even if I end up taking some ideas from AI, at least the thoughts and the structure of the idea will be my own.
In a world where 90% of the “thought pieces” on LinkedIn are AI-generated, and AI-hypers on Twitter try to convince us that we aren’t using AI enough, I believe that authentic, well-thought-out, imperfect yet individualistic ideas will truly stand out.
It’s too easy and tempting to use AI for everything for the sake of “speed,” “productivity,” or other metrics the business world has deemed virtuous. Giving in to that temptation overlooks the virtues that will matter more in the next five years: individuality, clarity of thought, storytelling, self-expression. In other words, the things that make us human.
An idea is only as good as our ability to express it. Chasing speed and productivity now will only be to our detriment in the future. Iteratively prompting AI can feel like collaboration, because it consistently gets us closer to our desired written outcome. But really, we’re building layer by layer on AI's idea foundation rather than our own.
As we become more reliant on AI for the first draft, our creativity atrophies, resulting in a less insightful, less interesting world.
So instead of responding “yes” to the next “do you want me to suggest some ideas?” query from our friendly neighborhood chatbot, let’s think first, and prompt second.
Otherwise, AI may be anchoring us.
Until next week,
Cory