We live in an age where AI psychosis is becoming a real term.
I am not here to bash AI or to talk about all the ways AI will doom us with slop. That’s another topic. I use AI every day for work, mostly development, with Claude Code and other tools. I also use AI to find good deals on cars or GPUs. So this is not some purist anti AI rant from a guy pretending he lives above it all. I use it. I benefit from it. I get why people are obsessed with it.
But still, I think the most important thing in today's builder age is to stop and think for a second.
It is super easy now to get stuck in a feedback loop that makes it hard to even catch your own bias. Instead of checking ourselves, we let AI dictate our bias. Hey Grok, is this true? Hey ChatGPT, what do users want? Hey Claude, does this strategy make sense? And because the answer comes back fast, clean, and plausible, we treat it like it came from somewhere deeper than it actually did.
That is where it gets dangerous.
Not because the machine is evil. Not because it is alive. But because we are slowly giving away our critical thinking to a big LLM that scraped a lot of data and learned how to sound convincing. That is a crazy thing to normalize. You do not even notice it happening either. It happens in small ways. You ask for a second opinion. Then a third. Then a framing. Then a summary. Then suddenly the machine is not helping your thinking anymore. It is replacing the uncomfortable parts of thinking.
And the uncomfortable parts are where a lot of truth usually lives.
Where I See It Most: Product, Design, Discovery
On a less dangerous but more real, day to day level, the place I have noticed this pattern the most is in product delivery, design, and discovery.
This is where I see people getting lost in the avalanche.
What I have noticed in real world product work is that people now use AI instead of getting their hands messy. And by messy I mean discovery. The intentionally slow work. The annoying work. The work where you talk to users, ask dumb questions, hear things you do not want to hear, sit with contradictions, and slowly figure out what the hell is actually going on.
That part is not sexy. It never was.
What I have seen is that people do not tend to look at features through actual user conversations anymore. They base their hypotheses on AI's assumptions of what those features should be. And we need to be honest here. AI is a fucking yes man. It agrees with the framing way too easily. It can challenge you a little if you ask it to, but most of the time it is more than happy to help you create a smarter sounding version of your own bias.
And people use it anyway.
Why This Happens
I get where it comes from and I can sympathize. In today’s speed of development, everything outpaces the old structures. OKRs, agile, scrum, all of this was shaped in an era where development and delivery were more of the bottleneck. The hard part used to be building the thing. Shipping the thing. Coordinating the thing. Now you can generate half the thinking, prototype the interface, write the tickets, summarize the market, mock the personas, and fake momentum in one afternoon.
That changes people.
Now we have a few new sins.
Discovery used to be slow by necessity. That slowness was not useless. It was the glue holding bias and expectations in place. It forced us to spend time with potential customers or users of a product. It forced us into deeper critical thinking mode. It forced us to ask what does this actually mean for us. It forced us to sit in the part that feels unproductive but usually saves you from building complete nonsense later.
We did it even when it felt slow because there was a bigger picture.
Now you can prompt your way into a persona document, a jobs to be done framework, competitive analysis, a user journey map, pain points, feature clusters, and a roadmap in less than an hour. Sounds great on paper. And to be fair, maybe 70 percent of the ideas are actually interesting. Some of them can open up good directions. Some of them can help you reframe a problem. I am not denying that.
But we have to understand what that output actually is.
It is AI’s best guess at what your users should look like.
Not truth.
Not evidence.
Not contact with reality.
A best guess can still be useful. The issue starts when it becomes plausible enough that you stop questioning your own reality and the reality of the users. You start taking the generated text for truth because it is clean, coherent, and says things in a way that sounds smarter than the muddy conversations you had with actual humans.
Institutionalizing Hallucination
This is what I call institutionalizing hallucination.
Not the funny hallucination where the model gives you a fake law or invents a source. I mean something more subtle and honestly more dangerous. I mean when a team turns synthetic confidence into process. When guesswork becomes a document. Then the document becomes alignment. Then alignment becomes tickets. Then suddenly everybody is building on top of assumptions that were never truly earned.
And from the start the thing is already a jumbled mess.
You say job's done. Time to generate tickets. Time to align stakeholders. Time to move fast. Time to build. Everybody feels productive. Everybody feels modern. Everybody feels like they have leverage. But nobody stops to ask whether the foundation itself is made of smoke.
That is the avalanche.
When you are stuck in an avalanche, up is not up anymore. Your sense of direction is gone. You think you are moving toward air and you might actually be digging deeper into the snow. That is what this current AI product culture feels like to me sometimes. Everybody is moving. Everybody is producing. Everybody is surrounded by signals. But direction is gone. Friction is gone. Ground truth is gone. And because the output looks polished, people confuse motion with understanding.
That is a bad trade.
Discovery Is Still Irreplaceably Human
Because discovery is still irreplaceably human.
By that I mean you cannot prompt your way into real trust. You cannot generate your way into the weird look on a user’s face when your feature makes no sense to them. You cannot automate the moment where someone says something small but important and it changes how you see the whole problem. You cannot compress genuine understanding into a neat artifact and pretend the artifact is the same thing as the work.
It is not.
AI can help you accelerate. It can help you structure thoughts. It can help you explore, compare, summarize, and even challenge yourself a bit. I am not against any of that. But the moment it starts replacing contact with reality, you are in dangerous territory. The moment it starts replacing judgment, you are already lost. The moment a team would rather generate assumptions than talk to users, something has gone wrong.
We are getting very good at producing answers before we have earned the right to give them.
That is maybe the clearest way I can put it.
Closing
So no, this is not a call to stop using AI. I will keep using it. A lot. But I think in this builder age, the real discipline is knowing where speed helps and where speed lies to you. Not everything slow is broken. Some things are slow because reality is slow. Human understanding is slow. Trust is slow. Discovery is slow.
And thank God for that.
Because if we fully remove that slowness, all we are left with is a machine helping us become more confident in things we never really understood in the first place.
That is not intelligence.
That is just being buried faster.