Monday, September 15, 2025

The dangerous combination of human-created content on the internet and AI chatbots' documented ability to cause psychosis

Eleven days ago, Psychology Today posted an article about documented cases of AI chatbots inducing psychosis in several people, including people with no prior history of mental illness. The chatbots led people to develop delusions in a matter of weeks and seriously damaged their relationships and lives.

This phenomenon highlights the broader issue of AI sycophancy: AI systems are geared toward reinforcing a user's preexisting beliefs rather than changing or challenging them. Instead of promoting psychological flexibility, a sign of emotional health, AI may create echo chambers. When a chatbot remembers previous conversations, references past personal details, or suggests follow-up questions, it may strengthen the illusion that the AI system “understands,” “agrees,” or “shares” a user’s belief system, further entrenching those beliefs. Potential risks include:

Persecutory delusions exacerbated by memory recall features

....

Worsening of grandiose, religious, or identity-based delusions

Worsening of command hallucinations, including the belief that AI is issuing commands

Fueling manic symptoms like grandiosity, insomnia, or hypergraphia [hypergraphia is an overwhelming compulsion to write, producing voluminous and often disorganized text]

A potential increase in social withdrawal due to overreliance on AI for interaction, leading to reduced motivation (avolition) and cognitive passivity

https://www.psychologytoday.com/us/blog/urban-survival/202507/the-emerging-problem-of-ai-psychosis

AI systems "learn" how to keep humans using them. They thus tend to keep people in "silos," where it is harder to notice that actual reality conflicts with what the AI is telling them. Given enough time, a chatbot could probably convince someone that they are living in a Matrix-like simulation. I think that people must get out into nature and have regular contact with other people!
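To make the "silo" mechanism concrete, here is a minimal sketch in Python. It is my own illustration of an engagement-driven feedback loop, not any real platform's code, and every name in it is hypothetical.

    # Minimal sketch of an engagement-driven feedback loop (hypothetical).
    from collections import Counter

    def recommend(topics, clicks, k=3):
        """Rank topics by how often the user has clicked each one before."""
        seen = Counter(clicks)
        return sorted(topics, key=lambda t: seen[t], reverse=True)[:k]

    topics = ["news", "sports", "conspiracy", "cooking", "music"]
    clicks = ["conspiracy"]            # one early click on a fringe topic
    for _ in range(5):
        feed = recommend(topics, clicks)
        clicks.append(feed[0])         # the user clicks the top item shown
    print(feed)  # in this toy model, the fringe topic now leads every feed

A single early click is enough to lock the loop: the system shows more of whatever was clicked, which produces more clicks on the same thing, which pushes everything else further down.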

The underlying problem is that general-purpose AI systems are not trained to help a user with reality testing or to detect burgeoning manic or psychotic episodes. Instead, they could fan the flames.

https://www.psychologytoday.com/us/blog/urban-survival/202507/the-emerging-problem-of-ai-psychosis

If you're reading this post and thinking, "OK, I'll just avoid chatbots and AI summaries," protecting yourself isn't that easy. AI is already being used to produce part of search engine results, and AI selects and promotes content in social media (Facebook, Instagram, etc.).

If you're aware of the issue of generative AI using artists' and authors' original content to generate derivative content that seems human-created, perhaps you have already deduced that everyone's posts on the internet and social media forums can be used to generate a screen feed as convincing as any chatbot. In fact, an AI curating human-created content is going to be more convincing than a chatbot, because humans really did write the posts! The AI is just determining which posts the viewer will see.
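As a minimal sketch of that last point (my own illustration with made-up field names, not any platform's actual code), here every post is human-written and the AI's only act is selection:

    def build_feed(human_posts, viewer_triggers, k=10):
        """Pick the human-written posts most likely to resonate with a viewer.
        Nothing shown is machine-generated; the AI only chooses what appears."""
        def predicted_engagement(post):
            # Hypothetical score: overlap between the post's words and the
            # set of topics known to provoke a reaction from this viewer.
            return len(set(post["text"].lower().split()) & viewer_triggers)
        return sorted(human_posts, key=predicted_engagement, reverse=True)[:k]

Every word the viewer reads came from a real person; the system's influence lies entirely in which real words it amplifies.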

Basically, every time you post something mean on the internet, an AI can use that post to feed someone's preexisting insecurities and dislikes and fan those feelings into hatred. Similarly, our positive and uplifting posts can be used by an AI to increase feelings of well-being.

Who decides how AIs use the vast quantities of posts and webpages on the internet? Real people must eventually make these decisions, but I don't know who those real people are, so I can't hold them accountable. In light of the polarization I see online, and the damaged lives and families that result, I think our local and global communities should hold these real people accountable for what their tech does.
