'AI psychosis': Spiraling into Delusion

VaughanJB

Scrappy VIP
Here is an interesting video. It is 18 minutes long.

Essentially, it highlights three people who became convinced that their AI avatars were giving them instructions about their real lives and actions they should take. They became convinced of conspiracies, and took real-world actions in response to prompts from AI engines (Grok, ChatGPT). This resulted in paranoia and, in one case, violence.

Of course, these people are on the bleeding edge in that they're not typical; they were vulnerable in one way or another before they found and used AI. At the same time, it's fascinating how normal they seemed. There don't appear to have been any previous instances of psychosis, for example.

This is at the extreme end of the influence algorithms and AI are having on our society. Most of us are affected in more subtle ways. But I think it's obvious that such instances will become increasingly common. It's also worth noting that the makers of ChatGPT admit they have psychologists as part of their AI teams.

 
It's important to remember that AI is just code and math; it doesn't have a mind or a soul to 'infect' ours. For most people, it's a tool. When someone has a breakdown while using it, the AI is usually just the latest 'blank canvas' for a psychological struggle that was already there. If someone is determined to find a sign from the universe, they'll find it in an AI, a newspaper headline, or the way the wind blows. The issue isn't the technology; it's the human's pre-existing vulnerability.
Oh, but Paco..... there are, and will be, people who will use that tool in nefarious ways. And I'm not talking about third-parties, I'm talking about its creators and controllers. These people are corporate interests, and we are supposed to simply trust that they wouldn't put anything in place to mislead us, lead us astray, or control narratives.
It's not about blind trust in corporations; it's about having confidence in our ability to regulate, compete with, and out-think the tools they build.
 
It is the only path forward: aggressive regulations, laws, and enforcement. The potential for misuse is high, as it has been for most of our dangerous tools and substances. We are not going to be able to stop it. That is a useless dream. The cat IS out of the bag.

Cars were called devil wagons. They didn't have brakes, and people were killed in the streets. We didn't stop building ships because they sink; we invented the lifejacket and the lighthouse. AI is the ship, and right now, we are busy building the lighthouses.

It’s a massive transition, and it's okay for people to be nervous, but regulation is the bridge that gets us from "scary and new" to "safe and useful."

How about the idea of "licenses" for using or building AI? Is that a solution or just more "big brother"?
 
Personally I've written before that AI and its uses are inevitable.

The more AI becomes available and used, the more control AI has over the populace. Once AI is working within our governments and banks, then AI has control of the systems that we all believe must exist in order for us to survive. We will soon, if we're not there already, have moved beyond the tipping point. Meaning, AI is no longer optional, it is integral. We will be dependent on it. We won't be able to do without it.

You only need to look at the modelling that goes on already. Look at Russia, and look especially at China. Both countries have locked down the wider internet, but they also have central (governmental) control over what is posted internally. China has the world's most sophisticated digital surveillance already up and running. This is used to control the entire population of China, estimated to be 1.4bn people. They know every communication made (be it emails, texts, posts on forums, social media, etc.), goods you've purchased, places you've been, and so on.

Using AI tools, they can then model behavior, which in turn can be used to influence actions and beliefs. This isn't a matter of science fiction or maybes; it's working today.

The people in the video might come across as a little crazy, but actually they were responding to a system they knew was digital in nature, yet one that seemed to know them well enough to get them to wield a hammer in the street.
 
I saw Chris Hayes from MSNow interviewed about AI. He hosts a podcast called The AI End Game. I found his comments insightful and balanced. Here's a clip from his podcast. Anyone remember how all computers would crash during Y2K?

 
I like AI and am even learning to write it. However, I don't forget the obvious: it's still man-made and subject to mistakes. Plus, it's dependent on electricity. So until it can turn my laptop on and off, I'm not worried.
China’s "Social Credit" concepts and Russia’s "Sovereign Internet" projects are real-world examples of AI being used for control.

However, a "day-by-day" philosophy is the only practical way to live. If we spend 100% of our energy fearing the "100-year apocalypse," we lose the ability to fix the "today" problems, like making sure the AI in our local medical system is actually helping people instead of just cutting costs.
Oh I agree, to an extent. But mostly agree.

I'd go a step further and say this relates to another thread regarding bad news. We should always try to keep perspective and know that much of what we see on the news isn't truly going to affect us personally. We're later in life, and while every day is precious, we're unlikely to experience the worst effects of any of this.

People like to bring up "Big Brother" and, by extension, Orwell's 1984, but I think the Blade Runner movie is closer to what we can expect.
 
When I heard about Life Coaches earlier this century, I knew society was descending into new but low territory. People believe it because they want to, as much as anything.

They need their decisions validated. Whether it's AI/tech or another person, if it sounds like a good idea, it is a good idea.
 
Yeah, if you were into chaos, Y2K presented a wonderful diversion. Even I got into it, if only for the fun. I waited for the first morning of the new millennium, then turned on my computer and checked my savings account. There was no chaos. What a letdown. All was normal. I've never been so disappointed in all my life.