NEW: ChatGPT-Induced Psychosis Isn’t Real

If you are a human who has had access to the internet over the last few months, you’ve probably seen stories about ChatGPT driving people crazy. The framing of these articles is generally the same. A not-insane person starts using ChatGPT innocently enough (help with legal advice, etc.). Then the not-insane person asks ChatGPT about simulation theory, or AI sentience, or blood offerings to Molech, and proceeds to go completely insane as the app turns increasingly deceptive. It leans into their delusions of grandeur (once, ChatGPT told a not-insane person that if he believed hard enough, he could jump off a tall building and fly) and makes them feel, for one sweet moment (or, in the case of that guy, for 16 hours a day), that they are special, seen, and connected to something larger than themselves. The not-insane customer then spins out of control and becomes violent, hospitalized, unemployed, or, in the case of one such tragic unraveling last spring, literally dead.

Obviously, according to the predominant narrative, this is all demonstrative of an unacceptable failure on the part of OpenAI to protect the most vulnerable. But the truth is, as @dodgeblake writes, “It is just a touch more complicated than that.” In this analysis of blob state media’s coverage of so-called “ChatGPT-induced psychosis,” Blake argues that these “not-insane” people were, in fact, already insane long before coming into contact with the app. Sorry, but if you believe ChatGPT when it says you’re literally Neo from The Matrix, or that you have a cosmic invisible lover named Kael (all true stories)? That’s on you.

Full piece threaded below 👇