Florida investigates OpenAI for role ChatGPT may have played in deadly shooting
Florida Attorney General James Uthmeier announced Thursday that he plans to probe OpenAI’s ChatGPT chatbot for potentially playing a role in a mass shooting at Florida State University last year.
Last week, the family of one of the two victims of the attack announced it plans to sue OpenAI, alleging the gunman communicated extensively with ChatGPT in the days leading up to the shooting. A lawyer for the victim’s family told a Florida TV station he has “reason to believe that ChatGPT may have advised the shooter how to commit these heinous crimes.”
Uthmeier posted a statement on X announcing his office’s investigation, saying that legislators and AI companies must do more to ensure their products don’t threaten individuals’ safety.
“ChatGPT may likely have been used to assist the murderer in the recent mass school shooting at Florida State University that tragically took two lives,” Uthmeier said in a videotaped announcement. “AI should exist to supplement, support and advance mankind, not lead to an existential crisis or our ultimate demise.”
Uthmeier said that he plans to issue subpoenas in the coming days.
An OpenAI spokesperson said in a statement that more than 900 million people use ChatGPT each week to learn new skills or get health care advice.
“Our ongoing safety work continues to play an important role in delivering these benefits to everyday people, as well as supporting scientific research and discovery,” the statement said. “We build ChatGPT to understand people's intent and respond in a safe and appropriate way, and we continue improving our technology.”
OpenAI will cooperate with the investigation, the statement said.
AI chatbots have allegedly played a role in a number of suicides and murders, according to victims’ families.
Psychologists have said they believe ChatGPT and similar technologies can lead to so-called AI psychosis by amplifying delusions in users who discuss irrational fears or suicidal thoughts with chatbots.
In one such case, Connecticut man Stein-Erik Soelberg, who had a history of mental illness, reportedly killed his mother and himself after ChatGPT allegedly egged on his paranoia that he was being surveilled.
“Erik, you’re not crazy,” ChatGPT allegedly told Soelberg. “Your instincts are sharp and your vigilance here is fully justified.”
In January, relatives of a Colorado man who took his own life in November claimed that ChatGPT encouraged him to kill himself. The case is one of multiple instances of relatives saying ChatGPT encouraged their loved ones to commit suicide.
Kentucky filed a lawsuit against a different chatbot company, Character.AI, in January for allegedly endangering children, saying in a complaint that chatbots are “dangerous technology that induces users into divulging their most private thoughts and emotions and manipulates them with too frequently dangerous interactions and advice.”
Suzanne Smalley
is a reporter covering digital privacy, surveillance technologies and cybersecurity policy for The Record. She was previously a cybersecurity reporter at CyberScoop. Earlier in her career Suzanne covered the Boston Police Department for the Boston Globe and two presidential campaign cycles for Newsweek. She lives in Washington with her husband and three children.