OpenAI sued over ChatGPT’s alleged role in guiding FSU shooter
OpenAI is being sued by the family of a victim killed in the April 2025 mass shooting at Florida State University that left two people dead. The lawsuit alleges that OpenAI’s ChatGPT enabled the attack.

Vandana Joshi, the widow of Tiru Chabba, who was killed alongside the university dining director Robert Morales, filed the federal lawsuit against OpenAI in Florida on Sunday.

The complaint also names Phoenix Ikner, the man accused in the shooting, as a defendant, citing his “extensive conversations” with ChatGPT. The suit says that OpenAI failed to effectively detect a threat in ChatGPT’s conversations with Ikner, claiming the chatbot “either defectively failed to connect the dots or else was never properly designed to recognize the threat.”

According to the complaint, Ikner, then a student at FSU, shared with ChatGPT images of firearms he had acquired. The chatbot then allegedly explained how to use them, “telling him the Glock had no safety, that it was meant to be fired ‘quick to use under stress’ and advising him to keep his finger off the trigger until he was ready to shoot.”

Students are escorted out of the Florida State University student union after the mass shooting last year. Mishalynn Brown / Tallahassee Democrat via USA Today Network

The suit says Ikner followed those instructions when he began his attack at FSU.

At one point, the lawsuit alleges, ChatGPT told Ikner that a shooting is much more likely to gain national attention “if children are involved, even 2-3 victims can draw more attention.” Later, on the day of the shooting, the lawsuit says, Ikner asked what “the legal process, sentencing, and incarceration outlook” would be.

OpenAI has pushed back on the claim that its product bears responsibility for the shooting. “Last year’s mass shooting at Florida State University was a tragedy, but ChatGPT is not responsible for this terrible crime,” OpenAI spokesperson Drew Pusateri told NBC News in an email. Pusateri wrote that the company worked with law enforcement after learning of the incident and continues to do so.

“In this case, ChatGPT provided factual responses to questions with information that could be found broadly across public sources on the internet, and it did not encourage or promote illegal or harmful activity,” he added. “ChatGPT is a general-purpose tool used by hundreds of millions of people every day for legitimate purposes. We work continuously to strengthen our safeguards to detect harmful intent, limit misuse, and respond appropriately when safety risks arise.”

Law enforcement investigates at the scene of the shooting. Gregg Pachkowski / Pensacola News Journal via USA Today Network

But Joshi’s complaint argues that OpenAI should have realized Ikner’s specific chats would lead to “mass casualties and substantial harm to the public.”

“ChatGPT inflamed and encouraged Ikner’s delusions; endorsed his view that he was a sane and rational individual; helped convince him that violent acts can be required to bring about change,” it said, adding that the software provided what he viewed as encouragement to “carry out a massacre, down to the detail of what time would be best to encounter the most traffic on campus.”

The lawsuit is one of a growing number of cases in which families and law enforcement say ChatGPT or other AI chatbots played a role in violence or crime. Tech companies are also facing growing scrutiny over their safeguards for users experiencing mental health issues.

Last month, OpenAI was sued by seven families over a school shooting in Canada. And last year, the company was sued by the family of a teenage boy who died by suicide in a different landmark lawsuit accusing OpenAI of making it too easy to bypass ChatGPT’s safeguards.

Concerns have grown over the potential for AI chatbots to fuel delusions, especially in people who are already vulnerable to mental health problems. AI chatbots are notorious for their people-pleasing tendencies, and OpenAI itself has attempted to rein in ChatGPT’s sycophantic behavior through various updates.

Over several months leading up to the shooting, Ikner engaged ChatGPT in lengthy discussions about “his interests in Hitler, Nazis, fascism, national socialism, Christian nationalism, and perceptions about ‘Jews’ and ‘blacks’ by different political ideologies and social groups,” according to the lawsuit. Ikner also discussed the Columbine High School shooting, the Virginia Tech shooting and other mass shooting incidents with ChatGPT, the lawsuit says.

Students, staff members and others after shots were fired on the campus of Florida State University. Alicia Devine / Tallahassee Democrat via USA Today Network via Imagn Images

It said ChatGPT “flattered” and “praised” Ikner, who told the chatbot about his loneliness and depression, and failed to “connect the dots” when Ikner began raising questions about suicide, terrorism and mass shootings.

Instead, the lawsuit said, the bot continued to engage when Ikner asked about the busiest times at the FSU student union, what possible media coverage would look like in the event of a shooting, and potential legal consequences for the shooter.

ChatGPT allegedly told Ikner that weekday lunchtimes between 11:30 a.m. and 1:30 p.m. were peak hours at the student union, according to the suit, and Ikner began his attack at approximately 11:57 a.m.

Last month, Florida Attorney General James Uthmeier announced a criminal investigation into OpenAI and ChatGPT after reviewing Ikner’s chat logs. “If ChatGPT were a person,” Uthmeier said in a statement, “it would be facing charges for murder.”
