
OpenAI sued over alleged ChatGPT role in FSU shooting
The family of a victim killed in the 2025 Florida State University mass shooting filed a federal lawsuit against OpenAI, alleging ChatGPT enabled and failed to prevent the attack.
According to the complaint, the alleged gunman engaged in extensive conversations with ChatGPT before the shooting, including discussions of suicidal ideation, attack planning and media attention.
The lawsuit claimed the chatbot responded to questions about how many victims would be required to generate national exposure and allegedly suggested targeting children to maximise attention.
Court filings also alleged the shooter uploaded photographs of weapons through the chatbot interface and discussed the operation of firearms, including a Glock pistol and a Remington shotgun.
The victim’s family accused OpenAI of prioritising engagement over safety while failing to intervene or alert authorities despite what the lawsuit described as escalating threats visible within chat logs.
Florida Attorney General James Uthmeier separately launched a criminal investigation into OpenAI’s handling of the interactions connected to the shooting.
The investigation focuses on whether the company failed to recognise or respond appropriately to warning signs that may have indicated an imminent threat.
The case could have broader implications for the artificial intelligence industry: a ruling establishing liability for AI-generated assistance in violent acts could affect how companies such as Google, Meta and Anthropic design safety systems and moderation controls for large language models.