Families sue OpenAI over ChatGPT warning ignored before deadly school shooting

Seven families have filed lawsuits against OpenAI, alleging the company failed to warn Canadian authorities about a school shooting risk despite identifying troubling activity on its chatbot months before the attack, a Qazinform News Agency correspondent reports, citing Futurism.


The cases stem from a February massacre in Tumbler Ridge, a rural town in British Columbia, where 18-year-old Jesse Van Rootselaar killed her mother and younger stepbrother before opening fire at a secondary school. Five students aged 12 to 13 and a teacher were killed, and 27 others were injured. The attacker later died by suicide.

According to the lawsuits, OpenAI flagged Van Rootselaar’s account in June 2025 after she engaged in graphic discussions about mass violence on ChatGPT. Company safety staff reportedly viewed the conversations as a credible and imminent threat and urged leadership to notify law enforcement. After internal discussions, executives chose not to report the case and instead deactivated the account.

The plaintiffs argue that this decision prevented authorities from intervening before the attack. Filed in California, the lawsuits describe ChatGPT as a “co-conspirator” and claim the company prioritized business concerns over public safety.

Among the plaintiffs are the families of the six victims killed at the school, as well as the family of a 12-year-old girl who survived with severe brain injuries and remains in critical condition.

The lawsuits also challenge OpenAI’s public statements about its safety measures. While the company has described the account shutdown as a ban, the filings allege the attacker was able to create a new account shortly afterward by following instructions provided by OpenAI’s own customer support. These instructions reportedly included guidance on registering a new account using a different email address.

In a public letter issued in April, OpenAI chief executive Sam Altman apologized to the Tumbler Ridge community for not alerting authorities.

The case highlights broader questions about how AI companies handle potential threats. The industry currently lacks clear standards for when platforms should report dangerous behavior to law enforcement.

OpenAI is already under scrutiny over separate incidents, including a 2025 shooting at Florida State University in which the attacker reportedly used ChatGPT in the lead-up to the violence. The company also faces multiple lawsuits alleging its chatbot contributed to psychological harm in some users.

Earlier, Qazinform News Agency reported that a lawsuit claimed ChatGPT was linked to the Florida State University shooting.
