ChatGPT-maker OpenAI has said it considered alerting Canadian police last year about the activities of a person who months later committed one of the worst school shootings in the country’s history.
OpenAI said its abuse-detection efforts flagged the account of Jesse Van Rootselaar last June for the "furtherance of violent activities".
The San Francisco tech company said on Friday it considered whether to refer the account to the Royal Canadian Mounted Police (RCMP) but determined at the time that the account activity did not meet a threshold for referral to law enforcement.
OpenAI banned the account in June 2025 for violating its usage policy.

I think this should piss off a lot of people. Instead of doing something, they opted to do nothing, and now they’re exploiting the tragedy as a PR opportunity. They’re trying to shape their public image as an all-powerful arbiter. Worship the AI, or they will allow death to come to you and your family.
Or perhaps this is all just rage bait, to get us talking about this piece of shit company, to postpone the inevitable bursting of the AI bubble.
Edit: This is a sales pitch from OpenAI to the RCMP, with them saying they’ll sell police forces an intelligence feed. It just comes across as horribly tone deaf and is problematic for so many reasons.
I understand your point, but there would also have been legal ramifications and some scary potential consequences if they had actually made the referral.
For instance, do we want ICE to have access to data about user behaviour? They might already have that.
Who gets to set the bar for acceptable behaviour?
I’m confident that ICE and other US law enforcement agencies already have access to it. There is no reasonable expectation of privacy for anything you enter into a cloud-based LLM like ChatGPT, or even into a search engine.
The consequences are already there and have been for like 15 years.