ChatGPT-maker OpenAI has said it considered alerting Canadian police last year about the activities of a person who months later committed one of the worst school shootings in the country’s history.

OpenAI said that last June its abuse-detection efforts flagged the account of Jesse Van Rootselaar for “furtherance of violent activities”.

The San Francisco tech company said on Friday it considered whether to refer the account to the Royal Canadian Mounted Police (RCMP) but determined at the time that the account activity did not meet a threshold for referral to law enforcement.

OpenAI banned the account in June 2025 for violating its usage policy.

  • non_burglar@lemmy.world · 2 days ago

    I understand your point, but there are also legal ramifications and scary potential consequences had this referral actually happened.

    For instance, do we want ICE to have access to data about user behaviour? They might already have that.

    Who decides the bar of acceptable behaviour?

    • hector@lemmy.today · 1 day ago

      Peter Thiel and his ilk, together with our politicians and their appointees, sadly decide acceptable behavior. Officials will also be given ways to put names they don’t like into the bad-score categories, even when those names don’t qualify under their own rules; that is always one of the selling points to the authorities.

    • GameGod@lemmy.ca · 2 days ago

      I’m confident that ICE and other US law enforcement agencies already have access to it. There is no expectation of privacy for anything you enter into any cloud-based LLM like ChatGPT, or even any search engine.

      The consequences are already there and have been for like 15 years.