It took only nine seconds for a rogue AI coding agent to delete a company’s entire production database and its backups, according to founder Jeremy Crane. PocketOS, which sells software that car rental businesses rely on, descended into chaos after its databases were wiped, Crane said.

The culprit was Cursor, an AI agent powered by Anthropic’s Claude Opus 4.6 model, which is one of the AI industry’s flagship models. As more industries embrace AI in an attempt to automate tasks and even replace workers, the chaos at PocketOS is a reminder of what could go wrong.

Crane said customers of PocketOS’s car rental clients were left in the lurch when they arrived to pick up vehicles from businesses that no longer had access to the software that managed reservations and vehicle assignments.

  • LukeZaz@beehaw.org · 4 days ago

    This right here. Just about everything in here is awful, and implies decision making and thought processes that straight up do not exist and have never existed in any AI model whatsoever.

    What happened was they threw an awfully-scoped statistics model at problems the program couldn’t possibly generate good outputs for, and surprise surprise, it generated bad outputs. The part that’s of interest is just how bad the output was, and even then, only in a schadenfreude-filled “it was bound to happen eventually” manner.

      • Kichae@lemmy.ca · 3 days ago

        It just agreed with the accusations, because these models do what they’re trained to do: Agree with the prompter.

        • Dymonika@lemmy.ml · 3 days ago

          No, not necessarily; they can easily, even condescendingly, go against your view. It really depends on the topic and the conversational flow.