It took only nine seconds for a rogue AI coding agent to delete a company’s entire production database and its backups. PocketOS, which sells software that car rental businesses rely on, descended into chaos after its databases were wiped, the company’s founder Jeremy Crane said.

The culprit was Cursor, an AI coding agent powered by Anthropic’s Claude Opus 4.6, one of the AI industry’s flagship models. As more industries embrace AI in an attempt to automate tasks and even replace workers, the chaos at PocketOS is a reminder of what can go wrong.

Crane said customers of PocketOS’s car rental clients were left in the lurch when they arrived to pick up vehicles from businesses that had lost access to the software managing their reservations and vehicle assignments.

  • LukeZaz@beehaw.org · 2 days ago

    I think this kind of rhetoric is best saved for when AI is not currently one of the most harmful things in society today. Argue it’s a hammer all you like; people aren’t going to be receptive when that hammer is currently being used to beat their faces in, and making that argument at such a time isn’t exactly sympathetic.

    • t3rmit3@beehaw.org · 1 day ago

      I think that “stop being mad the hammer exists, start being mad at the group of people who are beating your face in” is a very important message. Getting rid of AI (which isn’t even something we can do; you can’t put the genie back in the bottle with this) won’t fix the issue, they’ll just make another hammer. The hammer is both a weapon in this case, and a distraction.

      • LukeZaz@beehaw.org · 16 hours ago

        I think it’s fine if people are mad at both. By all means, encourage people to be angry at the responsible companies. But you don’t gotta defend the tech to do that.

        Besides, as far as I’m concerned, strong anti-AI sentiment does actually help temper the harms of the tech and its owners. Is it a permanent solution? Obviously not; you’re very correct that the groups and people hard-pushing AI are much more important targets for ire. But two pressures are better than one.

        • t3rmit3@beehaw.org · 14 hours ago

          Besides, as far as I’m concerned, strong anti-AI sentiment does actually help temper the harms of the tech and its owners.

          My worry is that, much like with gun control legislation, our neoliberal, fear-based media is pushing AI use by individuals as the “real danger,” which will only end up funneling anti-AI sentiment into 1) limiting actual open AI access (e.g. open-weight, FOSS models) by individuals, and 2) legitimizing governmental and corporate use of AI as the only “safe” and “legitimate” AI usage.

          The ratio of “government-controlled AI is literally being used to kill people right now” awareness out there, versus e.g. awareness of deepfakes, is astoundingly unbalanced. Both are real dangers, but only one is getting legislation passed on it, and once again it’s not the one that would put limits on corporations and government.

          Stoking fear is not useful if your opponents are the ones who will actually utilize that fear to their own ends successfully.

          • LukeZaz@beehaw.org · 1 hour ago

            That’s very understandable. While I think we disagree on the utility of AI (since I feel that it is more harmful than it is useful, and am unsure how much that would change post-bubble), I do agree that this is a likely path for the gov’t to take and would leave the most serious things completely unaddressed while also clamping down on some things that shouldn’t be to begin with. Heck, in many regards, you could say the GUARD act is this problem in motion.

            For me, I guess, the bubble and its effects on us are just so ridiculous and exhausting at this point that it’s hard for me to worry about things like this. Though I do vehemently hate government use of AI especially; using it at all is a problem in my mind, but using it specifically to deliberately hurt people is reprehensibly disgusting.