• SkunkWorkz@lemmy.world · ↑51 · 6 days ago

    The ffmpeg team was mad at Google for reporting a bug that had been found automatically with an AI. Google reported the bug without providing a fix and gave an ultimatum: it would publicize the bug report after 60 days. That’s what pissed off the ffmpeg devs. Not to mention it was a very obscure bug: ffmpeg failed to correctly decode a video file from a 90s video game.

    Anthropic, on the other hand, found a bug and provided a fix. So why would they be mad if the fix is properly written and fixes the bug?

      • General_Effort@lemmy.worldOP · ↑3 · 6 days ago

        It’s really only a minority, or else the world would not work. Think of how the theory of evolution gained mainstream acceptance despite resistance from fanatics who had society’s support.

  • spectrums_coherence@piefed.social · ↑77 ↓5 · edited · 7 days ago

    LLMs are very good at programming when there are plenty of guardrails around them. For example, exploit testing is a great use case, because getting a shell is getting a shell.

    They act as a smarter version of the infinite monkeys, able to try and iterate much more efficiently than a human can.

    On the other hand, on tasks that require creativity or architecture, and in projects without guardrails, they tend to do a terrible job, often yielding solutions that are more convoluted than they need to be, or just plain incorrect.

    I find it is yet another replacement for “pure labor”, where the most unintelligent part of programming, i.e. writing the code, is automated away. While I will still write code from scratch when I am trying to learn, I will likely be able to automate some code writing when I know exactly how to implement it in my head and have access to plenty of testing to guarantee correctness.

    • Serinus@lemmy.world · ↑42 ↓3 · 7 days ago

      People have trouble with the middle ground. AI is useful in coding, but it’s not a full replacement. That should be fine, except you’ve got the AI techbros and CEOs on one end thinking it will replace all labor, and you’ve got the backlash to that on the other end that wants to constantly talk about how useless it is.

        • MinnesotaGoddam@lemmy.world · ↑5 · 7 days ago

          the times i trust LLMs: when i am using them to look up stuff i have already learned but can’t remember and just need to refresh my memory. there’s no point memorizing shit i can look up and am not going to use regularly, and i’m the effective guardrail against the LLMs being wrong when i’m using them.

          the times i don’t trust LLMs: all the other times. if i can’t effectively verify the information myself, why am i going to an unreliable source?

          having to explain that nuance over and over gets old; it’s just shorter and easier to say the llm is an unreliable source. which it is. when i’m not being lazy, my output doesn’t need testing (it still gets at least 2 reviews, but the last time those reviews caught anything was years ago). the llm’s output always needs testing.

      • brianpeiris@lemmy.ca · ↑5 · edited · 7 days ago

        I suspect the problem is that there are many developers nowadays who don’t care about code quality, actual engineering, or maintenance. So the people who are complaining are right to be concerned: there is going to be a ton of slop code produced by AI-bro developers, and the developers who actually care will be left to deal with the aftermath. I’d be very happy if lead developers were prepared to try things with AI and, importantly, to throw the output away if it doesn’t meet coding standards. Instead, I think even lead developers and CTOs are chasing “productivity” metrics, which just translates to a ton of sloppy code.

        • Serinus@lemmy.world · ↑1 · 7 days ago

          Yeah, I don’t plan to leave in two years, so I’m motivated to not say “oh fuck” when I have to maintain the thing I built later.

          Plus, you know, I don’t want people to groan when they have to work on my code.

    • lonesomeCat@lemmy.ml · ↑5 · 7 days ago

      The thing is, you know how it is in your head, and you need to lay out that entire context.

      And after that you MUST review the code, because you never know. I wouldn’t call it automation if I have to double-check EVERY TIME.

      • definitemaybe@lemmy.ca · ↑2 · 6 days ago

        It’s great for coding things where you don’t care if it gets them wrong, though. Like, I vibe-coded a JavaScript injection to add a client-side accessibility feature to a website running a fairly complex tech stack. I don’t know JavaScript, but I know how to code, and I know enough HTML and CSS to do simple things.

        It failed quite a few times, but each time I just needed to refresh the page for a clean slate, tell the LLM how it fucked up, and try again. In about an hour, I had a functional script I could inject in the site to bolt on a new feature.

        I was reading the code along the way, so I know what it’s doing for the most part (not some of the JavaScript-specific things, like why there are extra brackets in places I wouldn’t expect, but whatever). It wasn’t doing anything dangerous.

        Not mission critical. A small block of code to do one simple thing. There was no real downside or cost of failure, aside from wasted time. And it’s small enough that it’s easy to understand from scratch; it’ll be fairly easy to update and maintain.
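        A minimal sketch of the kind of injected helper described above. This is purely hypothetical: the function name and attributes are made up for illustration, and plain objects stand in for DOM elements so the logic can be read (and run) without a browser.

```javascript
// Hypothetical sketch of a small client-side accessibility patch:
// give icon-only "buttons" an accessible name derived from a title
// attribute, falling back to a supplied default label.
// Plain objects stand in for DOM elements here.
function addMissingAriaLabels(elements, fallbackLabel) {
  let patched = 0;
  for (const el of elements) {
    // Skip elements that already have an accessible name.
    if (el.attributes["aria-label"]) continue;
    const label = el.attributes["title"] || fallbackLabel;
    el.attributes["aria-label"] = label;
    patched += 1;
  }
  return patched; // how many elements were patched
}

// Stand-ins for what document.querySelectorAll(...) might return.
const fakeButtons = [
  { attributes: { title: "Search" } },        // gets label "Search"
  { attributes: { "aria-label": "Close" } },  // already labeled, skipped
  { attributes: {} },                         // gets the fallback label
];

const count = addMissingAriaLabels(fakeButtons, "Unlabeled button");
console.log(count);                                   // 2
console.log(fakeButtons[0].attributes["aria-label"]); // "Search"
console.log(fakeButtons[2].attributes["aria-label"]); // "Unlabeled button"
```

        In a real browser the elements would come from something like `document.querySelectorAll`, and a failed attempt costs nothing more than a page refresh, which is exactly why this kind of low-stakes script is a comfortable fit for LLM iteration.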

        On the other hand, it sounds like Microslop and NVidia (and many others) are using AI slop in complex, mission-critical projects. I’d be nervous for their future, if I cared about them.

  • Owl@mander.xyz · ↑25 · 6 days ago

    So they read them, and the patches were good (according to this message).

    Why the hate, then?

  • zieg989@programming.dev · ↑164 ↓4 · 7 days ago

    I would not be surprised if Anthropic actually hired a real developer to make these PRs as a marketing stunt.

    • In 2021, when Amazon launched its first “just walk out” grocery store in the UK in Ealing, west London, this newspaper reported on the cutting-edge technologies that Amazon said made it all possible: facial-recognition cameras, sensors on the shelves and, of course, “artificial intelligence”.
      An employee who worked on the technology said that actual humans – albeit distant and invisible ones, based in India – reviewed about 70% of sales made in the “cashier-less” shops as of mid-2022.

      Source: The Guardian

      UK AI company builder.ai has been tricking customers and investors for eight years – selling an advanced code-writing AI that, it turns out, is actually an Indian software farm employing 700 human developers.

      Source: ACS Information Age

      • baguettefish@discuss.tchncs.de · ↑12 ↓1 · edited · 7 days ago

        builder AI was genuine AI; it’s just that the company also did contracted development with real humans at the same time. journalists got confused.

        there’s a really good youtube documentary i watched that actually got into the tools and software used, but i can’t find it anymore. either way, you can’t dress up human coding as AI. it’s not fast enough.

    • BestBouclettes@jlai.lu · ↑193 ↓1 · 7 days ago

      Well, if the model detected an issue, and a human tested it to make sure it was real and then fixed it, I think that’s an acceptable use of AI tools.

  • General_Effort@lemmy.worldOP · ↑95 ↓2 · 7 days ago

    (In case someone has been living under a rock in the last 48 hours. Anthropic’s new model “Mythos” has been finding a lot of new vulnerabilities. This is about patching one.)

  • CannonFodder@lemmy.world · ↑84 ↓3 · 7 days ago

    AI tools can detect potential vulnerabilities and suggest fixes. You can still go in by hand, verify the problem, and carefully apply a fix.

    • shirasho@feddit.online · ↑32 ↓1 · 7 days ago

      AI is actually SUPER good at this, and it’s one of the few places I think AI should be used (as one of many tools, ignoring the awful environmental impacts of AI and assuming an on-prem model). AI is also good at detecting code performance issues.

      With that said, all of the recommended fixes should be applied by hand.

      • _hovi_@lemmy.world · ↑11 ↓3 · 7 days ago

        Yeah, I would also add: ignoring how the training data is usually sourced. I agree AI can be useful, but it just feels so unethical that I find it hard to justify.

        I’m a big LLM hater atm, but once we’re using models that are efficient, local, and trained on ethically sourced data, I think I could finally feel more comfortable with it all. It can’t be writing code for me, though: why would I want the bot to do the fun part?

        • shirasho@feddit.online · ↑5 ↓2 · 7 days ago

          Exactly my thought. I got into software development because designing and writing good code is fun. It is almost a game to see how well you can optimize it while keeping it maintainable. Why would I let something else do that for me? I am a software engineer, not a prompt writer.

  • Onno (VK6FLAB)@lemmy.radio · ↑17 ↓1 · 7 days ago

    Hold on, wasn’t one of the “features” of the “leaked” Assumed Intelligence source code the “human”-like version?

  • sun_is_ra@sh.itjust.works · ↑28 ↓66 · 7 days ago

    Maybe he meant the code quality was so good it’s like a human wrote it.

    After all, if the code is good and follows all the project’s best practices, why reject it just because an AI wrote it? That’s racism against machines.

    • Mark with a Z@suppo.fi · ↑45 · 7 days ago

      One big reason people outright reject AI-generated code is that it shifts the work from the author to the reviewer. AI makes it easier to produce low-effort commits that look good on the surface but are deeply flawed. So far, LLMs don’t match the wisdom of an experienced software dev.

      • bamboo@lemmy.blahaj.zone · ↑11 · 7 days ago

        This is what happened with FFmpeg when Google tried the same thing to promote its models. If the code is good and doesn’t put unnecessary burden on the reviewer, then that’s great. But when the patches are sloppy or the review load is overwhelming, it doesn’t help the project; it hinders it.

        • Serinus@lemmy.world · ↑3 · 7 days ago

          It’s almost like there should be a human in the loop to guide and review what the AI is doing.

          The thing works a lot better when I give it smaller chunks of work that I know are possible. It works best when I know how to implement it myself and it just saves me from looking up all the syntax.

      • sun_is_ra@sh.itjust.works · ↑7 · 7 days ago

        totally agree. the same problem exists with published scientific papers.

        I just assume that since this code submission was made by Anthropic itself, probably to demonstrate how good their AI has become (I don’t know the actual background to this story), the FFmpeg team gave it more consideration than it would a random amateur’s.

      • Samsy@lemmy.ml · ↑41 ↓1 · 7 days ago

        That was rude to my wife-chatbot. Apologize to her, here: https://…

        • BremboTheFourth@piefed.ca · ↑19 · 7 days ago

          LLMs will never be people. Computers might be, one day in the very distant future. But literally every piece of the current AI hype train is just hype. LLMs could, maybe, at best, be a single piece of a much larger puzzle for bringing consciousness into being. But the “Just Add More Compute Bro!” mantra is just tech bros doing their market hype thing. It has as much chance of giving rise to consciousness as my PC has whenever I add another hard drive.

          • obelisk_complex@piefed.ca · ↑4 ↓10 · edited · 7 days ago

            “LLMs will never be people”

            Boy oh boy, you’re not gonna like this one bit: https://www.npr.org/2014/07/28/335288388/when-did-companies-become-people-excavating-the-legal-evolution

            (To be clear, I understand you think you covered this with “computers may be”, but my point is different: the law is often dumb, and you would be amazed at what politicians who don’t understand tech, or get paid not to understand it, will pull off.)

            Edit: Downvotes from people who missed the point. You can’t say “LLMs will never be people” because you simply can’t guarantee your/our lawmakers won’t be that stupid.

      • lIlIlIlIlIlIl@lemmy.world · ↑5 ↓32 · 7 days ago

        It’s possible to leverage the same human quality, called “hate,” that underpins racism. It’s the same ugly human behavior. You can call it whatever you want; it’s still ugly.

        • zarkanian@sh.itjust.works · ↑3 · 7 days ago

          Humans have been hating software since the dawn of computing. Do you get upset when people say bad things about Windows? And if not, why is it different with LLMs?

        • insufferableninja@sh.itjust.works · ↑1 ↓5 · 7 days ago

          We have a word for the concept you’re thinking of. It’s called bigotry. Racism is race-based bigotry. Anti-AI bigotry is reasonable and awesome, and is just called bigotry.

          • zarkanian@sh.itjust.works · ↑5 · 7 days ago

            No, you can’t have bigotry against software. At least, not currently.

            Maybe in the future somebody will figure out how to make a sapient AI, like you see in science fiction, and then you can say that somebody is bigoted against it. We don’t have sapient AI, though, so this is simply prejudice.

    • lath@lemmy.world · ↑52 ↓2 · 7 days ago

      If it’s racism, it’s also slavery. Can’t have one without the other here.