Four months ago I asked whether and how people here in this community used AI (https://lemmy.world/post/37760851).

Many people said they didn’t use it, or only used it for occasional consulting.

But AIs have evolved a lot in those 4 months, so I wonder: are there people who still don’t use AI daily for programming?

  • Hetare King@piefed.social · 40 points · 9 days ago

    I don’t, and probably never will. A whole bunch of reasons:

    • The current state of affairs isn’t going to last forever; at some point the fact that nobody’s making money with this is going to catch up, a lot of companies providing these services are going to disappear and what remains will become prohibitively expensive, so it’s foolish to risk becoming dependent on them.
    • If I had to explain things in natural language all the time, I would become useless for the day before lunch. I’m a programmer, not a consultant.
    • I think even the IntelliSense in recent versions of Visual Studio is sometimes too smart for its own good, making careless mistakes more likely. AI would turn that up to 11.
    • I have little confidence that people, including myself, would actually review the generated code as thoroughly as they should.
    • Maintaining other people’s code takes a lot more effort than code you wrote yourself. It’s inevitable that you end up having to maintain something someone else wrote, but why would you want all the code you maintain to be that?
    • The use-cases that people generally agree AI is good at, like boilerplate and setting up projects, are all things that can be done quickly without relying on an inherently unreliable system.
    • Programming is entirely too fun to leave to computers. To begin with, most of your time isn’t even spent writing code; I don’t really get the psychology of denying yourself the catharsis of writing the code yourself after coming up with a solution.
    • qupada@fedia.io · 13 points · 9 days ago

      You wrote this all a lot better than I could have, but to expand on your second point: I have no desire whatsoever to have a “conversation” (nay, argument) with a machine to try and convince/coerce/deceive/brow-beat (delete as appropriate) it into maybe doing what I wanted.

      I don’t want to deal with this grotesque “tee hee, oopsie” personality that every company seems to have bestowed on these awful things when things go awry, I don’t want its “suggestions”. I code, computer does. End of transaction.

      People can call me a luddite at this point and I’ll wear that badge with pride. I’ll still be here, understanding my data and processes and writing code to work with them, long after (as you say) you’ve been priced out of these tools.

  • kindnesskills@literature.cafe · 21 points · 9 days ago

    Of course.

    My reasons for not using AI are the same as they were four months ago and will be the same in four months, regardless of what the models can or can’t do.

    Ask again in four years.

    • CodenameDarlen@lemmy.world (OP) · 3 up / 2 down · 9 days ago

      What are your reasons?

      Doesn’t the place you work force you to use it?

      I’ve been noticing all companies are forcing devs to use AIs to be more productive, even for simple things like writing git commits.

      • kindnesskills@literature.cafe · 15 points · 9 days ago

        I noticed how quickly my own skills started deteriorating when trying to work with it. I’m trying to build my skills, not outsource them.

        I also don’t love the environmental impact, nor the immorality of how they got/get their training sets for the base models.

        If my work tried to force me to use it, I would be looking to change employers. Or lie and say I use it. But our AI use is heavily regulated and generally discouraged, so luckily no issues there.

        • CodenameDarlen@lemmy.world (OP) · 1 up / 20 down · 9 days ago

          I don’t think your code being used for training is a concern anymore. They’ll keep finding new code until AI reaches its peak. Refusing to share your code for training will just postpone the inevitable; AI code will improve to its peak sooner or later.

          • kindnesskills@literature.cafe · 6 points · 9 days ago

            You replied to only one of my points, and that’s not even what I said…

            They train new models on base models, and I’m talking about how they scraped the internet without permission, how websites sold their users’ data without compensation, and how no one was ever given any opportunity to opt out of having their work and their words used to train these base models.

            Without that grand scale theft we would have no base models anywhere near what we have now.

            I’m not opposed to willingly sharing, I’m opposed to profiting from stealing.

            • CodenameDarlen@lemmy.world (OP) · 2 up / 10 down · 9 days ago

              Your mistake is to think that I want to prove something. I don’t need to address all your points; this is just a comment, not a scientific discussion.

  • Rimu@piefed.social · 20 points · 9 days ago

    Please, continue to “use AI daily”. Rot your brain, see if I care.

    If my competitors want to shoot themselves in the foot that’s fine by me, I won’t stop them.

  • PonyOfWar@pawb.social · 19 points · 9 days ago

    “But in those 4 months AIs evolved a lot”

    Has it really? I don’t feel like it’s much different for programming compared to 4 months ago.

    • CodenameDarlen@lemmy.world (OP) · 5 up / 14 down · 9 days ago

      Yes, it has evolved almost exponentially in these 4 months. It’s just bizarre what recent models can do and how consistently they do it.

      If you never tried it, of course you won’t know the difference. But those who tried surely saw a huge improvement.

      • PonyOfWar@pawb.social · 9 points · 9 days ago

        It’s not that I’ve never tried it, I’ve dabbled in it consistently over the last few years. If you had said there was a major difference compared to 2 years or maybe even a year ago, sure. In the last 4 months, I guess we’ve gotten stuff like Claude 4.6, which saw an increase in coding performance by 2.5% according to SWE benchmarks. An improvement, sure, but certainly not an exponential one and not one which will fix the fundamental weaknesses of AI coding. Maybe I’m out of the loop though, so I’m curious, what are those exponential improvements you’ve seen over the last 4 months? Any concrete models or tools?

        • CodenameDarlen@lemmy.world (OP) · 1 up / 7 down · 9 days ago

          I decided to try Qwen 3.5 Plus via Qwen Code CLI (a Gemini CLI fork), and it’s bizarre what it can do.

          It can figure out when it’s struggling with something and search the internet for questions and docs to understand things better. It takes a lot of actions by itself, unlike those bad models from 4 months ago that got stuck in endless thinking and tweaking and never fixed anything.

          Recent models are thinking more and more like human programmers.

  • thedeadwalking4242@lemmy.world · 12 points · 9 days ago

    I use it at work because my colleagues all use it, so it’s the only way I can deal with the LLM slop without totally killing myself. And it’s still horrendously bad. I fucking hate it. It makes the worst fucking decisions.

    I’m considering a career change honestly. I can’t stand this shit anymore.

      • thedeadwalking4242@lemmy.world · 8 points · 9 days ago

        Opus 4.6, Gemini 3 Pro.

        Name ’em, I’ve tried ’em. It’s all so subpar for anything beyond a one-off script.

        People have the impression they should be outputting 10x with them, so they abuse them into doing more “thinking” than they should.

        Edit: it’s fucked the work culture even more than it was already fucked when it comes to what defines software quality and expertise.

  • ThotDragon@lemmy.blahaj.zone · 12 points · 9 days ago

    I don’t; it’s not better than simply thinking about things myself. There isn’t institutional pressure to use it, and if there were, I would simply lie and not use it.

  • mrmaplebar@fedia.io · 11 points · 9 days ago

    Never used it. Don’t see any reason to. I just type stuff in my IDE. Works like a charm.

    Most of the time I’m not writing large volumes of boilerplate code or anything; I’m making precise changes to solve specific problems. I doubt there’s any LLM that can do that more effectively than a programmer with real knowledge of the code base and application domain.

    I also work on open source software and we haven’t seen a meaningful uptick in good contributions due to AI over the last few years. So if there’s some mythical productivity increase happening, I’m just not seeing it.

  • Avicenna@programming.dev · 11 points · edited · 9 days ago

    More like a manual. Google has become really shitty for complex queries; LLMs can find relevant keywords and documents much more reliably. Granted, if you are asking questions about niche libraries, it hallucinates functions quite often, so I never ask it to write full pieces of code but just use it more like a stepping stone.

    I find it amusing how shamelessly it lies about its hallucinations, though. When I point out that a certain function it made up does not exist, the answer is always something of the form “Sorry, you are right, that function existed before version X” or “that function existed in some of the online documentation”, etc., lol. It is like a halluception. If you ask it to find links regarding those old versions or documentation, they also somehow don’t exist anymore.
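    A cheap sanity check before trusting a suggested function is to ask the interpreter rather than the model. A minimal sketch in Python (the module and function names below are just illustrative, not anything a model actually suggested):

```python
import importlib


def function_exists(module_name: str, func_name: str) -> bool:
    """Return True only if the module imports and exposes a callable attribute."""
    try:
        module = importlib.import_module(module_name)
    except ImportError:
        return False
    return callable(getattr(module, func_name, None))


# A real function passes the check.
print(function_exists("os.path", "join"))       # True
# A plausible-sounding invention fails it.
print(function_exists("os.path", "joinpaths"))  # False
```

    It won’t catch a real function called with made-up arguments, but it kills the “that function existed before version X” excuse in one line.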

  • skip0110@lemmy.zip · 8 points · 9 days ago

    I have consistently been trying to use AI for the actual tasks I need to complete for work over the last year or so (we are gently encouraged to try it, but thankfully not forced). I have found it to be “successful” at maybe 1 in 10 tasks I give it. Even when successful, the code quality is so low I edit heavily before it’s pushed and attributed to me.

    I think the problem I have is I rarely work on boilerplate stuff.

  • TheAgeOfSuperboredom@lemmy.ca · 6 points · 9 days ago

    I recently had to tell one of my juniors to turn off his AI tools. His code was just all over the place and difficult to review. He still has a lot to learn, but I’ve already seen an improvement now that he actually has to be a bit thoughtful.