• 0 Posts
  • 3 Comments
Joined 3 years ago
Cake day: July 24th, 2023

  • The scenario begins with AI agents undergoing a “jump in capability”.

    Might as well stop reading there. Another fluff piece about how useful and capable AI supposedly is, disguised as a doomsday scenario. I’m so sick of reading this bullshit. “Agentic AI” based on LLMs does not work reliably yet and very likely never will.

    If you complain about bugs in traditional (deterministic) software, you ain’t seen nothing yet. A probabilistic system such as an LLM might or might not book the correct flight for you. It might give you the information you have asked for or it might delete your inbox instead.

    Because the system is probabilistic, everything you do with it succeeds or fails only with some probability. This really is the dumbest timeline.



  • While I (almost) agree with the conclusion, this blog post is full of bullshit and unproven assumptions. The “AI is democratising software development” argument in particular always makes me cringe. It is wrong on so many levels. Software development is not an ivory tower: everyone with an internet connection has had free access to all the resources needed to learn the necessary skills, for decades. Everyone with a genuine interest in learning that stuff, willing to put in a bit of effort, was able to do so.

    What LLMs provide is not democratisation but the illusion that everyone can produce software, effortlessly and without any skills whatsoever. Software development is much more than churning out lines of code that seem to work.

    The vibe-coding approach is like trying to build your own car without any of the skills: you ask an AI to assemble it from individual parts taken from different models, from a Lada to a Ferrari. The end result might be drivable, but it will be neither secure nor efficient nor fast nor stable nor maintainable. A Frankenstein car. Everyone with half a brain would agree that’s not a good idea, yet with LLMs people just pretend it’s fine.