I’ve been writing a lot of code with AI. For every half hour the AI needs to write the code, I need a full week to revise it into good code. If you don’t do that hard work, the AI is going to overwhelm the reviewers with garbage.
So, what you’re saying is, you’re not writing code.
I’m writing code because it is often faster than explaining to the AI how to do it. I’m spending this month seeing what AI can do; it ranges from saving me a lot of tedious effort to making a large mess I have to clean up.
I totally get it. I’ve been critical of using AI for coding at work and have pleaded for us to stop using it (management is forcing it, and less experienced folks want it). So one of the proponents challenged me to use a very specific tool, supposedly one of the best AI slop generators out there.
So I spent a lot of time writing thorough specs for a task, phrased in a way the tool should be able to handle. It failed miserably and didn’t produce any usable result. So I asked the guy who challenged me to help me refine the specs, tweak the tool, and make everything perfect. The thing still failed hard.

I was told this was because I was forcing the tool into decisions it couldn’t handle, and that I should give it more freedom. So we did that: it made up rules of its own and then didn’t follow them. Another failure. So we split the task into smaller pieces; it still couldn’t handle it. So we split it up even further, to a ridiculous level, at which point it would definitely be faster to just write the code manually. It’s also no longer a realistic test, since we’ve pretty much worked out the end result ourselves and are just coaching the tool to get there.

And even then it keeps making mistakes and has to be corrected all the time, ignoring the specs, the code guidelines, and best practices. Another really annoying thing is that it keeps changing code it shouldn’t touch: now that the steps are so small, it keeps messing up work it did previously. And the comments it generates are crazy. Either just about every line gets a comment and every function gets a whole story, or there are zero comments. As soon as you tell it to limit comments to where they are useful, it deletes all of them, even the ones it added earlier or that we wrote manually.
I’m ready to give up on the thing and have the use of AI tools for coding limited, if not stopped entirely. But I know how that discussion will go: “Oh, you used tool A? No, you should be using tool B, it’s much better. Maybe the tools aren’t there yet, but they’re getting better all the time, so we’ll benefit any day now.”
When I hear even experienced devs being enthusiastic about AI tools, I really feel like I’m going crazy. They suck a lot and aren’t useful at all (on top of the thousand other issues with AI), so why do people like them? And why have we bet the entire economy on them?
Not sure why you’re getting downvotes; AI is a good tool when used properly.
It’s not. It’s an abomination that should be wiped off the face of this earth, and its shills should be shunned.
I mean, yes, but that’s also a bit nuclear. Machine learning has real, genuinely ethical and responsible uses… The problem is that society has yet to agree on a philosophy of what those are, and most business-first minded people have SUPER shitty, or even completely missing, moral compasses.
So, effectively, what you say: yes. But technically, with much nuance and many clauses: not entirely.
We are clearly not ready as a species to handle it. Though maybe we’ll burn the shit out of our hands badly enough over the coming century to learn. Either way, it’s DEFINITELY not an “ignore all risk and run blindly at this shiny new flame” thing, the way a lot of people seem to think of and treat it.
The thing is: “AI” can be a useful tool in the hands of a competent programmer, media creator, and so forth… BUT it is literally the dark side of the Force. Just to bring in the Yoda quote:

Luke:
… Is the dark side stronger?
Yoda:
No, no, no. Quicker, easier, more seductive.
The problem is that it allows a horde of fools to create software that is, at best, dysfunctional and, at worst, really dangerous. Yes, it was always possible to fake photographs and create false video evidence of events, but it required money, knowledge, or both. Now any person can, with nearly no training, create realistic-looking pictures and videos, leading to god knows what.
And don’t get me started on the environmental aspects of this technology…
Perhaps someday, when the hype is gone (and hopefully most of the shitty people pushing it with it), it might be possible to use this technology in the right way… but the hype and the push for this technology will not go away until we push back at least as hard as the proponents push for it.