Backstory here: https://www.404media.co/ars-technica-pulls-article-with-ai-fabricated-quotes-about-ai-generated-article/
Personally I think this is a good response. I hope they stay true to it in the future.
Link to the archived version of the article in question.
I actually like the editor’s note. Instead of naming-and-shaming the author (Benj Edwards), it blames “Ars Technica” as an institution. It also states they looked for further issues. It sounds surprisingly sincere for a corporate apology.
Blaming AT as a whole is important because it acknowledges Edwards wasn’t the only one fucking it up. Whatever a journalist submits needs to be reviewed by at least a second person, exactly for this reason: to catch dumb mistakes. Either that system is not in place or it isn’t working properly.
I do think Edwards is to blame, but I wouldn’t go so far as saying he should be fired, unless he has a history of doing this sort of dumb shit. (AFAIK he doesn’t.) “People should be responsible for their tool usage” is not the same as “every infraction deserves capital punishment”; sometimes scolding is enough. I think @totally_human_emdash_user@piefed.blahaj.zone’s comment was spot on in this regard: he should’ve taken sick time off, but that would have cost him vacation time, and even being forced into that choice is a systemic problem. So ultimately it falls on his employer (AT) again.
deleted by creator
deleted by creator
Benj Edwards, the author responsible, has posted his side.
Why would he play with an AI toy while doing his job, and while sick?
Of course something was bound to happen.
You can’t empathize with someone having to work while sick and wanting to use a tool to make that work slightly easier?
I tend to empathize with the victims of plagiarism over the perpetrators of it.
I read them regularly for years until they started banning folk in the forums for pointing out how problematic it is for Eric Berger to still be slobbering on Elon’s knob.
Don’t think I’m missing much, though I do miss Beth Mole.
It is 23:43, and I can’t analyze this tonight. Ars has been good for a long while, and I enjoy their reporting. Having to reassess this is disappointing, but I’ve already had to feel this with the NYT and WaPo, so it’s not exactly a huge loss. Still, I want to fully investigate what happened before reaching a conclusion.
Rest assured, I will reach a conclusion. I don’t think I’ll like the one I expect to find, but that’s journalism for you. Until I’ve had the chance to fully examine what happened here, I’ll withhold judgment.
Why write this comment when you don’t have anything to say? I’m puzzled why I should care that you haven’t analyzed this yet.
There is no reason for you to care. I am informing users familiar with my writing and methods that this is now on my radar, but I can’t yet do it justice. I’m being honest about not being ready to perform analysis.
Ah, so you’re important, and you have followers waiting to hear what you have to say, and you don’t want to keep them in the dark about your schedule?
Sorry, I didn’t want to disturb a Main Character. Thank you for your kind response.
On Friday afternoon, Ars Technica published an article containing fabricated quotations generated by an AI tool and attributed to a source who did not say them. That is a serious failure of our standards. Direct quotations must always reflect what a source actually said.
That this happened at Ars is especially distressing. We have covered the risks of overreliance on AI tools for years, and our written policy reflects those concerns. In this case, fabricated quotations were published in a manner inconsistent with that policy. We have reviewed recent work and have not identified additional issues. At this time, this appears to be an isolated incident.
Ars Technica does not permit the publication of AI-generated material unless it is clearly labeled and presented for demonstration purposes. That rule is not optional, and it was not followed here.
We regret this failure and apologize to our readers. We have also apologized to Mr. Scott Shambaugh, who was falsely quoted.
Nothing about who put it in there or what you’re doing to them?
We are reinforcing our editorial standards following this incident.
It sounds like they will be reminding their team not to do that and scrutinizing articles more closely in the near future.
Someone deserves to be fired. Just imagine you’re paying someone to do a job and they completely outsource it to a machine in five seconds and then go home.
He wrote the article himself; he just got mixed up while experimenting with an AI tool to help him extract quotes from a blog entry. (He is the head AI writer, so learning about these tools is his job.) It was nonetheless his failure not to check the quotes he was copying from his notes to make sure he got them right… but an important bit of context is that he had COVID while doing all this. Now, arguably he should have taken sick time off instead of trying to work through it (as he admits), but that would have cost him vacation time, and the fact that he was even forced into making this choice is a systemic problem that is not being sufficiently acknowledged.
he had COVID while doing all this
I’ve had COVID before, it sucks but it doesn’t make you stupid.
he just got mixed up when experimenting
I don’t believe him.
Why don’t you believe him?
Because it’s completely ridiculous. What if he was just phoning it in? Is he really going to come out and say so?
So in other words, you are just making an assumption.
Calm down, that’s not what happened
Woah, they take the blame and apologize. This is not often seen and commands respect.