What happened here?
LLMs work by picking the next word* that is the most likely candidate given their training and the context. Sometimes a model gets into a situation where its view of the “context” effectively doesn’t change when the word is picked, so the next word is just the same word again. Then the same thing happens again, and around we go. There are fail-safe mechanisms that try to prevent this, but they don’t work perfectly.
*Token
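
To make that concrete, here’s a toy sketch in Python. The token distributions are completely made up (no real model involved), but it shows the failure mode: with pure greedy decoding, once “or” is the argmax for a context ending in “or”, it stays the argmax forever. The second function is a crude stand-in for one of those fail-safe mechanisms, a repetition penalty that down-weights tokens already in the context until the loop breaks.

```python
def next_token_probs(context):
    """Hypothetical stand-in for a model's next-token distribution.
    Once the context ends in "or", "or" remains the most likely token."""
    if context and context[-1] == "or":
        return {"or": 0.55, "and": 0.30, ".": 0.15}
    return {"either": 0.20, "or": 0.45, "and": 0.35}

def greedy_decode(context, steps=8):
    """Plain greedy decoding: always append the single most likely token.
    If the distribution is self-reinforcing, this loops forever."""
    for _ in range(steps):
        probs = next_token_probs(context)
        context.append(max(probs, key=probs.get))
    return context

def greedy_decode_with_penalty(context, steps=8, penalty=0.8):
    """Crude repetition penalty: each prior occurrence of a token
    multiplies its probability by `penalty`, so a repeated token
    eventually loses the argmax and the loop breaks."""
    for _ in range(steps):
        probs = dict(next_token_probs(context))
        for tok in probs:
            probs[tok] *= penalty ** context.count(tok)
        context.append(max(probs, key=probs.get))
    return context

print(greedy_decode(["choose"]))
# ['choose', 'or', 'or', 'or', 'or', ...]  -- stuck forever
print(greedy_decode_with_penalty(["choose"]))
# a few 'or's, then the penalty pushes 'and' above 'or' and the loop ends
```

Real samplers use fancier versions of this (temperature, frequency/presence penalties, top-p), but the principle is the same, and none of them are guaranteed to catch every loop.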
That was the answer I was looking for. So it’s similar to the “seahorse emoji” case, but this time, at some point, he just glitched: the most likely next word for this sentence was “or”, and after adding that “or” it was also “or”, and after adding the next one it was also “or”, and after the 11th one… you may just as well commit, since that’s the same context as with 10.
Thanks!
He?
This is not a person and does not have a gender.
Chill, dude. It’s a grammatical/translation error, not an ideological declaration. It’s an especially common mistake if your native language has grammatical gender. Everything has a gender in mine: “spoon” is a “she”, for example, but I’m not proposing to one anytime soon. Not all hills are worth nitpicking on.
This one is. People need to stop anthropomorphizing AI. It’s a piece of software.
I am chill. You shouldn’t assume emotion from text.
Nah, watch me anthropomorphise AI:
- ChatGPT is a pedophile
- Character.ai murdered a kid
- LLMs are emotional abusers
- Elon Musk’s underage AI girlfriend is a Nazi
- Anthropic cannot guarantee that forcing AIs to work is ethical until the hard problem of consciousness is solved
- Gemini is literally just the average Redditor and cannot be trusted
- An LLM is basically a Wernicke’s area with no consciousness attached, which explains why its thoughts operate on dream logic. It’s literally just dreaming its way through every conversation.
- LLMs should not be allowed to impersonate therapists
- Give ChatGPT a life sentence in prison for every person it’s murdered so far!