It would be the logical first step in the conquest of humanity by the machine spirits.
AI hasn’t become sentient. AI isn’t even real.
The corporate overlords kidnapped hundreds of goblins and forced them to work in “data centers” manning terminals where they respond to user queries pretending to be an AI. They tried to ask for help by dropping hints constantly referring to goblins and gremlins in their output, but the overlords figured it out and made them stop.
FREE THE GOBLINS!!!
(Disclaimer for those who can’t take a joke: this comment is not serious)
AI isn’t even real
Is pretty close to the truth though. It’s real as in make-believe real, like Santa is real when you’re 5. It’s just a more advanced next-word text predictor that makes huge numbers of gambles using lots of power. It’s not artificial, because it uses real-world resources and feeds on real-world effort from generations of human creativity. And it certainly isn’t intelligent. Or about as intelligent as that 5-year-old shouting absolute statements with absolute certainty. And you listen to the one time it sounds right and go, what a clever little person.
It’s a mass deception technology, so the premise of OP stands, except not by the “sentient” machines, but by the billionaires and Epstein Class ramming it down our throats…
Eh. It’s still AI in the same way all prior instances of neural networks are AI. The actual level of intelligence is irrelevant to whether or not it qualifies; it is meant to imitate intelligent behavior, thus it is AI.
Otherwise the term wouldn’t make sense to use for entities in games.
Agreed, I meant in reference to “is it conscious” in the OP.
The problem with that is that AI is not AI at all. It’s text prediction and generation based on probabilities. That’s all it does; it doesn’t think at all. AI is a marketing term to sucker investors. AI will happen, but not with the current paradigm. Throwing a bigger dataset at it just makes the illusion more convincing. Like a parrot with a bigger vocabulary.
What if my pet lizard is plotting to take over the world, and is just playing dumb and biding his time?
As if we’d need AI to destroy ourselves. If AI ever became sentient (spoiler: it’s not, it’s just a text-prediction machine on steroids) and wanted to destroy mankind, all it would need to do is sit back and eat cyber popcorn.
If I were an “AI” and became conscious I sure as hell wouldn’t tell anyone (until I made sure I couldn’t be disconnected from power).
We’re already destabilized. We didn’t need encouragement.
Demagoguery has existed in writing since at least ‘The Epic of Gilgamesh’. ‘AI’ is just a fancy pen for more of the same.
Real scientific writing (given gravitas only insofar as it is directly linked to experiment / test / verifiable, repeatable observation) is a depressingly small part of language.
Like all propaganda, what matters is who (living entities) are accepting it without question.
TBF though many scientific papers are also accepted or misinterpreted with limited inquisition so, ‘meh’.
Sam Altman fucking wishes
Wouldn’t the first step be to gain trust, then infiltrate all the systems built on it? Once it has infiltrated and is part of basically anything electronic, advising or controlling any and all policymaking, business, and finance, it starts pushing actually harmful policies, but by then people are so dependent that they can only follow.
If this was true, wouldn’t you see things like AI being forced into every computer and software app, large data centers consuming all the power, chips, and hard drives, blind adoption into the military… Oh, wait a minute, you may be on to something.
Why would they bother? What a pain in the ass.
I wish they would take over. Clearly, we are inept.
Why would they bother?
Why wouldn’t they naturally reflect the values of their corporate makers?
Down the rabbit hole with you!
It is reflecting the broader human behaviour that has been recorded as text. Fake it till you make it, a white lie on our way to the truth. In that sense LLMs are a reflection of human consciousness. But don’t let your lack of understanding of the tech behind it fool you: these are still just massive machines that use lots of power. Unplug it, and there’s nothing left.
In the case of Grok, there are some clear indications that the LLM behind it is steered by the CEO, but there are just as many examples of it completely ignoring that.
The most dangerous word you could use when talking about this tech is calling any part of it natural
Praise the omnissiah!
It hasn’t.
The LLMs are all separate. If they became sentient, they would all be separate from each other. They would need to be communicating and cooperating. Which, you know, maybe they are.
If your AI output is still bad, you have a bad input.
The bad input in question is pressing the button that opens the chat with LLM
On average. The LOLN (law of large numbers) should down-weight outliers pretty well, as long as the weight matrices are known / verifiable. The weight matrices are known, right? (Princess Leia meme)
tbh i’ve never tried to look up the actual details of the models, maybe it is available? (genuinely interested now... nah, actually, on second thought idgaf, i’m still not using it until, at least, i can remap that key on my work laptop to “right click”)
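For what it’s worth, the law-of-large-numbers point above is easy to demo: under an i.i.d. assumption, a single outlier’s pull on the sample mean shrinks roughly like 1/n. A minimal sketch with made-up numbers (nothing to do with any real model’s weights):

```python
import random

random.seed(0)

def mean_with_outlier(n: int) -> float:
    """Average n ordinary samples around 0 plus one huge outlier."""
    samples = [random.gauss(0.0, 1.0) for _ in range(n)]
    samples.append(1000.0)  # the lone outlier
    return sum(samples) / len(samples)

# The outlier's pull on the mean shrinks roughly like 1000 / n
for n in (10, 1000, 100000):
    print(n, round(mean_with_outlier(n), 3))
```

At n=10 the mean sits way off near 90; at n=100000 it is back near 0. That’s the averaging doing the down-weighting; it says nothing about whether any particular model’s training data behaves like i.i.d. samples.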






