• 0 Posts
  • 7 Comments
Joined 3 years ago
Cake day: July 12th, 2023


  • No, what?

    The premise is “people wouldn’t choose to do certain work unless they were coerced into it.” I retorted, “I want the work you think I’d have to be coerced into doing.”

    Manual labor is undervalued, which makes it “one of the jobs that people have to be coerced into doing.” By stating my desire to do it over “high-value” mental labor, I undercut their assertion that there are jobs that require coercion to get done. There are people who want to clean, cook, do manual labor, handle administration and accounting, clean up shit, build, basically everything a society needs to exist. Coercion need not apply.

  • It doesn’t learn from interactions, no matter the scale. Each model is static; it only appears to react to a conversation because the whole conversation is literally fed back to it as the prompt (you write something, it responds, and your next message is sent along with the entire prior exchange). That’s why conversations have context limits and why the LLM gets slower the longer the conversation goes on.

    Training is a separate, offline process: new data is fed in and the weights are adjusted, sometimes with other models used to score and tweak the outputs. Data from conversations could end up as training data for the next model, but you “teaching” it in a chat definitely won’t do anything in the grand scheme of things. It doesn’t learn; it predicts the next token from weights that are fixed by the time you talk to it. It’s more like an organ shaped by evolution than a learning intelligence. There’s a rough sketch of what that looks like below.
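
    To make the mechanics concrete, here is a toy Python sketch (my own illustration, not how any real model or API works; the names and the tiny “weight” table are made up): a frozen lookup table plays the part of the trained weights, and the chat loop just resends the whole transcript each turn. Talking to it never changes the table; only a separate training run would produce new weights.

    ```python
    # Toy sketch only: a tiny frozen lookup table stands in for a trained model's
    # weights, and a chat loop resends the whole transcript every turn.

    FROZEN_WEIGHTS = {
        "hello": {"there!": 0.9, "again.": 0.1},
        "there!": {"how": 0.8, "what": 0.2},
        "how": {"can": 1.0},
        "can": {"I": 1.0},
        "I": {"help?": 1.0},
    }

    def generate(prompt: str) -> str:
        """Greedy next-token prediction from fixed weights; nothing is learned here."""
        # A real model attends over the entire prompt; this toy only looks at the
        # last word of the latest user line.
        last_user_line = [l for l in prompt.splitlines() if l.startswith("User:")][-1]
        token = last_user_line.split()[-1].lower()
        reply = []
        while token in FROZEN_WEIGHTS:
            # Pick the highest-weight next token. FROZEN_WEIGHTS is read-only,
            # just like a deployed model's parameters.
            token = max(FROZEN_WEIGHTS[token], key=FROZEN_WEIGHTS[token].get)
            reply.append(token)
        return " ".join(reply) or "(no prediction)"

    history: list[str] = []

    def chat_turn(user_message: str) -> str:
        history.append(f"User: {user_message}")
        # The ENTIRE prior conversation is flattened into one prompt each time.
        # That resending is the only "memory", and it's why context limits exist
        # and long chats get slower.
        prompt = "\n".join(history) + "\nAssistant:"
        reply = generate(prompt)
        history.append(f"Assistant: {reply}")
        return reply

    print(chat_turn("hello"))  # -> there! how can I help?
    print(chat_turn("hello"))  # same weights, same answer: the chat taught it nothing
    ```

    In the real thing the table is billions of parameters and the prompt is tokenized rather than split on spaces, but the statelessness is the same: the weights never change between your messages.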