• hushable@lemmy.world · 8 days ago

    Wasn’t there a guy at Google who claimed they had a conscious AGI, and his proof was that he asked the chatbot if it was conscious and it answered “yes”?

    • lugal@lemmy.dbzer0.com · 8 days ago

      It was a bit more than that. The AI was expressing fear of death and stuff, but nothing that wasn’t in the training data.

      • Schadrach@lemmy.sdf.org · 8 days ago

        They tend to do that and go on existential rants after a session runs too long. Figuring out how to stop them from crashing out into existential dread has been an actual engineering problem they’ve needed to solve.

      • [deleted]@piefed.world · 8 days ago

        Plus it was given prompts that would lead it to reproduce that part of the training data, because chatbots don’t produce output without being prompted.