A screenshot of this question was making the rounds last week, but this article covers testing against all the well-known models out there.

It also includes outtakes on the ‘reasoning’ models.

  • JustTesting@lemmy.hogru.ch · 4 days ago

    10 tests per model seems like far too few, and they should give confidence intervals…

    The difference between 10/10 and 8/10 is just as likely due to chance as to any real difference. But some people will definitely use this to justify model choice.
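To make that concrete, here is a quick sketch in Python using the Wilson score interval (one reasonable choice of binomial confidence interval; the 10/10 and 8/10 scores are the only numbers taken from the thread):

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    phat = successes / n
    denom = 1 + z**2 / n
    center = (phat + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(phat * (1 - phat) / n + z**2 / (4 * n**2))
    return center - half, center + half

# With only 10 trials, the two intervals overlap heavily:
print(wilson_interval(10, 10))  # ≈ (0.72, 1.00)
print(wilson_interval(8, 10))   # ≈ (0.49, 0.94)
```

Since the intervals overlap so much, a 10/10 score is entirely consistent with the same underlying accuracy as an 8/10 score.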

    • [deleted]@piefed.world · 3 days ago

      It should get it wrong 0% of the time, because it is a computer that should have predictable results about basic things like requiring a car to be present for it to be washed.

      • JustTesting@lemmy.hogru.ch · 3 days ago

        I’m not talking about the quality of LLMs (they suck, in so many different ways…).

        I’m criticizing the experimental setup; it is not really statistically sound. Running 10 tests on each of 52 different models makes it quite possible for one model to come out correct 100% of the time purely by chance (even if the true probability is closer to 50%). Running 100 tests each might yield very different results, with none of the models answering correctly 100% of the time. Put another way, the p-values of the tests performed are pretty high, not <0.05, so the results don’t really say what they purport to say.
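A rough simulation of that multiple-comparisons effect (the per-answer accuracies below are illustrative assumptions, not the article's data; only the 52-model, 10-test setup comes from the thread):

```python
import random

def prob_some_model_aces(n_models=52, n_tests=10, p=0.8, n_sims=10_000, seed=0):
    """Estimate the chance that at least one of n_models scores
    n_tests/n_tests by luck alone, when each answer is an
    independent coin flip with success probability p."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        if any(
            all(rng.random() < p for _ in range(n_tests))
            for _ in range(n_models)
        ):
            hits += 1
    return hits / n_sims

# Exact value for comparison: 1 - (1 - p**n_tests)**n_models.
# At p = 0.8 a perfect 10/10 somewhere is near-certain (~0.997);
# even at p = 0.5 it still happens about 5% of the time.
print(prob_some_model_aces(p=0.8))
print(prob_some_model_aces(p=0.5))
```

So a single perfect score out of 52 models tells you very little on its own; it is exactly what you would expect from enough mediocre models taking a short quiz.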

        • [deleted]@piefed.world · 3 days ago

          I think the overall poor showing is pretty damning even if one or two models accidentally stumbled into being right 10/10 times.