A screenshot of this question was making the rounds last week, but this article covers testing against all the well-known models out there.

Also includes outtakes on the ‘reasoning’ models.

  • MangoCats@feddit.it · 18 hours ago

    I feel that a lot of the improvement in the recent batch of model releases comes from better vetting of their training data - basically the opposite of model collapse.

    Nothing requires an LLM to train on the entire internet.

    • Zos_Kia@jlai.lu · 13 hours ago

      That’s an excellent point! On that topic, I recently listened to an interview with the founder of EleutherAI, who focuses on training small language models. She said they were able to train a 1B-parameter reasoning model on 50K Wikipedia articles and carefully curated RL traces. The thing can run on your smartphone and is at parity with much larger models trained on trillions of tokens.

      She also scoffed at Common Crawl, saying it contained mostly cookies and porn. Her attitude was something like “no wonder the big labs need to slurp trillions of tokens when the tokens are such low quality”. Very interesting approach; if you understand French, I can only recommend the interview.
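
      To give a concrete idea of what “curating” raw web crawl even means, here is a toy sketch of the kind of heuristic quality filter people run over crawl dumps. This is not EleutherAI’s actual pipeline; the rules and thresholds below are made up for illustration:

      ```python
      # Toy quality filter for web-crawl text. The heuristics and
      # thresholds are illustrative, not any lab's real pipeline.
      import re

      # Obvious site chrome / consent-banner boilerplate.
      BOILERPLATE = re.compile(
          r"(accept all cookies|cookie policy|subscribe to our newsletter|"
          r"terms of service|all rights reserved)", re.IGNORECASE)

      def keep_document(text: str) -> bool:
          words = text.split()
          if len(words) < 100:                 # too short to be useful prose
              return False
          if BOILERPLATE.search(text):         # cookie banners, footers, etc.
              return False
          mean_word_len = sum(len(w) for w in words) / len(words)
          if not 3.0 <= mean_word_len <= 10.0: # gibberish or token soup
              return False
          alpha_ratio = sum(c.isalpha() for c in text) / max(len(text), 1)
          if alpha_ratio < 0.7:                # markup- or symbol-heavy pages
              return False
          # Pages whose lines mostly repeat are nav menus, not prose.
          lines = [l.strip() for l in text.splitlines() if l.strip()]
          if lines and len(set(lines)) / len(lines) < 0.5:
              return False
          return True

      docs = ["Accept all cookies to continue...",
              "A long, substantive article " * 50]
      print([keep_document(d) for d in docs])  # [False, True]
      ```

      Real pipelines stack dozens of rules like this, plus deduplication and model-based scoring - which is roughly what the “vetting of training data” upthread cashes out to.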