This is another big win for the red team, at least in my view. They developed a “fully open” 3B-parameter model family trained from scratch on AMD Instinct™ MI300X GPUs.

AMD is excited to announce Instella, a family of fully open state-of-the-art 3-billion-parameter language models (LMs) […]. Instella models outperform existing fully open models of similar sizes and achieve competitive performance compared to state-of-the-art open-weight models such as Llama-3.2-3B, Gemma-2-2B, and Qwen-2.5-3B […].

As shown in this image (https://rocm.blogs.amd.com/_images/scaling_perf_instruct.png), this model outperforms other current “fully open” models and comes close to the open-weight-only ones.

A step forward; thank you, AMD.

PS: I’m not doing AMD propaganda, but I thank them for helping and contributing to the open source world.

  • TheGrandNagus@lemmy.world · 19 hours ago

    Properly open source.

    The model, the weights, the dataset, etc. — every part of this seems to be open. It is one of the very few models that comply with the Open Source Initiative’s definition of open source AI.

    • foremanguy@lemmy.ml (OP) · 19 hours ago

      Look at the picture in my post.

      There were other open models before, but they performed well below the “fake” open source models like Gemma or Llama. Instella is almost at the same level, which is a great improvement.