We’re introducing Llama 4 Scout and Llama 4 Maverick, the first open-weight natively multimodal models with unprecedented context support and our first built using a mixture-of-experts (MoE) architecture.

  • BetaDoggo_@lemmy.world · 2 days ago

    Seems pretty underwhelming. They’re comparing a 109B model to a 27B one and it’s only kind of close. I know it’s only 17B active, but that’s irrelevant for local users, who are far more likely to be limited by memory than by speed (rough numbers sketched below).
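
    A minimal back-of-the-envelope sketch of the memory point: with an MoE model, every expert has to sit in RAM/VRAM even though only ~17B parameters are active per token, so the 109B total is what sets the memory floor. The dense 27B comparison model and the precision levels here are assumptions for illustration, not figures from the announcement.

    ```python
    # Rough sketch (illustrative assumptions, not official figures): memory needed
    # just to hold model weights. All MoE experts must be resident, so the 109B
    # total parameter count, not the 17B active count, determines the footprint.

    def weight_memory_gib(total_params_billions: float, bits_per_param: float) -> float:
        """GiB required to store the given number of parameters at a given precision."""
        bytes_total = total_params_billions * 1e9 * bits_per_param / 8
        return bytes_total / 2**30

    models = {
        "Llama 4 Scout (109B total, ~17B active)": 109,  # size cited in the thread
        "Dense 27B (hypothetical comparison)": 27,       # assumed stand-in model
    }

    for name, params_b in models.items():
        for bits in (16, 8, 4):
            print(f"{name}: {weight_memory_gib(params_b, bits):6.1f} GiB at {bits}-bit")
    ```

    Even at 4-bit quantization the 109B weights alone land around 50 GiB, versus roughly 13 GiB for a dense 27B, which is the gap that matters for anyone running locally.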