I don’t care much about mathematical tasks, and code intelligence is only a minor preference; what I most want is overall comprehension and intelligence (for RAG and large-context handling). Anyway, I’m searching for any up-to-date benchmark that covers a wide variety of models.

    • SmokeyDope@lemmy.worldM · edited 23 days ago

      Cool, Page Assist looks neat, I’ll have to check it out sometime. My LLM engine is kobold.cpp, and I just use Open WebUI in the browser to connect.
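For anyone curious how a frontend like Open WebUI talks to kobold.cpp: kobold.cpp exposes an OpenAI-compatible API (by default on port 5001 under `/v1`). A minimal sketch of building such a request, assuming that default port and endpoint; check your own launch flags, and note the model name here is a placeholder:

```python
import json
import urllib.request

def build_chat_request(prompt: str,
                       base_url: str = "http://localhost:5001/v1") -> urllib.request.Request:
    """Build an OpenAI-style chat completion request for a local kobold.cpp server."""
    payload = {
        "model": "koboldcpp",  # placeholder; the server serves whatever model it loaded
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# To actually send it (requires a running server):
# with urllib.request.urlopen(build_chat_request("Hello")) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Any frontend that speaks the OpenAI chat API can point at the same base URL, which is why the browser-based Open WebUI setup works without extra glue code.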

      Sorry, I don’t really have good suggestions for you beyond just trying some of the more popular 1–4B models at a very high quant, if not full FP8, and seeing which works best for your use case.

      Llama 4b, Mistral 4b, Phi-3-mini, TinyLlama 1.5b, Qwen2 1.5b, etc. I assume you want a model with a large context size and good comprehension skills to summarize YouTube transcripts and webpage articles? At least, I think that’s what the add-on you mentioned suggested was its purpose.

      So look for models that prioritize those qualities over ones that try to specialize in a little bit of domain knowledge.
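The summarization use case above usually means splitting a long transcript into chunks that fit a small model’s context window before summarizing each piece. A rough sketch, using a word count as a crude stand-in for a real tokenizer (the window and overlap sizes are arbitrary examples):

```python
def chunk_text(text: str, max_words: int = 1500, overlap: int = 100) -> list[str]:
    """Split text into overlapping word-window chunks for summarization.

    Each chunk holds at most max_words words, and consecutive chunks
    share `overlap` words so sentences cut at a boundary still appear
    whole in one of the two chunks.
    """
    words = text.split()
    if len(words) <= max_words:
        return [" ".join(words)]
    chunks = []
    step = max_words - overlap
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + max_words]))
        if start + max_words >= len(words):
            break
    return chunks
```

Each chunk would then be sent to the model with a summarize prompt, and the per-chunk summaries concatenated or summarized again, which is why context size matters more here than domain knowledge.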

      • thickertoofan@lemm.eeOP · 21 days ago

        I checked out most of the models on the list, but 1B models are generally unusable for RAG.