List of icons/services suggested:

  • Calibre
  • Jitsi
  • Kiwix
  • Monero (Node)
  • Nextcloud
  • Pi-hole
  • Ollama (Should at least be able to run tiny-llama 1.1B)
  • OpenMediaVault
  • Syncthing
  • VLC Media Player (media server)
  • SmokeyDope@lemmy.worldOP · 8 months ago

    Thank you, that's useful to know. In your opinion, what context size is the sweet spot for Llama 3.1 8B and similar models?

    • brucethemoose@lemmy.world · 8 months ago

      > 4-core i7, 16GB RAM and no GPU yet

      Honestly as small as you can manage.
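      To make "as small as you can manage" concrete, here is a rough back-of-the-envelope sketch of how the KV cache grows with context size. The architecture numbers (32 layers, 8 KV heads via grouped-query attention, head dimension 128, fp16 cache) are my assumptions for Llama 3.1 8B, and this ignores the model weights themselves and any quantized-cache options a runtime may offer:

      ```python
      # Rough KV-cache size estimate for a Llama-3.1-8B-like model.
      # Assumed architecture: 32 layers, 8 KV heads (GQA), head dim 128, fp16 cache.
      def kv_cache_bytes(context_tokens, n_layers=32, n_kv_heads=8,
                         head_dim=128, bytes_per_elem=2):
          # K and V each store n_kv_heads * head_dim elements per layer per token.
          per_token = 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem
          return context_tokens * per_token

      for ctx in (2048, 4096, 8192, 16384):
          gib = kv_cache_bytes(ctx) / 2**30
          print(f"{ctx:>6} tokens -> {gib:.2f} GiB KV cache")
      # 2048 tokens -> 0.25 GiB ... 16384 tokens -> 2.00 GiB
      ```

      On a 16GB machine that also has to hold ~5GB of quantized 8B weights plus the OS, this is why a few thousand tokens of context is a more comfortable target than the model's advertised maximum.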

      Again, you will get much better speeds out of "extreme" MoE models like DeepSeek-V2-Lite Chat: https://huggingface.co/YorkieOH10/DeepSeek-V2-Lite-Chat-Q4_K_M-GGUF/tree/main

      Another thing I'd recommend is running kobold.cpp instead of Ollama if you want to get into the nitty-gritty of LLMs. It's more customizable and (ultimately) faster on more hardware.
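      As a rough sketch of what that looks like, a kobold.cpp launch might resemble the following. The model path is a placeholder, and the exact flag names should be checked against the version you install; this is a command-line sketch, not a verified recipe:

      ```shell
      # Sketch: launch kobold.cpp with a downloaded GGUF model and a modest
      # context size; flag names assumed from the KoboldCpp CLI, verify locally.
      python koboldcpp.py \
          --model ./models/your-model.Q4_K_M.gguf \
          --contextsize 4096 \
          --threads 6
      # Then open the local web UI it serves (default http://localhost:5001).
      ```

      Keeping `--contextsize` modest is the easiest lever for staying within RAM on a machine without a large GPU.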

      • SmokeyDope@lemmy.worldOP · 8 months ago

        That's good info for low-spec laptops. Thanks for the software recommendation; I need to do some more research on the model you suggested. I think you confused me with the other guy, though. I'm currently working with a six-core Ryzen 2600 CPU and an RX 580 GPU. edit: no worries, we're good. It was still great info for the ThinkPad users!