The problem is simple: consumer motherboards don’t have that many PCIe slots, and consumer CPUs don’t have enough lanes to run 3+ GPUs at full PCIe gen 3 or gen 4 speeds.

My idea was to buy 3-4 cheap computers, slot a GPU into each, and use them in tandem. I imagine this will require some sort of agent running on each node, with the nodes connected over a 10GbE network. I can get a 10GbE network running for this project.

Does Ollama or any other local AI project support this? A server motherboard with CPU gets expensive very quickly, so this would be a great alternative.

Thanks

  • Xanza@lemm.ee
    3 days ago

    consumer motherboards don’t have that many PCIe slots

The number of PCIe slots isn’t the most limiting factor on consumer motherboards. It’s the number of PCIe lanes your CPU supports and the motherboard actually exposes.

It’s difficult to find non-server hardware that can do something like this, because you need a significant number of PCIe lanes to run several GPUs at full speed. Using an M.2 SSD? Even more difficult, since each NVMe drive typically consumes four more lanes.

Your 1-GPU-per-machine plan is a decent approach. A Kubernetes cluster with device plugins is likely the best way to accomplish what you want here. It would involve setting up your cluster and installing the GPU drivers and device plugin on each node, which exposes the device to the cluster. Then, when you create your Ollama container, make sure the GPU is exposed to the container (with NVIDIA hardware this happens via the container runtime's prestart hook).
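As a minimal sketch of that last step: once a device plugin advertises the GPU as a resource, the pod spec just requests it via `resources.limits`. This assumes NVIDIA's device plugin (which registers the `nvidia.com/gpu` resource name); other vendors use different resource names.

```yaml
# Sketch of an Ollama pod requesting one GPU.
# Assumes the NVIDIA device plugin DaemonSet is already running on each node.
apiVersion: v1
kind: Pod
metadata:
  name: ollama
spec:
  containers:
    - name: ollama
      image: ollama/ollama
      resources:
        limits:
          nvidia.com/gpu: 1        # one GPU exposed to this container
      ports:
        - containerPort: 11434     # Ollama's default API port
```

In practice you'd wrap this in a Deployment or DaemonSet so each node runs one Ollama replica bound to its local GPU.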

The issue with doing this is that 10GbE is very slow compared to a GPU's PCIe link. You’re networking all these GPUs to do some cool stuff, but then you’re severely bottlenecking yourself on the network. All in all, it’s not a very good plan.
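To put rough numbers on the bottleneck, here's a back-of-the-envelope comparison using peak spec figures (real-world throughput is somewhat lower on all three):

```python
# Back-of-the-envelope interconnect bandwidth comparison, in GB/s.
# Peak figures only; protocol overhead reduces real throughput.

ten_gbe = 10 / 8              # 10 Gbit/s link ≈ 1.25 GB/s
pcie3_x16 = 16 * 0.985        # PCIe 3.0: ~0.985 GB/s per lane after 128b/130b encoding
pcie4_x16 = 16 * 1.969        # PCIe 4.0: ~1.969 GB/s per lane

print(f"10GbE:        {ten_gbe:.2f} GB/s")
print(f"PCIe 3.0 x16: {pcie3_x16:.2f} GB/s")
print(f"PCIe 4.0 x16: {pcie4_x16:.2f} GB/s")
print(f"PCIe 4.0 x16 is roughly {pcie4_x16 / ten_gbe:.0f}x faster than 10GbE")
```

So even a gen 3 x16 slot outruns the 10GbE link by more than an order of magnitude, which is why the network dominates.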

    • marauding_gibberish142@lemmy.dbzer0.comOP
      3 days ago

      I agree with your assessment. I was indeed going to run k8s, just hadn’t figured out what you told me. Thanks for that.

And yes, I realised that 10GbE is just not enough for this stuff. But another commenter told me to look for used Threadripper and EPYC boards (which are extremely expensive for me), which gave me the idea of looking for older Intel CPU + motherboard combos. Maybe I’ll have some luck there. I was going to use Talos in a VM with all the GPUs passed through to it.