The problem is simple: consumer motherboards don’t have that many PCIe slots, and consumer CPUs don’t have enough lanes to run 3+ GPUs at full PCIe gen 3 or gen 4 speeds.

My idea was to buy 3-4 computers for cheap, slot a GPU into each of them, and run them in tandem. I imagine this would require some sort of agent running on each node, with the nodes connected over a 10 GbE network. I can get a 10 GbE network running for this project.

Does Ollama or any other local AI project support this? Getting a server motherboard with a CPU is going to get expensive very quickly, but this would be a great alternative.

Thanks

  • Overspark@feddit.nl · 3 days ago

    A 10 Gbps network is MUCH slower than even the smallest, oldest PCIe slot you have. So cramming the GPUs into any old slot that'll fit is a much better option than distributing them over multiple PCs.

    • litchralee@sh.itjust.works · 3 days ago

      I agree with the idea of not using a 10 Gbps network for GPU work. Just one small nitpick: PCIe Gen 1 in an x1 slot only runs at 2.5 GT/s, which translates to about 2 Gbit/s of usable bandwidth after 8b/10b encoding overhead, making it about 5x slower than a 10 Gbps line-rate network.

      I sincerely hope OP is not running modern AI work on a mobo with only Gen 1…
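      To make that nitpick concrete, here's a quick sketch of the arithmetic. The transfer rates and line-encoding overheads (8b/10b for Gen 1/2, 128b/130b for Gen 3+) are the published PCIe figures; everything else is just multiplication:

      ```python
      # Effective per-lane PCIe bandwidth vs. a 10 Gbps network link.
      # (raw transfer rate in GT/s, line-encoding efficiency) per generation.
      PCIE_GENS = {
          1: (2.5, 8 / 10),      # Gen 1: 2.5 GT/s, 8b/10b encoding
          2: (5.0, 8 / 10),      # Gen 2: 5.0 GT/s, 8b/10b encoding
          3: (8.0, 128 / 130),   # Gen 3: 8.0 GT/s, 128b/130b encoding
          4: (16.0, 128 / 130),  # Gen 4: 16.0 GT/s, 128b/130b encoding
      }

      def pcie_gbps(gen: int, lanes: int = 1) -> float:
          """Usable bandwidth in Gbit/s for a given PCIe generation and lane count."""
          raw_gt, efficiency = PCIE_GENS[gen]
          return raw_gt * efficiency * lanes

      for gen in PCIE_GENS:
          bw = pcie_gbps(gen)
          verdict = "slower" if bw < 10 else "faster"
          print(f"Gen {gen} x1: {bw:5.2f} Gbit/s ({verdict} than 10 GbE)")
      ```

      So a Gen 1 x1 slot really is ~2 Gbit/s (5x below 10 GbE), and even a Gen 3 x1 slot loses to a 10 Gbps link; it takes Gen 4, or more lanes, before the slot pulls ahead.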

      • marauding_gibberish142@lemmy.dbzer0.comOP · 3 days ago

        Thanks for the comment. I don't want to use a networked distributed cluster for AI if I can help it. I'm looking at other options and maybe I'll find something.

    • marauding_gibberish142@lemmy.dbzer0.comOP · 3 days ago

      Your point is valid. Originally I was looking for deals on cheap CPU + motherboard combos that would offer a lot of PCIe lanes without being very expensive, but I couldn't find anything good for EPYC. I'm now looking at used Supermicro motherboards and maybe I can get something I like. I don't want to do networking for this project either, but it was the only idea I could think of a few hours back.