

I believe you can run 30B models on a single used RTX 3090 24GB; at least, I run 32B DeepSeek-R1 on one using Ollama. Just make sure you have enough RAM (> 24GB).
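If someone wants to try the same thing, it's a one-liner once Ollama is installed (my understanding is that the plain 32b tag maps to the Qwen distill at 4-bit quant, but check the library page if you want a specific quant):

    # pulls the model on first run, then drops into a chat;
    # layers are offloaded to the GPU automatically if one is detected
    ollama run deepseek-r1:32b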
If you want to use supercomputer software, set up the SLURM scheduler on those machines. There are many tutorials on how to do distributed GPU computing with SLURM. I have it on my todo list.
https://github.com/SchedMD/slurm
https://slurm.schedmd.com/
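I haven't set mine up yet, but for reference a minimal GPU batch script looks roughly like this (job name and time limit are placeholders, and --gres assumes GPUs are configured as a GRES resource on the cluster):

    #!/bin/bash
    #SBATCH --job-name=gpu-test    # name shown in the queue
    #SBATCH --nodes=1              # run on a single machine
    #SBATCH --gres=gpu:1           # request one GPU
    #SBATCH --time=01:00:00        # wall-clock limit

    nvidia-smi                     # print which GPU the job got

Submit it with sbatch job.sh and watch it with squeue.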
There is no difference at the core of those services; all of them are visual interfaces on top of the file versioning software called Git. You can easily use Git without any of these services. If you're starting to explore these technologies, I personally recommend getting used to git and ssh first. You can read more on how to use Git over SSH here: https://git-scm.com/book/en/v2/Git-on-the-Server-Getting-Git-on-a-Server
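For a taste of how simple self-hosting is, the whole thing over SSH is roughly this (server path and user are placeholders):

    # on the server: create a bare repository
    git init --bare /srv/git/project.git

    # on your machine: clone and push over ssh
    git clone user@myserver:/srv/git/project.git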
Answers:
I use text files and grep
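E.g. a recursive, case-insensitive search with line numbers over a notes folder (the path is just an example):

    grep -rin "slurm" ~/notes/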
Looking at the names, I can see them making feeds free the same way OpenAI is making AI open.
Replace your OS before they replace you. Linux - the last adventure.
It’s just a small step from suggesting words to suggesting sentences, and people act like somebody took their universe away. Wake up: 90% of you are already clicking on word suggestions from your phone keyboards, and sentence suggestions are just a slight expansion of that. At least 70% of us are already programmed, corporate-generated puppets. Technology already won; people got traded in for robots. Drones are more dangerous than tanks and planes. We have a satellite network in space and big server warehouses gathering all our moves 24/7. The last step is just ahead of us: robots with hands in our houses that can execute Order 66 and choke us all to death.
I run this one: https://ollama.com/library/deepseek-r1:32b-qwen-distill-q4_K_M with this frontend: https://github.com/open-webui/open-webui on a single RTX 3090 with 64GB of RAM. It works quite well for what I wanted it to do. I wanted to connect 2x 3090 cards with SLURM to run 70B models but haven’t found time to do it.
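If anyone wants to reproduce the setup: Ollama serves the model and Open WebUI talks to it. The Docker command from the Open WebUI README is roughly this (port mapping and volume name are the defaults; adjust as needed):

    docker run -d -p 3000:8080 \
        --add-host=host.docker.internal:host-gateway \
        -v open-webui:/app/backend/data \
        --name open-webui --restart always \
        ghcr.io/open-webui/open-webui:main

Then open http://localhost:3000 and it should pick up the local Ollama instance.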