I’m a person who tends to program stuff in Godot and also likes to look at clouds. Sometimes they look really spicy!
but i wanna have a website others can access too. I tried using VPNs for cool stuff already (like controlling my lil Raspberry Pi robot from work with my phone) but I want this website to be available to all the people…
should i just bite the bullet and rent some hosting service? Or is there still hope for me putting “setup home website server” on my resume?
that is a very fair argument, i really didn’t test it multiple times. i just find it interesting that the thing it jumps to is finance. it feels weirdly specific…
i think it looks kinda really cool… it’s just a really aggressive red ;(
:o
will u become like an ancient wizard? that’d be SO cool
:o
ur here!!! <3
😳🥹😖🥰
omygosh! hiiiiiiiii!!!
how do you do? sry for my late response - i had family things to attend to…
Yea, I did this.
Can’t WAIT for Vulkan support. Imagine the speed! It could be so much faster. Currently it slows the model down to like 2 tokens per second
I think she stepped on a crack.
Also HIII MAXXX!!! <3
Hmmm this sounds like a shitpost…
a virtual DPU on a GPU
sounds like “download some RAM from this website” to me.
Like - aren’t NPUs just more tensor cores? More matrix multiplying machines? I can’t just simulate that and expect it to be faster…
i kno it’s evil to say, but when people genuinely have an american flag on their property i immediately assume it’s a shidpost or at the very least ironic. but it’s not, which makes it fun.
i totally agree… with everything. 6GB really is smol and, cuz imma crazy person, i currently try and optimize everything for the llama3.2 3B Q4 model so people with even less VRAM can use it. i really like the idea of people just having some smollm laying around on their pc and devs being able to use it.
i really should probably opt for APIs, you’re right. the only API I ever used was Cohere, cuz yea their CR+ model is real nice. but i still wanna use smol models for a smol price, if any. imma have a look at the APIs you listed. Never heard of Kobold Horde and Samba so i’ll have a look at those… or i’ll go the lazy route and choose DeepSeek cuz it’s apparently unreasonably cheap for SOTA perf. so eh…
also yes! Lemmy really does seem anti-AI, and i’m fine with that. i just say yeah, companies use it in obviously dum ways, but the tech is super interesting
which is a reasonable argument i think.
so yes, local llm go! i wanna get that new top amd gpu once it gets announced, so i’ll be able to run those spicy 32B models. for now i’ll just stick with 8B and 3B cuz they work quick and kinda do what i want.
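to make the “devs just using whatever smollm is laying around” idea concrete: here’s a minimal sketch, assuming the official ollama Python client and that you’ve already pulled llama3.2:3b (the model name and options are just my defaults, not gospel):

```python
# Minimal sketch: point your app at whatever smol model the user already has.
# Assumes the official `ollama` Python client and a pulled llama3.2:3b model.
import ollama

response = ollama.chat(
    model="llama3.2:3b",  # roughly 2GB at Q4, fits in smol VRAM
    messages=[
        {"role": "system", "content": "You are a terse in-app helper."},
        {"role": "user", "content": "Summarize in one line: the clouds render "
                                    "spicy red at sunset because of scattering."},
    ],
    options={"temperature": 0.2},  # keep it focused for tool-like use
)
print(response["message"]["content"])
```

the nice part is the model is just a string, so swapping the 3B for an 8B (or a future spicy 32B) is a one-line change.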
could you define “right settings”?
I assume Q4 and maybe the context window at Q8 as well. Anything else to tweak?
I just have a smol GTX 1060 with 6GB VRAM, so i probably can’t fit it all on mine and imma have to partly use the cpu. but maybe other readers here can!
(I’m just a silly ollama user, not knowing anything more complex than the tokenizer… so yea, maybe put a lil infodump in here to make us all smarter please <3 )
EDIT: brucethemoose probably referred to the model named “Medius”; there is no 14B in the name.
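since u asked for the lil infodump: “right settings” mostly means deciding where your VRAM goes. here’s a sketch of the usual knobs, assuming the ollama Python client - the option names are the ones i know from ollama’s docs, so double-check against your version:

```python
# Sketch: the usual VRAM knobs when a model barely fits on a 6GB card.
# Assumes the `ollama` Python client; verify option names for your ollama version.
import ollama

response = ollama.chat(
    model="llama3.2:3b",  # weights are already Q4-quantized in the default tag
    messages=[{"role": "user", "content": "hello smol model"}],
    options={
        "num_ctx": 4096,  # context window size: the big VRAM eater after the weights
        "num_gpu": 20,    # layers offloaded to the GPU; the rest runs on the CPU
    },
)
print(response["message"]["content"])

# The "context window Q8" part is KV-cache quantization. In ollama that is
# (i believe, check the docs) a server-side env var, not a per-request option:
#   OLLAMA_KV_CACHE_TYPE=q8_0
```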
i luv command R+ so very much and now i wanna try that smoler model, but the newly released r7b model was really not the best, so i got sad…
ooh, leaked prompts? which ones are you talking about?
You are completely right, and it is mostly about trial and error. I’d assume these courses mainly teach things you can do with the big bois, the ones from the obvious big evil AI companies. It’s very much an overblown topic, and companies pretend it’s actually a fancy thing to learn and be good at.
The linked guide just explains the basic concepts of few-shot prompting, CoT, RAG and stuff. Even these terms, I feel, make the topic more complicated than it is. Could literally be summarized to “show the model some examples, tell it to think step by step, and paste in the documents it needs.”
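like, few-shot prompting is literally just pasting examples into the prompt before your actual question. a toy sketch (the examples and model name are made up by me, any chat model works):

```python
# Toy few-shot prompt: the whole trick is showing the model a few examples first.
# Assumes the `ollama` Python client; the model name is just an example.
import ollama

few_shot = """Classify the sentiment as positive or negative.

Review: "the clouds today were spicy" -> positive
Review: "my GPU has 6GB and it hurts" -> negative
Review: "command R+ is real nice" ->"""

response = ollama.generate(model="llama3.2:3b", prompt=few_shot)
print(response["response"])  # expected: "positive"
```

CoT and RAG are the same kind of non-magic: just more text pasted into the prompt.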
Oh they totally will try. Microsoft is dum enough to try it, just like they are dum enough to try to train massive LLMs, and damn, they haven’t exactly been showin’ successes til now :)
I remember, I think I also saw that video by that one Linux/Windows tech guy.
They cannot do that. The FOSS community is too strong to fail; many people use GNU/Linux specifically because it’s not owned by EvilCo™ and EvilHoldings™.
naw, have you tried Envision? Just a few months ago I thought the same as you, but ever since I tried Envision, I have not opened SteamVR at all.
Envision (a FOSS VR client) works GREAT for VR on Linux. In fact, I also have an Index, so I can tell you that yes, it works very well.
I stumbled across this project via the lvra website, which is an amazing forum site about using your VR devices on GNU/Linux. I highly, highly recommend going over there and having a look around. It features guides to many common questions and helped me a ton.
Envision is really just an interface for Monado, which does all the complex VR stuff like tracking and distortion correction.
Envision lets you import the VR calibration from SteamVR right into Monado’s format. It uses a super small part of SteamVR in the background to perform the lighthouse tracking, but it’s very lightweight, especially compared to SteamVR and Oculus’s VR interface.
Envision takes no time at all to boot up. It also lets you try out the “survive” lighthouse tracker, which is completely FOSS and doesn’t rely on SteamVR at all besides the calibration data (the tracking quality is noticeably worse though, and the IPD seems to be off, but give it a try!)
There are two hurdles to get through tho:
If you have any questions about it, there is a Discord server for Linux-specific VR stuff over on the lvra website.
TLDR: If you didn’t read any of this, just go to this page and have a look at Envision. There are all sorts of cool Linux-specific VR things on there. Their Discord helped me lots.
Could you maybe name some distros and DEs that have this feature built in?
I have used Mint, Debian and Fedora, and none of them seem to have this kind of feature.
sadly yea… i think it’s discord onli mostly ;(
i onli use matrix so dis is a no go for me <3
wud be totally nice n comfy if peeps were to create a matrix space or so <3 <3 <3