Yeah honestly i can’t get anything done with a raspberry. Maybe i host too many services ?
Doing the Lord’s work in the Devil’s basement
jesus christ what a nice burn
I do it because it’s easy and it’s free but if it was difficult i’d probably still do it.
I’ve only had issues with FitGirl repacks. I think there’s an optimisation they use for low-RAM machines that doesn’t play well with Proton.
That’s the problem with imaginary enemies. They have to be both ridiculously incompetent, and on the verge of controlling the whole world. Sounds familiar doesn’t it?
If I understand these things correctly, the context window only affects how much text the model can “keep in mind” at any one time. It should not affect task performance outside of this factor.
Yeah, i did some looking up in the meantime and indeed you’re gonna have a context-size issue. That’s why it’s only summarizing the last few thousand characters of the text : that’s the size of its attention window.
There are some models fine-tuned to an 8K-token context window, some even to 16K, like this Mistral brew. If you have a GPU with 8 GB of VRAM you should be able to run it using one of the quantized versions (Q4 or Q5 should be fine). Summarization should still be reasonably good.
If 16K isn’t enough for you then that’s probably not something you can do locally. However you can still run a larger model privately in the cloud. Hugging Face for example allows you to rent GPUs by the minute and run inference on them; it should only cost you a few dollars. As far as i know this approach should still be compatible with Open WebUI.
There are not that many use cases where fine tuning a local model will yield significantly better task performance.
My advice would be to choose a model with a large context window and just throw the whole text you want summarized into the prompt (which is basically what a RAG would do anyway).
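To make the context-window point above concrete, here’s a minimal sketch of the truncation behaviour: when a prompt overflows the window, the model effectively only attends to the most recent tokens, which is why only the tail of a long document gets summarized. Whitespace-split words stand in for a real tokenizer here, and `fit_to_context` is a hypothetical helper, not how llama.cpp or any real runtime actually handles overflow.

```python
def fit_to_context(words, n_ctx):
    """Keep only the most recent n_ctx 'tokens' (whitespace words,
    as a crude stand-in for real tokenizer output). This mirrors
    what happens when a prompt overflows the model's context
    window: everything before the window is simply dropped."""
    return words[-n_ctx:]

# A 10,000-word document fed to a model with a 4096-token window:
doc = [f"w{i}" for i in range(10_000)]
visible = fit_to_context(doc, 4096)

# Only the last 4096 words survive, so any summary the model
# produces covers just the tail of the document.
```

So with a 4096-token window, roughly the first 6,000 words of that document never reach the model at all, which matches the “only summarizing the last few thousand characters” symptom.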
There was also a time when most of the universe was at the perfect temperature and density to cook pizza, I guess.
Arch Linux is a good alternative to Linux and is a good choice for most use cases where you can use it for a variety of tasks and and it is a good fit to Linux and Linux.
If you like to write, I find that story boarding with stable diffusion is definitely an improvement. The quality of the images is what it is, but they can help you map out scenes and locations, and spot visual details and cues to include in your writing.
Very useful in some contexts, but it doesn’t “learn” the way a neural network can. When you’re feeding corrections into, say, ChatGPT, you’re making small, temporary, cached adjustments to its data model, but you’re not actually teaching it anything, because by its nature, it can’t learn.
But that’s true of all (most ?) neural networks ? Are you saying neural networks are not AI and that they can’t learn ?
NNs don’t retrain while they are being used. They are trained once, and after that they cannot learn new behaviour or correct existing behaviour. If you want to make them better you need to run them a bunch of times, collect and annotate good/bad runs, then re-train them from scratch (or fine-tune them) with this new data. This applies to LLMs too, because LLMs are neural networks.
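A toy illustration of that inference/training split, with the obvious simplification that the “network” here is a single weight `y = w * x`. All the names are made up for the sketch; the point is just that prediction only reads the weight, while a training step writes it.

```python
class TinyNet:
    """A one-weight 'network' y = w * x, trained by gradient
    descent on squared error. Illustrative only."""

    def __init__(self, w: float = 0.0):
        self.w = w

    def predict(self, x: float) -> float:
        # Inference: the weight is read, never written.
        return self.w * x

    def train_step(self, x: float, target: float, lr: float = 0.05) -> None:
        # Training: the weight is updated from the error gradient.
        # d/dw (w*x - target)^2 = 2 * (w*x - target) * x
        grad = 2 * (self.w * x - target) * x
        self.w -= lr * grad

net = TinyNet()
before = net.w

# "Using" the model a hundred times changes nothing:
for _ in range(100):
    net.predict(3.0)
assert net.w == before

# Re-training on collected data is what actually moves the weights
# (here toward w = 2.0, since the target for x = 3.0 is 6.0):
for _ in range(100):
    net.train_step(3.0, 6.0)
```

Deployed LLMs only ever run the `predict` path; your corrections in a chat never trigger anything like `train_step` on the underlying weights.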
I was banned from awful.systems yesterday for posting a personal experience about GenAI that wasn’t negative 💀
holy shit you’re right i don’t know where i got the idea that it was the same format
To clarify : We’re talking about differences in the codebase here. They are still exactly the same game, with some very minor disparities in certain mechanics.
The technical differences tend to disappear over time because they rely more and more on the datapack format, which is shared between the two codebases.
That’s because Arch is very old and back in the day it was prone to breakage. Ironically, it is now much more stable and easier to maintain than an Ubuntu derivative, but people will still recommend Mint to beginners for some reason.
No, it’s actually very simple stuff. Arch is surprisingly stable and easy to manage, and has been for the better part of a decade.
Even without seeders, you can sometimes be lucky and resurrect old torrents that have been kept in cache by providers such as real debrid
When you read that stuff on reddit there’s a parameter you need to keep in mind : these people are not really discussing Lemmy. They’re rationalizing and justifying why they are not on Lemmy. Totally different conversation.
Nobody wants to come out and say “I know mainstream platforms are shit and destroying the fabric of reality but I can’t bring myself to be on a platform unless it is the Hip Place to Be”. So they’ll invent stuff that paints them in a good light.
You’ll still see people claiming that Mastodon is unusable because you have to select an instance - even though you don’t have to, you can just type Mastodon into Google, click the first link, and create an account in 2 clicks. It’s been that way for ages. But the people still using Twitter need the excuse, because otherwise what does that make them?