

LLaMA can’t. Chameleon and similar ones can:
ChatMusician isn’t exactly new and the underlying dataset isn’t particularly diverse, but it’s one of the few models made specifically for classical music.
Are there any others, by the way?
I expected that recording would be the hard part.
I think some of the open-source ones should work if your phone is rooted?
I’ve heard that Google’s phone app can record calls (though it announces the recording aloud when it starts). Of course, it wouldn’t work if Google thinks it shouldn’t in your region.
By the way, Bluetooth headphones can have both speakers and a microphone. And Android can’t tell a peripheral device what it should or shouldn’t do with audio streams. Sounds like a fun DIY project if you’re into it, or maybe somebody sells these already.
Haven’t heard of all-in-one solutions, but once you have a recording, whisper.cpp can do the transcription:
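A minimal sketch of the transcription step, assuming whisper.cpp is built locally and you’ve downloaded one of the ggml models (all paths here are placeholders — adjust to your setup):

```python
import subprocess

# Placeholder paths -- point these at your whisper.cpp build and model.
WHISPER_BIN = "./whisper.cpp/main"
WHISPER_MODEL = "./whisper.cpp/models/ggml-base.en.bin"

def whisper_cmd(audio_wav: str, out_prefix: str) -> list[str]:
    # whisper.cpp expects 16 kHz mono WAV input; convert first with e.g.
    #   ffmpeg -i call.ogg -ar 16000 -ac 1 call.wav
    # -otxt writes the transcript to <out_prefix>.txt
    return [WHISPER_BIN, "-m", WHISPER_MODEL,
            "-f", audio_wav, "-otxt", "-of", out_prefix]

# Uncomment to actually run it:
# subprocess.run(whisper_cmd("call.wav", "call"), check=True)
```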
The underlying Whisper models are MIT.
Then you can use any LLM inference engine, e.g. llama.cpp, and ask the model of your choice to summarise the transcript:
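Something along these lines, assuming a llama.cpp build and a gguf model of your choice (binary and model paths are placeholders, and the prompt wording is just an example):

```python
import subprocess

LLAMA_BIN = "./llama.cpp/main"              # placeholder path to the llama.cpp binary
LLAMA_MODEL = "./models/model-q5_k_m.gguf"  # placeholder: any gguf model you like

def summarise_cmd(transcript: str, n_predict: int = 256) -> list[str]:
    # -m selects the model, -p is the prompt, -n caps the number of generated tokens
    prompt = "Summarise the following call transcript:\n\n" + transcript
    return [LLAMA_BIN, "-m", LLAMA_MODEL, "-p", prompt, "-n", str(n_predict)]

# Uncomment to actually run it:
# transcript = open("call.txt").read()
# subprocess.run(summarise_cmd(transcript), check=True)
```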
You can also write a small bash/python script to make the process a bit more automatic.
It would. But it’s a good option when you have computationally heavy tasks and communication is relatively light.
Once configured, Tor Hidden Services also just work (you may need to use some fresh bridges in certain countries if ISPs block Tor there though). You don’t have to trust any specific third party in this case.
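For reference, a hidden service is just a couple of lines in torrc — the directory and the ssh port mapping here are only examples:

```
HiddenServiceDir /var/lib/tor/my_service/
HiddenServicePort 22 127.0.0.1:22
```

After restarting Tor, the generated .onion address appears in the `hostname` file inside HiddenServiceDir.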
Don’t know much about the stochastic parrot debate. Is my position a common one?
In my understanding, current language models don’t have any understanding or reflection, but the probabilistic distributions of the languages that they learn do - at least to some extent. In this sense, there’s some intelligence inherently associated with language itself, and language models are just tools that help us see more aspects of nature than we could earlier, like X-rays or sonar, except that this part of nature is a bit closer to the world of ideas.
You can generate synthetic data matching the distribution your transformer learned. You can use this dataset to train another model. As of now, that’s about it.
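Generating that synthetic data is just repeatedly drawing from the learned next-token distribution. A toy sketch with made-up logits (stdlib only, no real model involved):

```python
import math
import random

def sample_token(logits: dict[str, float], temperature: float = 1.0) -> str:
    # softmax over (temperature-scaled) logits -> probabilities, then one categorical draw
    scaled = {t: l / temperature for t, l in logits.items()}
    m = max(scaled.values())
    exps = {t: math.exp(l - m) for t, l in scaled.items()}
    z = sum(exps.values())
    tokens = list(exps)
    weights = [exps[t] / z for t in tokens]
    return random.choices(tokens, weights=weights, k=1)[0]

# Sampling like this in a loop (feeding each token back into the model) yields
# text that follows the model's learned distribution -- i.e. synthetic data.
```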
You can get your hands on books3 or any other dataset that was exposed to the public at some point, but large companies have private human-filtered high-quality datasets that perform better. You’re unlikely to have the resources to do the same.
If your CPU isn’t ancient, it’s mostly about memory speed. VRAM is very fast, DDR5 RAM is reasonably fast, swap is slow even on a modern SSD.
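Back-of-the-envelope: generating each token streams (roughly) every weight through memory once, so tokens/s ≈ bandwidth / model size. The bandwidth figures below are illustrative assumptions, not measurements:

```python
def tokens_per_second(bandwidth_gb_s: float, model_size_gb: float) -> float:
    # Generation is memory-bandwidth-bound: each token reads ~all weights once.
    return bandwidth_gb_s / model_size_gb

MODEL_GB = 5.0  # e.g. a 7B model at ~Q5 quantisation (rough figure)

print(tokens_per_second(500, MODEL_GB))  # GPU VRAM, ~500 GB/s -> ~100 tok/s
print(tokens_per_second(60, MODEL_GB))   # dual-channel DDR5, ~60 GB/s -> ~12 tok/s
print(tokens_per_second(3, MODEL_GB))    # swapping from SSD, ~3 GB/s -> <1 tok/s
```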
8x7B is mixtral, yeah.
Mostly via terminal, yeah. It’s convenient when you’re used to it - I am.
Let’s see, my inference speed now is:
As for quality, I try to avoid quantisation below Q5 or at least Q4. I also don’t see any point in using Q8/f16/f32 - the difference from Q6 is minimal. Other than that, it really depends on the model - for instance, llama-3 8B is smarter than many older 30B+ models.
Have been using llama.cpp, whisper.cpp, Stable Diffusion for a long while (most often the first one). My “hub” is a collection of bash scripts and an ssh server running.
I typically use LLMs for translation, interactive technical troubleshooting, advice on obscure topics, sometimes coding, sometimes mathematics (though local models are mostly terrible for this), sometimes just talking. Also music generation with ChatMusician.
I use the hardware I already have - a 16GB AMD card (using ROCm) and some DDR5 RAM. ROCm might be tricky to set up for various libraries and inference engines, but then it just works. I don’t rent hardware - don’t want any data to leave my machine.
My use isn’t intensive enough to warrant measuring energy costs.
Looks like instance normalization issues. See https://arxiv.org/abs/1912.04958
If you’re going to finetune a foundation model, it’d make sense to choose Mistral - once they release a 13B.
Also consider adding function calling to the home assistant use case.
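A function-calling loop can be as simple as asking the model to emit JSON and dispatching on it. A sketch — the tool names and the expected model output format here are entirely made up:

```python
import json

# Hypothetical home-assistant actions the model is allowed to call.
def turn_on_light(room: str) -> str:
    return f"light in {room}: on"

def set_thermostat(celsius: float) -> str:
    return f"thermostat set to {celsius}C"

TOOLS = {"turn_on_light": turn_on_light, "set_thermostat": set_thermostat}

def dispatch(model_output: str) -> str:
    # Prompt the model to answer only with JSON like:
    #   {"tool": "turn_on_light", "args": {"room": "kitchen"}}
    call = json.loads(model_output)
    return TOOLS[call["tool"]](**call["args"])
```

In practice you’d constrain the model’s output (llama.cpp supports grammar-constrained sampling) so the JSON always parses.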
I have a MediaWiki instance on my laptop (I’ve found the features of all other wikis/mindmaps/knowledge databases decisively insufficient after having a taste of MW templates, Semantic MediaWiki and Scribunto).
Also some smaller things like pihole-standalone, Jellyfin and dictd.
What I’ve ultimately converged to without any rigorous testing is: