• 0 Posts
  • 12 Comments
Joined 2 years ago
Cake day: July 2nd, 2023



  • Some features/plugins can be quite taxing on the system, and in extreme cases they can slow the editor down to the point of being unusable. I’m a happy Neovim user with a LazyVim setup, but I experience this extreme slowdown with some JSON files and haven’t yet looked into what causes it.

    You can let your editor do the same compute-intensive or memory-hogging things that a GUI editor does. The fact that it runs in your terminal doesn’t make it lightweight by definition.


  • Millennial here. I’ve been consuming Reddit, and now Lemmy, almost exclusively on my phone, and for me it’s card view all the way. Often the graphic content is more important than the title, and opening posts only to find out they’re not funny or interesting feels like a waste of time. Only when I find a post interesting enough that I want to comment or see the comments do I open it. Instances or communities that I don’t like go on the blocklist.

    If I really need to use Reddit, I open old Reddit in the browser with an extension that turns it into a mobile-friendly site with card view. The new design has always felt sluggish and bloated to me, but not because of the card view.


  • Sure, but I’m just playing around with small quantized models on my laptop with integrated graphics, and the RAM was insanely cheap. It just interests me what LLMs that can run on such hardware are capable of. For example, Llama 3.2 3B only needs about 3.5 GB of RAM and runs at about 10 tokens per second; while it’s in no way comparable to the LLMs I use for my day-to-day tasks, it doesn’t seem to be that bad (a minimal sketch of how I run it is below). Llama 3.1 8B runs at about half that speed, which is a bit slow, but still bearable. Anything bigger than that is too slow to be useful, but still interesting to try for comparison.

    I’ve got an old desktop with a pretty decent 24 GB VRAM GPU in it, but it’s collecting dust. It’s noisy and power-hungry (an older-generation dual-socket Intel Xeon) and still incapable of running large LLMs without additional GPUs. Even if it were capable, I wouldn’t want it turned on all the time due to the noise and heat in my home office, so I haven’t even tried running anything on it yet.
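    For anyone who wants to try the same: this is roughly how I run it on CPU with the llama-cpp-python bindings. Treat it as a minimal sketch; the model filename and path are just examples, and any ~4-bit GGUF build of Llama 3.2 3B should end up around the 3.5 GB mark.

    ```python
    from llama_cpp import Llama

    # Load a quantized GGUF model; the path/filename here is illustrative.
    # A 4-bit 3B model needs roughly 3.5 GB of RAM including context.
    llm = Llama(
        model_path="models/llama-3.2-3b-instruct-q4_k_m.gguf",  # hypothetical path
        n_ctx=2048,  # modest context window to keep memory usage low
    )

    # Simple completion-style call; ~10 tokens/s on a recent laptop CPU.
    out = llm("Q: Why is the sky blue? A:", max_tokens=128, stop=["Q:"])
    print(out["choices"][0]["text"])
    ```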


  • The only time I can remember 16 GB not being sufficient for me is when I tried to run an LLM that required a tad more than 11 GB while I had just under 11 GB of memory available due to the other applications that were running (a quick way to check that up front is sketched below).

    I guess my usage is relatively lightweight. A browser with a maximum of about 100 open tabs, a terminal, a couple of other applications (some of them electron based) and sometimes a VM that I allocate maybe 4 GB to or something. And the occasional Age of Empires II DE, which even runs fine on my other laptop from 2016 with 16 GB of RAM in it. I still ordered 32 GB so I can play around with local LLMs a bit more.
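    If you want to know in advance whether a model will fit, a minimal sketch with psutil (the ~11 GB figure is just the example from above; actual footprints vary per model and quantization):

    ```python
    import psutil

    # Rough pre-flight check before loading a model: compare the model's
    # approximate memory footprint against what is actually free right now.
    model_needs_gib = 11.2  # assumed footprint for the example above
    available_gib = psutil.virtual_memory().available / 1024**3

    print(f"Available: {available_gib:.1f} GiB")
    if available_gib < model_needs_gib:
        print("Not enough free RAM; close some applications first.")
    ```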


  • In my opinion it’s more useful to look at grams of protein per kcal. You can only eat so many calories in a day, so that largely dictates your protein intake. If you eat 2000 kcal worth of peanuts, you’d ingest 80 grams of protein. With chickpeas that would be 110 grams, and with chicken breast 425 grams (the calculation is sketched after this comment). You don’t eat just protein-rich things, so the higher the value, the higher your chances of ingesting enough protein when combined with (other) vegetables, grains, rice, oil, etc.

    I know that some people will read this comment as if I’m promoting meat consumption, so let me add that I firmly believe the world would be a better place if we ate a lot less meat. I’m just using these examples for demonstration purposes, as they’re all on the right side of the graph. It’s always an option to supplement with a plant-based protein powder.
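    The math itself is trivial; here’s a small sketch of the chickpea example (nutrition values per 100 g are approximate and vary by source):

    ```python
    # Grams of protein you'd ingest from a fixed calorie budget of one food.
    def protein_for_budget(protein_g_per_100g, kcal_per_100g, budget_kcal=2000):
        grams_of_food = budget_kcal / kcal_per_100g * 100
        return grams_of_food * protein_g_per_100g / 100

    # Cooked chickpeas: roughly 8.9 g protein and 164 kcal per 100 g.
    print(round(protein_for_budget(8.9, 164)))  # -> 109, i.e. the ~110 g above
    ```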


  • Exactly. I once visited a seed bank and there was some text along the lines of “we store these seeds at -60 °C, which is 3 times as cold as your typical freezer” (for Americans: a freezer is typically about -20 °C, i.e. about -4 °F). Yeah, no, that’s not how it works. With Kelvin you can actually do math like that, because 0 K is the absence of heat.
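    To make it concrete, convert both temperatures to Kelvin before comparing:

    $$
    -60\,^\circ\mathrm{C} = 213.15\ \mathrm{K}, \qquad -20\,^\circ\mathrm{C} = 253.15\ \mathrm{K}, \qquad \frac{213.15}{253.15} \approx 0.84
    $$

    So in absolute terms the seed bank is only about 16 % colder than a household freezer, nowhere near 3 times.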


  • WSL 1 is a compatibility layer that lets Linux programs run on the Windows kernel by translating Linux system calls into Windows system calls, so in that sense I understand the name: it’s a Windows subsystem for Linux [compatibility]. It doesn’t use the Linux kernel at all (a toy sketch of the translation idea is below). With WSL 2 they’re using a real Linux kernel in a virtual machine, so there the name doesn’t make much sense anymore.
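    Not actual WSL internals, just a toy sketch of what a translation layer does conceptually: intercept a foreign system call by name and service it with the host’s native facilities. The table and function names here are made up for illustration.

    ```python
    import os

    # Hypothetical dispatcher: map "Linux" syscall names to host implementations.
    # WSL 1 does this at the kernel level (Linux syscalls -> NT calls); this
    # user-space toy just delegates to the host's own os module.
    def linux_open(path, flags):
        return os.open(path, flags)

    SYSCALL_TABLE = {"open": linux_open}

    def handle_syscall(name, *args):
        return SYSCALL_TABLE[name](*args)

    fd = handle_syscall("open", "example.txt", os.O_CREAT | os.O_WRONLY)
    os.close(fd)
    ```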