

The point is that as long as HDDs are cheaper, they will continue to be used. SSDs are not replacing them in environments where latency isn’t an issue.
NVMe and flash in general work very differently from HDDs, both internally and at the OS level. It’s a common misunderstanding that SSDs are ready to replace HDDs in all situations. For example, you actually can NOT scale SSD performance linearly the way HDDs do when combining them in a RAID. You also can’t scale them in size. At some point, the same number of HDDs will actually be MORE performant than the SSDs in terms of throughput.
I wrote another related comment somewhere here.
Servers are an entirely different thing, as they use file systems optimized for SSDs. They also implement layered hardware controllers for the flash chips rather than having a single controller per chip. In servers, SSDs might be the future for many use cases. The consumer market is not nearly there yet.
There are fundamental problems with how SSDs work. Large-capacity flash might soon become a thing in servers, but there won’t be any cost-effective large SSDs in the consumer market for at least 10 years.
The problem is how operating systems access the data: they assume a page-organized sequential disk and access the data that way. SSD controllers essentially need to translate that into how they work internally (which is completely different). This causes latency and extreme fragmentation on large SSDs over time.
Instead of buying a 20TB SSD, you’re much better off buying four 5TB HDDs. You’ll probably get better write and read speeds in the long run if you configure them in a RAID0. Plus, it’s a lot cheaper. Large SSDs in the consumer market are possible, they just don’t make any sense for performance and cost reasons.
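The rough arithmetic behind that claim, as a sketch. The per-drive speeds are my assumptions (typical sequential figures, not measurements), and real RAID0 scaling is rarely perfectly linear:

```python
# Back-of-the-envelope throughput comparison (assumed typical speeds)
HDD_MBPS = 250        # assumption: sequential throughput of one modern HDD
SSD_MBPS = 550        # assumption: sequential throughput of a SATA SSD
hdd_count = 4

# RAID0 stripes data across all drives, so ideal sequential throughput
# scales roughly with the number of drives.
raid0_mbps = hdd_count * HDD_MBPS
print(f"4x HDD RAID0: ~{raid0_mbps} MB/s vs single SATA SSD: ~{SSD_MBPS} MB/s")
```

With those assumed numbers the striped HDDs come out ahead on raw sequential throughput; random I/O is a different story, where the SSD still wins easily.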
Tiering is a fundamental concept in HPC. We tier everything, starting from registers, over L1–L2 cache, NUMA-shared L3, memory, and SSD cache. It only makes sense to add HDDs to the list as long as they’re cost-effective.
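To make the tier list above concrete, here are rough order-of-magnitude access latencies per tier. These are textbook ballpark figures, not measurements, and real numbers vary a lot by hardware:

```python
# Rough order-of-magnitude access latencies per storage tier
# (ballpark textbook figures, not benchmarks).
tier_latency_ns = {
    "register":        0.3,
    "L1 cache":        1,
    "L2 cache":        4,
    "L3 cache (NUMA)": 40,
    "DRAM":            100,
    "NVMe SSD":        100_000,     # ~100 us
    "HDD":             10_000_000,  # ~10 ms seek
}
for tier, ns in tier_latency_ns.items():
    print(f"{tier:16s} ~{ns:,} ns")
```

Each step down the hierarchy costs roughly one to two orders of magnitude in latency, which is exactly why tiering pays off: keep hot data high, cold data low.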
There is nothing wrong with tiering in HPC. In fact, it’s the best way to make your service cost-effective without compromising on end-user performance.
I’m also trying to figure out a setup using Docker. What’s the recommended way of connecting the container to a VPN? Ideally I want to bind the qbittorrent container to a VPN while the rest of the machine is not connected to the VPN.
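One common pattern for this is a VPN sidecar container that the torrent container shares its network namespace with. A minimal Compose sketch, assuming the gluetun VPN client container; the provider setting and port are placeholders you’d adapt to your VPN:

```yaml
# Sketch: route only qBittorrent through the VPN via a gluetun sidecar.
# Provider and credentials are placeholders; the rest of the host stays
# off the VPN.
services:
  gluetun:
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN
    environment:
      - VPN_SERVICE_PROVIDER=custom   # placeholder: set your provider/creds
    ports:
      - "8080:8080"                   # web UI must be published on gluetun
  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent
    network_mode: "service:gluetun"   # all qBittorrent traffic goes via gluetun
```

Because qbittorrent uses gluetun’s network namespace, any port you want to reach (like the web UI) has to be published on the gluetun service, not on qbittorrent.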
I use Signal for private and personal messages. I use Discord solely for gaming and voice chat. A good alternative doesn’t need to be overly private (although that would be a bonus, of course). It just needs to have a good UI and feature parity with Discord.
I always buy from GOG if it’s available. But I use it for my own convenience and I don’t share it with others as that would be piracy.
My point is that as soon as everyone around you finds it cheaper and more convenient to use your “service”, game developers will double down on their DRM efforts.
That’s why you can never have nice things in this world. Some people will always abuse it.
This goes against everything de-DRM. This is basically piracy if you share it with friends and family. DRM sucks. But this will make DRM even more important. Projects like this will kill every de-DRM movement.
Immich has amazing AI recognition and people clustering features. It’s even better than Google Photos.
KitchenOwl and Pastes are probably the easiest to set up. Paperless is the most useful for me. Nextcloud can be a bitch to set up once you want to include Office functionality. I recommend Nextcloud All-In-One to make it a bit easier.
In addition to the ones listed above, I can also recommend Home Assistant if you don’t know it yet. If you like home automation you’re in for a treat.
No worries. Enjoy!
Hosting Bookstack seems a bit much for someone who’s just getting started.
The easiest way to get started is using Docker. You can self-host most software using Docker straight from their GitHub with a single command or a copy-paste config.
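As a sketch of what such a copy-paste config looks like (the service name, image, and ports here are illustrative placeholders; every project’s README ships its own version of this):

```yaml
# docker-compose.yml - minimal shape of a self-hosted app
# (image and ports are placeholders; use the project's own README)
services:
  app:
    image: example/app:latest
    ports:
      - "8080:80"          # reachable on your LAN at http://<host-ip>:8080
    volumes:
      - ./data:/data       # persist app data next to this file
    restart: unless-stopped
```

Then `docker compose up -d` starts it in the background, and `docker compose logs -f` shows what it’s doing.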
Do NOT expose (port forward/NAT) your services to the internet if you don’t know what you’re doing. Use them locally via IP:port. If you want to use your services remotely, use a VPN tunnel like WireGuard (available on Android and iOS too). Modern routers already support it out of the box. Tailscale is also an option.
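For reference, a WireGuard client config for this kind of remote access is only a few lines. All keys, addresses, and the endpoint below are placeholders:

```ini
# /etc/wireguard/wg0.conf - client-side sketch; every value is a placeholder
[Interface]
PrivateKey = <client-private-key>
Address = 10.0.0.2/32

[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.example.com:51820
AllowedIPs = 10.0.0.0/24, 192.168.1.0/24   # VPN subnet plus your home LAN
```

The `AllowedIPs` line is what limits the tunnel to your home networks instead of routing all traffic through it.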
Later down the road, when you start exposing services, I can recommend NPM (Nginx Proxy Manager) as your reverse proxy for easy host and certificate management. Expose as little as possible! For added security when exposing applications to the internet, expose your port via a VPS or Cloudflare and tunnel to your home using Tailscale or WireGuard.
To not get overwhelmed you should start small and improve as you go. You don’t need to start with a datacenter in your garage right away. The most important thing is that you have fun along the way :)
Great projects to get started:
That’s what I said. It’s pretty involved. And their Discord is extremely toxic. The most toxic Discord I have ever seen from a FOSS project. But when you get it up and running, it’s great. Just pray nothing breaks.
I used all three tools. Pufferpanel was by far the easiest to set up. But it’s mostly limited to Minecraft servers.
If you’re familiar with Docker and want something with a UI for easy management of configs, plugins, and the server console, you might like Pterodactyl Panel, Pelican Panel, or Pufferpanel. The easiest one to set up is Pufferpanel. Pterodactyl is more involved, but it gives you the flexibility to host other game servers too if you want to.
Always put your IoT and smart home devices in a VLAN disconnected from the internet!
Nothing new here. I’d recommend checking out PrivacyGuides instead for a more comprehensive and informative list…
You can save some money by buying recertified drives from Serverpartdeals.
That would take 22 hours under ideal conditions on a 1 Gbit connection. If you copy files rather than block data, it’ll probably take 24 hours or more. Not fun.
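The math behind that figure, as a sketch. The drive size isn’t stated above; 10 TB is my assumption, since 10 TB at line rate over 1 Gbit/s works out to roughly 22 hours:

```python
def transfer_hours(terabytes: float, gbit_per_s: float) -> float:
    """Ideal transfer time: sustained line rate, no protocol overhead."""
    bits = terabytes * 1e12 * 8          # decimal TB -> bits
    seconds = bits / (gbit_per_s * 1e9)  # link speed in bits per second
    return seconds / 3600

# Assumed 10 TB drive over a 1 Gbit/s link:
print(f"{transfer_hours(10, 1):.1f} hours")  # → 22.2 hours
```

Real-world file copies add filesystem and protocol overhead on top, which is why the “24 hours or more” estimate for copying files instead of raw blocks is plausible.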