I am sorry if this is the wrong community to ask in; while I have been on Lemmy for more than a year now, I am still learning my way around, and this seems like a relatively active community in a relevant area.
Right, on to my questions!
I am planning to build a NAS over the summer. At the moment, all of my personal photos are stored on a single mechanical 2TB Seagate drive that is about 4 years old.
I have other media on another drive that is older but larger; all in all, I expect that I have about 8TB of data that I care about.
I work as a 365 admin and was the main Linux admin at my last place of work; I am also a hobby photographer in my spare time.
Currently, I am looking at using either the N4, the N3, or the N5 from Jonsbo; the N4 is a beautiful case!
I am thinking of running four 6TB drives in software RAID like this:
Linux > mdadm (RAID 5) > LVM > ext4
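Roughly, the initial setup from the terminal would look like this (the device names and the VG/LV names are just placeholders for my four drives):

```
# Build the RAID 5 array from the four drives
sudo mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[bcde]

# Layer LVM on top of the array
sudo pvcreate /dev/md0
sudo vgcreate nas_vg /dev/md0
sudo lvcreate -l 100%FREE -n data_lv nas_vg

# Format and mount
sudo mkfs.ext4 /dev/nas_vg/data_lv
sudo mount /dev/nas_vg/data_lv /mnt/data
```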
My thinking is that I will probably need to migrate to new drives every X years or so. With the LVM, I can just add a new external (larger) drive to the VG, move the LV from the old drives to the external drive, remove the old RAID drives from the VG, put in new drives, set up mdadm, add the new array to the VG, and move the LV back to the RAID.
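In shell terms, the migration I have in mind is something like this (same placeholder names as above):

```
# Add the new external (larger) drive to the volume group
sudo pvcreate /dev/sdf
sudo vgextend nas_vg /dev/sdf

# Move every extent off the old array onto the external drive
sudo pvmove /dev/md0 /dev/sdf

# Drop the old array from the VG; after rebuilding mdadm on the
# new drives, the same steps in reverse move the LV back
sudo vgreduce nas_vg /dev/md0
```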
Am I overthinking this? This NAS will be my main media machine and will probably see a decent amount of use over the years.
I have thought about setting up OpenMediaVault or TrueNAS as the OS, but having never run them, I wonder if they will be as flexible as I want them to be.
I am currently considering just running Debian and setting this up from the terminal, but I am not a big fan of doing SMB settings in the terminal. I did consider using Cockpit as a web admin tool to monitor the system once it is set up; can I do the SMB config from that?
I am apprehensive about a manual SMB config, as the last time I did one, it was a weird mess for the team who had to use it…
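To be fair, the share definition itself is only a few lines; a minimal sketch (share name, path, and group are made up):

```
# Append a minimal share to /etc/samba/smb.conf
sudo tee -a /etc/samba/smb.conf >/dev/null <<'EOF'
[media]
    path = /mnt/data/media
    browseable = yes
    read only = no
    valid users = @nasusers
EOF

# Sanity-check the config and reload Samba
testparm -s
sudo systemctl reload smbd
```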
I am more familiar with AMD hardware than Intel, and I am looking at the old AM4 platform, but what I don’t know is how much power a home-built NAS will use in standby or when active.
You might want to check out the self-hosted communities on Lemmy for more info.
If you want to use Cockpit, the 45Drives Cockpit modules make dealing with SMB easier. I think TrueNAS is a better option, though. If you want more flexibility, then Proxmox VE is a popular choice.
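If you do go the Cockpit route, getting the module in is roughly this (assuming 45Drives’ package repository has already been added per their docs; package names can differ per distro):

```
# Cockpit plus 45Drives' SMB/NFS management module
sudo apt install cockpit cockpit-file-sharing
sudo systemctl enable --now cockpit.socket
```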
TrueNAS Scale is a good option. ZFS is a very resilient filesystem. I lost a lot of data to a software RAID in the past that didn’t checksum the data, and now I have an affinity for ZFS. I believe they have added the ability to grow with larger drives as well: just disconnect drive A and insert a new, larger drive B, let it resilver, and once you’ve got them all replaced it grows the volume. Set it up, see how you like it, and move your data over if you do.
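From the command line, that replace-and-grow flow is roughly this (pool and device names are placeholders):

```
# Let the pool grow once every drive has been replaced
sudo zpool set autoexpand=on tank

# Swap one drive at a time, waiting for each resilver to finish
sudo zpool replace tank /dev/sdb /dev/sdf
sudo zpool status tank   # watch resilver progress
```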
You may be different, but given that your current situation is a couple of drives sitting on a desk for 4+ years, I wouldn’t worry about expansion so much. I built a NAS a while ago and figured I’d upgrade it, and I haven’t. Until it’s full, it’ll keep going.
Also check price/GB before settling on 6TB. That’s small.
I second TrueNAS Scale. Simple setup, good support, runs Docker apps, and you can dig into the command line if you so please or just use the GUI.
I know about TrueNAS, but have never run it on dedicated hardware; at most I have run it in VirtualBox to test it out.
Though to be fair, I have never worked with mdadm either. I did work with ext4 and XFS when I was a Linux admin; the ext4 filesystems I ran were set up in an LVM, but it was just VMs, and I never had to consider the hypervisor’s RAID since another team dealt with that.
It’s pretty good to work with, and it’s got pretty mainstream support now that the OS isn’t FreeBSD anymore, and it supports Docker. As far as setting up the array goes, you plug in the disks and tell it to make a pool. Pretty easy. Then you can subdivide as needed.
TrueNAS has some built-in support for backing up to various clouds via rsync, or you can sync at the pool level to a remote server.
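The pool-level sync is ZFS replication under the hood; done by hand it looks something like this (dataset, snapshot, and host names are made up, and the remote user needs receive rights, e.g. via `zfs allow`):

```
# Snapshot the dataset, then stream it to the remote box
sudo zfs snapshot tank/media@nightly
sudo zfs send tank/media@nightly | ssh backup-host zfs recv backup/media
```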
I used TrueNAS with all of my random old drives in a media center case, and it works well. I am uneducated in the technicalities of sysadmin work, but the basic setup was easy to figure out for a doofus like me. It looks like there is way more in there to mess with if you have the know-how.
Do you really want to run this yourself? If the data is that important to you, I’d probably rather invest in something like a Synology NAS. They make sure that updates won’t kill your data, everything stays secure, and you don’t have to mess with mdadm or LVM yourself.

Under the hood, Synology’s SHR also uses bog-standard MD and LVM, so even if the NAS dies, you can still read your data on any Linux machine. But you won’t have to think about updates potentially breaking anything, and it has a plethora of features around storage management that you can configure with a few clicks instead of messing around with system packages, config files, and systemd.
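For example, reading an SHR volume on a plain Linux box is typically just this (the VG/LV names vary by model; `lvs` will show what’s there):

```
# Assemble the MD arrays from the Synology disks
sudo mdadm --assemble --scan

# Activate LVM and mount the data volume read-only
sudo vgchange -ay
sudo mkdir -p /mnt/recovery
sudo mount -o ro /dev/vg1/volume_1 /mnt/recovery
```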
I made a mistake in a previous comment where I mentioned that Synology was a US company; it is not. I’ll update my comment later.
My main issue with Synology is that at this point I want to run something I know is fully offline; I don’t want it reaching out to company servers, I just want a semi-dumb box I can throw my data in.
I am also concerned about Synology’s RAID implementation. Sure, it’s cool that you can add new drives as your needs grow, but I worry about a RAID array being unevenly worn.
I mean, say you have a Synology RAID setup with three drives and you work off of it constantly. After three years you add another drive and expand the storage, but the old drives are still there with three years of wear on them, counting down to when they die and break the RAID entirely…
Damn, looking back at what I wrote, it is clearly just my anxiety playing up…
I guess I’m confused by the worry about RAID…
The point of RAID is speed, but also redundancy. And the odds of two drives out of three failing at the exact same time are super small…
Synology’s RAID array can be set up so that if one drive fails, you can replace it with a new drive and carry on. The array is shared in a way that one bad drive doesn’t break the entire system. Sure, your 30TB is functionally less, but your data is safe outside of flooding or a house fire. That’s how I understood it.
Maybe I’m wrong, idk.
I’m a big fan of my Synology NAS. It solved the problem I needed it to solve quickly and securely. And now that I have a solid backup system in place, I’ve been building out my own locally hosted services in my own time, stress free. It’s a good safety net that way.
I bought a Synology years ago and it served me well. I bought a newer model that is smaller, with two drives.
I like that I don’t need to think about it. It has a simple offsite cloud backup that’s pretty cheap, or you can set up your own. It supports Docker, and the software packages it supports are good enough for me.
Are the newer models faster? I always found the UI slow, but my NAS is probably a decade old at this point.
The UI isn’t a priority item most of the time. The services the NAS provides are.
I mostly use Photos Mobile and Drive. These are asynchronous applications, so responsiveness isn’t really a consideration.
SMB/NFS were always as fast as the drives for me, even on their much higher-end enterprise equipment, although I had 8GB or more of memory in those.
I have been shilly-shallying between buying a Synology and building my own machine, and right now I am probably going for what I have experience with.
Cloud backup sounds cool, but my current thinking is that my ideal system would have two NAS machines: the primary is the one I access day to day, and I run a borgbackup or rsync job between them every night to pick up all changes. Over time I could get a few external drives and run my backups over sneakernet to my parents’ place; low update frequency, sure, but it would be a simple way to do it.
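The nightly job itself would be simple enough; a sketch with borg (the repo path, host name, and retention numbers are hypothetical):

```
# Push a dated archive from the primary NAS to the backup NAS
borg create --stats --compression zstd \
    'ssh://backup-nas/srv/borg/nas::{now:%Y-%m-%d}' \
    /mnt/data

# Keep a sane retention window
borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6 \
    ssh://backup-nas/srv/borg/nas
```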
I do currently have an old Intel NAS as a cold backup; I have had it for 10+ years, and it was used and had used disks when I got it.
For me the math just didn’t work out. Maintaining a second NAS offsite wasn’t worth it; their cloud service was cheaper.
It’s Linux under the hood.
Since the CPUs are weak (compared to desktop/laptop hardware), they barely use any energy.
Generally desktop hardware is surprisingly power efficient, especially with lower-midrange components. Right now my home server is running on an e-waste HP EliteDesk.
For software, I’d really go for a config that uses ZFS over ext4 for the data storage. ZFS is so battle-tested that anything you might find you want or need to fix or change, someone else has already documented the same situation multiple times over. Personally I went with a config like Apalrd’s: Proxmox as a stable host OS with good management and to create the ZFS pool, then a container running Cockpit for creating and managing the shares.
Currently that server has an 800GB Intel Datacenter SSD for boot and VM storage and 2x 4TB HDDs in a ZFS mirror for NAS storage, and with an i5-4590 it’s running 6 Minecraft servers via Crafty Controller, Jellyfin, and the Samba shares, and I’ve spun up other random servers and VMs as desired/needed without trouble. Basically all of the services which run 24/7 are in LXCs, because running Debian VMs on my Debian host seems too redundant for my tastes.
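Creating that mirror was basically a one-liner; roughly this (the by-id paths are placeholders, but by-id is safer than /dev/sdX names):

```
# Two-disk ZFS mirror for the NAS storage
sudo zpool create tank mirror \
    /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2

# A dataset to hang the Samba shares off
sudo zfs create tank/shares
```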
ZFS is damned cool, but it is something I have limited experience with; at the moment I just want something I am familiar with to get something set up.
I will probably get a lab machine within three years or so to learn more about how to deal with ZFS and TrueNAS over time, before I feel comfortable running it myself.
You could use an OS like Unraid that handles ZFS for you. You don’t really need to know how ZFS works if you use Unraid since it’s all set up through the web UI. You can always search for how to do things if needed :)
ZFS has bitrot protection, which is very useful for important files. Whenever you write data, it computes a checksum for each block and stores it with the pool metadata. When you read a file, it can detect if the data is corrupted on the drive it’s reading from (the checksum won’t match) and it’ll silently/automatically repair it using data from a different drive.
AFAIK none of the other file systems support this. You need to use ZFS RAID rather than mdadm RAID for it to work.
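The detection side is driven by scrubs, which you’d run on a schedule; e.g. (pool name is a placeholder):

```
# Walk every block, verify checksums, repair from redundancy
sudo zpool scrub tank

# See any checksum errors it found and fixed
sudo zpool status -v tank
```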
Sounds like I need to get another computer in addition to my NAS to do some testing with.
To be honest I am not 100% locked in on mdadm + LVM + ext4; I am used to LVM and have a decent understanding of it, and basically no understanding of ZFS.
From what I can see, bit rot is not a huge problem for home users. I’ll add that to the plus side of using ZFS and decide what I will do when I get the hardware.
I’m the same as you: I had experience with mdadm, LVM, LUKS, and ext4, but no experience with ZFS. I still don’t know a lot about ZFS, but Unraid set it up for me, and I can always Google/DuckDuckGo any issues I encounter.
> From what I can see, bit rot is not a huge problem for home users
The thing is that it’s likely that lots of people are affected by bitrot and just don’t know it, since there’s no way to detect it without using checksums. People don’t know that their files have succumbed to bitrot until they try to use them and realise they’re corrupted.
Instead of 4 x 6TB drives, consider 2 x 14TB or even 2 x 20TB in a ZFS mirror. Buy the biggest drives you can afford that have reasonable pricing. When I was buying drives two years ago, 16 - 20TB was the sweet spot for price per TB.
Make sure you use NAS drives. Western Digital has had several controversies so I usually go for Seagate Exos instead.
This is my first NAS, and at the moment I am looking at the 6TB Seagate IronWolf Pro NT drives.
I have been using Seagate for as long as I have built my own computers. I do remember the bad old days when I ran a computer for months with a Maxtor drive that kept head-crashing unless you slapped the machine as you pressed the power button; since then I have been on team Seagate.