I’m writing a program that wraps dd to try to warn you if you are doing anything stupid. I have thus been giving the man page a good read. While doing this, I noticed that dd supports size suffixes all the way up to quettabytes, a unit orders of magnitude larger than all the data on the entire internet.
This has caused me to wonder: what’s the largest storage operation you guys have done? I’ve taken a couple of images of hard drives that were a terabyte each, but I was wondering if the sysadmins among you have had to do something with, e.g., a giant RAID 10 array.
It was something around 40 TB ×2. We were doing a terrain analysis of the entire Earth. Every morning for 25 days I would install two fresh drives in the cluster doing the data crunching and migrate the filled drives to our file server rack.
The drives were about 80% full, and our primary server was mirrored to two other 50-drive servers. At the end of the month the two servers were then shipped to customer locations.
In grad school I worked with MRI data (hence the username). I had to upload ~500 GB to our supercomputing cluster: somewhere around 100,000 MRI images. I wrote 20 or so different machine learning algorithms to process them. All said and done, I ended up with about 2.5 TB on the supercomputer. About 500 MB of that ended up being useful and made it into my thesis.
Don’t stay in school, kids.
You should have said no to math, it’s a helluva drug
golden 😂😂
Do cloud platform storage operations count? If so, in the hundreds of terabytes (work)
Why would dd have a limit on the amount of data it can copy? Afaik dd doesn’t check anything, nor does it do anything fancy; if it can copy one bit it can copy infinitely many.
Even if it did any sort of validation, if it can do anything larger than RAM it needs to be able to do it in chunks.
I’m not looking at the man page, but I expect you can limit it if you want, and the parser for that parameter knows about these unit names. If it were me, it’d be one parser for byte-size values, and it’d work for chunk size, limit, sync interval, and whatever else dd takes.
It’s also probably limited by the size of the counter it keeps. I think dd reports the number of bytes copied at the end even in unlimited mode.
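For what it’s worth, the same suffixes show up wherever dd takes a byte count. A minimal sketch (the device and file names are placeholders; double-check them before running anything):

```bash
# dd reads and writes in fixed-size chunks (bs=), so it never needs the whole
# source in RAM; the same suffix parser handles bs=, count=, seek=, etc.
# /dev/sdX and the output files are placeholders.
dd if=/dev/sdX of=backup.img bs=4M status=progress

# Copy only the first GiB by combining a chunk size with a chunk count:
dd if=/dev/sdX of=first-gib.img bs=1M count=1024 status=progress
```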
Well, they do nickname it disk destroyer, so if it were unlimited and someone messed it up, it could delete the entire simulation that we live in. So it’s for our own good, really.
No, it can’t copy infinite bits, because it has to store the current address somewhere. Even if it implemented unbounded integers for this, it would still be limited by your RAM, as that number can’t grow infinitely without infinite memory.
I’m currently backing up my /dev folder to my unlimited cloud storage. The backup of the file
/dev/random
has been running for two weeks.

Cool, so I learned something new today. Don’t run
cat /dev/random
Why not try /dev/urandom?
😹
Ya know, if not for the other person’s comment, I might have been gullible enough to try this…
That’s silly. You should compress it before uploading.
I’m guessing this is a joke, right?
/dev/random and other “files” in /dev are not really files; they are interfaces which can be used to interact with virtual or hardware devices. /dev/random spits out cryptographically secure random data. Another example is /dev/zero, which spits out only zero bytes.
Both are infinite.
Not all “files” in /dev are infinite, though; hard drives, for example, can (depending on which technology they use) be accessed under /dev/sda, /dev/sdb, and so on.
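If you want to poke at these safely, here’s a rough sketch (assuming a typical GNU/Linux box; /dev/sda is just an example):

```bash
# Read 16 random bytes and dump them as hex (finite, unlike cat /dev/random):
head -c 16 /dev/urandom | xxd

# Write 10 MiB of zero bytes into an ordinary file:
dd if=/dev/zero of=zeros.bin bs=1M count=10

# Block devices are finite; this prints the size of the first disk, if you have one:
lsblk -b -d -o NAME,SIZE /dev/sda
```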
No wonder. That file is super slow to transfer for some reason. But wait till you get to /dev/urandom. That file has TBs to transfer at whatever pipe you can throw at it…
I’ve imaged an entire 128GB SSD to my NAS…
A few years back I worked at a home. They organised the whole data structure but needed to move to another provider. My colleagues and I moved roughly 15.4 TB. I don’t know how long it took, because honestly we didn’t have much to do while the data was moving, so we just used the downtime for some nerd time. Nerd time in the sense that we started gaming and had a mini LAN party with our Raspberry Pis and Banana Pis.
Surprisingly, the data contained information on lots of long-dead people, which is quite scary because it wasn’t being deleted.
No idea about which specific type of business it is, but keeping that history long term can have some benefits, especially to outside people. Some government agencies require companies to keep records for a certain number of years. It could also help out in legal investigations many years in the future and show any auditors you keep good records. From a historical perspective, it can be matched to census, birth, and death certificates. A lot of generational history gets lost.
Companies also just hoard data. Never know what will be useful later. shrug
Manually transferred about 7 TB to my new RPi 4-powered NAS. It took a couple of days because I was lazy and transferred 15 GB at a time, which slowed down the speed for some reason. It could handle small sub-1 GB files in half a minute otherwise.
Could the slowdown be down to HDDs that cache writes on a faster section (single-layer, I think?) and then slowly rewrite that cache onto the denser (compound-layer?) storage?
Upgraded a NAS for the office. It was reaching capacity, so we replaced it. Transfer was maybe 30 TB. Just used rsync. That local transfer was relatively fast. What took longer was for the NAS to replicate itself with its mirror located in a DC on the other side of the country.
Yeah, it’s kind of wild how fast (and stable) rsync is, especially when you grew up with the extremely temperamental Windows copying thing, which I’ve seen fuck up a 50 MB transfer before.
The biggest one I’ve done in one shot with rsync was only about 1 TB, but I was braced for it to take half a day and cause all sorts of trouble. But no, it just sent it across perfectly the first time, way faster than I was expecting.
Never dealt with Windows. rsync just makes sense. I especially like that it’s idempotent, so I can just run it two or three times and it’ll be near instant on the subsequent runs.
Yeah, shout out for rsync also. It’s awesome. Combine it with ssh & it feels pretty secure too.
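For anyone who hasn’t tried it, a typical run over ssh looks roughly like this (the host and paths here are made up):

```bash
# -a keeps permissions and timestamps, --partial lets interrupted files resume,
# and the transfer rides over ssh. Host and paths below are placeholders.
rsync -a --partial --info=progress2 /srv/share/ user@nas.example.com:/srv/share/

# Because rsync only sends what changed, running the exact same command again
# on an already-synced tree finishes almost immediately.
```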
My Chia crypto farm at its peak had about 1.5 PB of plots, each plot was I think about 100ish gigs? I’d plot them on a dedicated machine and then move them to storage for farming. I think I’d move around 10TB per night.
It was done with a combination of PowerShell and bash scripts on Windows, Linux, and the built-in Windows Subsystem for Linux.
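A rough sketch of what the bash side of a nightly plot move can look like (the paths and host here are made up, and rsync is just one way to do it; the real setup mixed PowerShell, bash, and WSL as mentioned):

```bash
#!/usr/bin/env bash
# Hypothetical sketch: move finished ~100 GB Chia plot files from the plotting
# box to farming storage overnight. Paths and host are placeholders.
set -euo pipefail
shopt -s nullglob

SRC=/mnt/plotting
DEST=farmer@storage.local:/mnt/farm

for plot in "$SRC"/*.plot; do
    # --remove-source-files deletes each plot locally once it has copied cleanly,
    # freeing the plotting drives for the next batch.
    rsync -a --partial --remove-source-files "$plot" "$DEST"/
done
```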
When I was in high school we toured the local EPA office. They had the most data I’ve ever seen accessible in person. I’m going to guess how much.
It was a dome with a robot arm that spun around and grabbed tapes. It was 2000, so I’m guessing 100 GB per tape. But my memory of the shape of the tapes isn’t good.
Looks like tapes were four inches tall. Let’s round up to six inches for housing and easier math. The dome was taller than me. Let’s go with 14 shelves.
Let’s guess a six-foot shelf diameter, so like 20 feet of circumference. Tapes were maybe 0.8 inches a pop. With space between for robot fingers and stuff, let’s guess 240 tapes per shelf.
That comes out to about 300 terabytes. Oh. That isn’t that much these days. I mean, it’s a lot, but these days you could easily get that in spinning disks, with no robot arm seek time. With modern tapes, though, the same library would hold about 60 petabytes.
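Rough arithmetic behind those numbers, with everything above being a guess (18 TB is the native capacity of a modern LTO-9 cartridge):

```bash
echo $(( 14 * 240 ))     # 3360 tapes
echo $(( 3360 * 100 ))   # 336000 GB, i.e. ~300-ish TB at a guessed 100 GB/tape
echo $(( 3360 * 18 ))    # 60480 TB, i.e. ~60 PB at 18 TB per tape
```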
I’m not sure how you’d transfer it these days. A truck, presumably. But you’d probably want to transfer a copy rather than disassemble it. That sounds slow too.
This was your local EPA? Do you mean at the state level (often referred to as “DEP”)? Or is this the federal EPA?
Because that seems like quite the expense in 2000, and I can’t imagine my state’s DEP ever shelling out that kind of cash for it. Even nowadays.
Sounds cool though.
Tape robots are fun, but tape isn’t as popular today.
Yes, it’s a truck. It’s always been a truck, as the bandwidth is insane.
If modern LTO drives weren’t so darn expensive…
80 GB. It was 8 hours of (supposedly) 4K content in MP4 format. https://www.youtube.com/watch?v=VF5JWdaJlvc Here’s the link (hoping for the piped bot to appear).
When I was moving from a Windows NAS (God, fuck Windows and its permissions management) on an old laptop to a Linux NAS, I had to copy about 10 TB from some drives to some other drives so I could re-format the drives with a Linux-friendly filesystem, then copy the data back to the original drives.
I was also doing all of this via the terminal, so I had to learn how to copy in the background, then write a script to check and display the progress every few seconds. I’m shocked I didn’t lose any data, to be completely honest. Doing shit like that makes me marvel at modern GUIs.
Took about 3 days of copying files alone. When combined with all the other NAS setup stuff, it ended up taking me about a week just waiting for stuff to happen.
I cannot stress enough how fucking difficult it was to set up the Windows NAS vs the Ubuntu Server NAS. I had constant issues with permissions on the Windows NAS. I’ve had about one issue in four months on the Linux NAS, and it was much more easily solved.
The reason the laptop wasn’t a Linux NAS is due to my existing Plex server instance. It’s always been on Windows and I haven’t yet had a chance to try to migrate it to Linux. Some day I’ll get around to it, but if it ain’t broke… Now the laptop is just a dedicated Plex server and serves files from the NAS instead of local. It has much better hardware than my NAS, otherwise the NAS would be the Plex server.
so I had to learn how to copy in the background, then write a script to check and display the progress every few seconds
I hope you learned about terminal multiplexers in the meantime… They make your life much easier in cases like this.
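For example (the paths here are placeholders), either of these beats hand-rolling a background-copy-plus-progress script:

```bash
# Option 1: start a tmux session, then run the copy inside it so it survives
# the terminal closing (detach with Ctrl-b d, re-attach with `tmux attach -t copy`).
tmux new -s copy
rsync -a --info=progress2 /mnt/old/ /mnt/new/    # run this inside the tmux session

# Option 2: background the copy and just poll the destination size.
nohup cp -a /mnt/old/. /mnt/new/ &
watch -n 5 'du -sh /mnt/new'                     # refresh every 5 seconds
```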
30 years with Linux and I know I still haven’t. Maybe this year? :-D
Multiple TB, when setting up a new server to mirror an existing one. (Did an initial copy with both together in the same room before moving the clone to a physically separate location; doing that initial copy over the network would otherwise have saturated the connection for a week or more.)
My cousin once stuffed an ISO through my mail server. His connection up in Bella Bella restricted non-batched comms back then, so he jammed it through the server as email to get on the batched quota.
It took the data and passed it along without error, albeit with some constipation!