bluesman
Posted April 17, 2021

Nice!!! Beautifully written and well thought out; this is essential knowledge for any audiophile with a digital library.

I went to cloud backup after I accidentally deleted about 500 photos that I shot between weekly incremental backups of my NAS drives (swapping with the prior week's drives in my safe deposit box). I use a sync storage service. With my meager 5 Mbps upload speed at the time, it took about a week to back up my photo and music files. But, like most of life, patience is a virtue with vast rewards: once the files are on the cloud server, incremental updates are fast. And the service I chose about 10 years ago (Livedrive) has been outstanding, from customer service to easy file availability, and at a very fair price. Versioning is "limited" to the 30 most recent versions, which is fine with me because I've never needed it.

I solved the rapid-recovery issue with a local backup on an SSD, along with RAID mirroring on my NAS for continuity. This scheme is structured along a hierarchy of probabilities: one HDD in a NAS is more likely to suffer premature death than both, and because it is used only for incremental backups, the local SSD is less likely to fail than the NAS drives from which I play most of my music and store and edit my photos (of which I now have about 100,000, many in RAW format).

I hope none of you ever knows the dread that squeezes your heart when you realize you've permanently lost files you really wanted and can never get again. Once was enough for me! So thanks for a critical wake-up call, Dan. I hope everyone takes your advice to heart and uses the information you provided.

PS: Be proactive to minimize loss. Check your drives with whatever monitoring utility you like, and replace them when the first indicator of impending failure pops up. Monitor drive temps along with processor temps, and check overall computer performance regularly. Run benchmark tests; if R/W speeds drop, get on it ASAP. If a baseline or operating temp goes up, check it out. Vacuum and clean all fans, ducts, and internal surfaces. If that doesn't bring the temps back down, find out what's wrong and fix or replace the offender(s) before a failure.
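If you'd like to automate those checks, here's a minimal sketch in Python, assuming smartmontools' smartctl is installed and on the PATH. The device paths and temperature threshold are examples only, not recommendations, so adjust them for your own machine:

```python
#!/usr/bin/env python3
"""Routine drive check: SMART health verdict plus temperature.

A minimal sketch, assuming smartmontools' smartctl is available.
Device paths and the warning threshold are illustrative examples.
"""
import re
import subprocess

DRIVES = ["/dev/sda", "/dev/sdb"]   # example device paths
TEMP_WARN_C = 50                    # example threshold, not a vendor spec

def smartctl(args):
    # smartctl can exit non-zero for informational reasons, so capture
    # its output rather than treating a non-zero exit as fatal.
    return subprocess.run(["smartctl", *args],
                          capture_output=True, text=True).stdout

for drive in DRIVES:
    # Overall health verdict: smartctl -H prints a PASSED/FAILED line.
    health = smartctl(["-H", drive])
    match = re.search(r"self-assessment test result:\s*(\S+)", health)
    verdict = match.group(1) if match else "UNKNOWN"

    # Temperature: attribute 194's raw value is the 10th column of -A output.
    temp = None
    for line in smartctl(["-A", drive]).splitlines():
        if "Temperature_Celsius" in line:
            temp = int(line.split()[9])

    warn = verdict != "PASSED" or (temp is not None and temp > TEMP_WARN_C)
    print(f"{drive}: health={verdict}, temp={temp}C"
          + ("  <-- check this drive!" if warn else ""))
```

Schedule something like this weekly with cron or Task Scheduler and skim the output; any non-PASSED verdict or steadily climbing temperature is your cue to dig deeper.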
bluesman
Posted April 19, 2021

5 hours ago, Dan Gravell said:
"If bit-rot is manifest as a change in the byte stream for any given file, then yes, it could be transferred. This is precisely why snapshotting, as supported by the backup services (as opposed to most of the more generic and music-focused services), is important. If you can find the last time the file(s) were good, you can roll back to that time. There may be many files affected."

As I understand it, bit rot is largely a problem in large-scale, multilayer storage media. The risk of a discrepancy between what was written and what is read back (which is the functional definition of bit rot) is essentially zero for the HDDs and SSDs we use in consumer electronics. The risk is greater (but still low) when the storage container is in the high-TB to PB range, because so many layers have to be navigated to get to the target sectors. If a read is corrupt, it will be corrupt when downloaded from cloud storage (where the capacity issue is very real).

The way to guard against this is twofold: keep a current local copy of your files, so you can replace corrupt archives, and keep a record of checksums so you can validate and verify the integrity of your files. Storage systems with a file-management layer can now check files automatically by comparing the checksum on upload to the checksum on retrieval. I don't know of any systems yet that will version files and automatically retrieve the last known good version, but there probably are some, and more will be coming as cloud storage capacity increases further.

How detailed you get is up to you. Every file can be summed, or you can do it at the album level and so on, all the way up to a full disk or even an entire data transfer. But the larger the bin, the harder it will be to find the affected bit(s). So the practical alternative is to wait for files that throw errors and simply replace them ad hoc from your local backup. That's what I do, FWIW.
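For anyone who wants to keep that kind of checksum record without waiting for a storage system to do it, here's a minimal sketch in Python. The manifest file name and command-line usage are illustrative, not any particular service's format:

```python
#!/usr/bin/env python3
"""Build and verify a SHA-256 manifest for a file library.

A minimal sketch of the per-file checksum record described above.
Run 'build' once while the library is known good, then 'verify'
periodically to catch silent changes. Manifest name is illustrative.
"""
import hashlib
import json
import sys
from pathlib import Path

MANIFEST = "checksums.json"   # example manifest file name

def sha256(path, chunk=1 << 20):
    # Hash in 1 MB chunks so large audio/RAW files don't load into RAM.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def build(root):
    sums = {str(p.relative_to(root)): sha256(p)
            for p in sorted(root.rglob("*")) if p.is_file()}
    Path(MANIFEST).write_text(json.dumps(sums, indent=2))
    print(f"Recorded {len(sums)} files.")

def verify(root):
    # Files added after the manifest was built are not checked here.
    sums = json.loads(Path(MANIFEST).read_text())
    bad = [name for name, digest in sums.items()
           if not (root / name).is_file() or sha256(root / name) != digest]
    for name in bad:
        print(f"MISMATCH or MISSING: {name}")
    print(f"{len(bad)} problem file(s) out of {len(sums)}.")

if __name__ == "__main__":
    mode, root = sys.argv[1], Path(sys.argv[2])
    build(root) if mode == "build" else verify(root)
```

Usage would be `python3 checksums.py build /path/to/music` once while everything is known good, then `python3 checksums.py verify /path/to/music` on a schedule, or against a restored copy before you trust it.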
bluesman
Posted April 19, 2021

21 minutes ago, AudioDoctor said:
"I have versioned backups on my NAS, and those are backed up to an identical off-site NAS. Would bit-rot become evident when, in the course of snapshotting, files that I have not recently changed start transferring to the NAS as changed files?"

If your NAS units are all running simple storage, any corruption on the main NAS will be backed up off-site exactly as read. If you're using automatic backup software that updates the off-site unit every time the main one changes, corruption will immediately trigger an update of the off-site unit to reflect the change (whether good or bad). I suspect (but don't know for certain) that if your disks have proper error correction, bit corruption could trigger a remote update, but autocorrection would then trigger another one and restore the file to its correct form. It probably depends on how fast your system is and what triggers the read that detects the error and fixes it. I've never seen data on these processes, so I can't speculate on how reliable they are. A simple RAID mirror will also see corrupt bits as a change and make the same change on the mirror disk. As I recall, only RAID 6 will protect against "rebuilding" a good drive with corrupt data from the affected one.

But there are a few basics to keep in mind:

First, data decay is not very likely in the relatively small HDDs used for home NAS until they start closing in on their end of life. Use SMART data to check on your drives, and replace them at the first sign of error correction. I replace HDDs every 5 years even if they're working, because the risk of data decay goes up as they age.

Second, modern drives and the software that controls them use error-correcting code to identify flipped bits and remediate data decay. Your SMART drive-monitoring software will show you the rate of corrected errors; if that rate is too high (per the manufacturer's specifications) or it rises over three consecutive checks, replace the disk. You can check your disks with utilities built into your OS or use third-party software like CrystalDiskInfo. Scan your disks regularly and hope for a clean "Good" health status. A sketch of how to track this over time follows below.

Third, SSDs have a different cause of data decay: the insulating layer that keeps charged electrons where they belong degrades over time, and the bits flip. So you need to follow the same replacement schedule for SSDs that you do for HDDs. They both have finite data-integrity periods, but for different reasons.

Fourth, heat accelerates this and most other memory, storage, and performance problems. Run performance benchmarks every few months to be sure your computer isn't slowing down from undetected problems. Make sure your computers, NAS units, etc. are all in well-ventilated spaces with good airflow around them. Dust is a killer because it reduces heat transfer to air, so vacuum it all off of and out of everything. Keep all fans clean and make sure they're functioning properly. Monitor your drive temps just as you do (or should do) with your CPU and GPU temps. Use monitoring software that will alert you to potential problems.
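To put the "rising over three consecutive checks" rule into practice, here's a minimal sketch in Python, again assuming smartmontools' smartctl is installed. It uses Reallocated_Sector_Ct as the example counter; the device path and history file name are placeholders:

```python
#!/usr/bin/env python3
"""Track an error-related SMART counter across runs.

A minimal sketch of the 'rising over three consecutive checks' rule.
Assumes smartmontools' smartctl; device path, history file, and the
choice of attribute are placeholders for illustration.
"""
import json
import subprocess
from pathlib import Path

DEVICE = "/dev/sda"                    # example device path
HISTORY = Path("smart_history.json")   # example log location
ATTRIBUTE = "Reallocated_Sector_Ct"    # SMART attribute 5; a rising raw
                                       # value means sectors are being remapped

def current_raw_value():
    out = subprocess.run(["smartctl", "-A", DEVICE],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        if ATTRIBUTE in line:
            return int(line.split()[9])   # RAW_VALUE is the 10th column
    raise SystemExit(f"{ATTRIBUTE} not reported by {DEVICE}")

# Append today's reading to the running history.
history = json.loads(HISTORY.read_text()) if HISTORY.exists() else []
history.append(current_raw_value())
HISTORY.write_text(json.dumps(history))

# Warn when the last three readings each increased (four points, three rises).
if len(history) >= 4:
    recent = history[-4:]
    if all(a < b for a, b in zip(recent, recent[1:])):
        print(f"WARNING: {ATTRIBUTE} rose on 3 consecutive checks - plan a replacement.")
print(f"{ATTRIBUTE} = {history[-1]} (recent readings: {history[-4:]})")
```

Run it on the same schedule as your other checks; because the history file persists between runs, three rising readings in a row will trigger the warning automatically.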