
Peanuts to Storage Space cost?


Recommended Posts

Agree, but that's got nothing to do with hot redundancy (which, more specifically, is what I am raising as not-really-required).

 

I'm just raising the possibility that maybe high availability is not required for a home media setup and if there are concerns about storage cost that's one thing to look at.

 

Actually it's more about data integrity than high availability. By having two copies of each block, the system can compare checksums and self-heal. Moreover, a drive firing CRC errors is often the first warning sign of impending failure.

 

If you aren't doing RAID/mirroring then you'd have no way of knowing when your drive is silently getting corrupted. Backing up a corrupted file is not a good thing.

 

Of course if your data isn't important then fine, save a few bucks. My family photos are important.
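As a rough sketch of the detection half of what's being described (toy Python, with CRC32 standing in for the filesystem's real checksums; not any filesystem's actual on-disk format): every block is stored alongside a checksum, and a mismatch on read means silent corruption is no longer silent.

```python
import zlib

def store_block(data: bytes) -> tuple[bytes, int]:
    """Store a block alongside its CRC32 checksum (hypothetical layout)."""
    return data, zlib.crc32(data)

def verify_block(data: bytes, checksum: int) -> bool:
    """On read, recompute the CRC and compare; a mismatch means silent corruption."""
    return zlib.crc32(data) == checksum

block, crc = store_block(b"family photo bytes")
assert verify_block(block, crc)        # clean read passes

rotted = b"famiLy photo bytes"         # simulate a flipped bit on disk
assert not verify_block(rotted, crc)   # corruption is detected, not silent
```

Without the stored checksum there is nothing to compare against, which is the "no way of knowing" problem above.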

Custom room treatments for headphone users.

Link to comment
Actually it's more about data integrity than high availability. By having two copies of each block, the system can compare checksums and self-heal. Moreover, a drive firing CRC errors is often the first warning sign of impending failure.

 

If you aren't doing RAID/mirroring then you'd have no way of knowing when your drive is silently getting corrupted. Backing up a corrupted file is not a good thing.

 

Of course if your data isn't important then fine, save a few bucks. My family photos are important.

 

You are talking about ZFS here though right?

 

Hardware RAID will not solve all silent corruption, particularly in the case of software-induced errors (such as bad software and viruses) - but if you are using ZFS you already know this [emoji3]

Link to comment
You are talking about ZFS here though right?

 

Hardware RAID will not solve all silent corruption, particularly in the case of software-induced errors (such as bad software and viruses) - but if you are using ZFS you already know this [emoji3]

 

Yes. I've been using ZFS for 10 years and have seen many platters come and go. I will go as far as to say that there is no reason for hardware RAID to exist nowadays. The first thing I do with a hardware RAID controller is disable it by flashing it to "IT" (pass-through) mode.

Custom room treatments for headphone users.

Link to comment
Your experience with drive failure does not match mine. Typically, drives fail within two years of use. To date, the HGST 24x7 drives have been very good: three years in and they are still running OK.

 

Fair enough, with that frequency of failure I can see why you might want to avoid repeated downtime.

 

bliss - fully automated music organizer. Read the music library management blog.

Link to comment
Actually it's more about data integrity than high availability. By having two copies of each block, the system can compare checksums and self-heal. Moreover, a drive firing CRC errors is often the first warning sign of impending failure.
What you are talking about is data integrity, but not what I'm talking about!

 

I'm talking about avoiding dumb mirroring and thereby saving storage cost.

 

You are talking about integrity, which is fine, but nothing to do with RAID per se. It's just something you can achieve in an HA environment by using RAID.

 

If you aren't doing RAID/mirroring then you'd have no way of knowing when your drive is silently getting corrupted. Backing up a corrupted file is not a good thing.

 

Of course if your data isn't important then fine, save a few bucks. My family photos are important.

Again, you're talking about hot standby. Integrity is possible without mirroring, it's just that you have to go to a colder copy (backups which, because you have CRC, are "correct").

 

My family photos are also important but I don't need 24/7 access to them with five nines availability (two nines would do; that's about three and a half days of downtime a year).
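For reference, the "nines" arithmetic behind those figures works out as follows (a quick back-of-the-envelope calculation, nothing more):

```python
MINUTES_PER_YEAR = 365 * 24 * 60

def downtime_minutes(nines: int) -> float:
    """Allowed downtime per year at a given number of nines of availability."""
    availability = 1 - 10 ** -nines   # e.g. 2 nines -> 0.99, 5 nines -> 0.99999
    return MINUTES_PER_YEAR * (1 - availability)

print(downtime_minutes(2) / (24 * 60))  # two nines: ~3.65 days a year
print(downtime_minutes(5))              # five nines: ~5.26 minutes a year
```

Five nines is a datacenter-grade target; for a home media library, minutes versus days of downtime per year rarely justifies the hardware cost.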

 

bliss - fully automated music organizer. Read the music library management blog.

Link to comment

Fair enough, I stated this too strongly. There certainly are ways of ensuring data integrity without RAID (really not generic RAID, but ZFS/BTRFS).

 

It's just that doing this requires some thought and extra work, and ZFS does it all almost automatically. So for me it's well worth the extra storage cost.

Custom room treatments for headphone users.

Link to comment
Fair enough, I stated this too strongly. There certainly are ways of ensuring data integrity without RAID (really not generic RAID, but ZFS/BTRFS).

 

It's just that doing this requires some thought and extra work, and ZFS does it all almost automatically. So for me it's well worth the extra storage cost.

As I recall, if you use ZFS (or BTRFS) with a single drive you can get notification of bit-rot; but you need RAID1 (mirroring) to be able to automatically correct for it?

Eloise

---

...in my opinion / experience...

While I agree "Everything may matter" working out what actually affects the sound is a trickier thing.

And I agree "Trust your ears" but equally don't allow them to fool you - trust them with a bit of skepticism.

keep your mind open... But mind your brain doesn't fall out.

Link to comment

ZFS is a bit foreign, although I could give it a shot. robbby's suggestion to use a software array on either platform is also appealing, but I need to get to a better area to download the data.

 

Either way, I'm stuck with building/buying a computer just to store files, whether it has RAID or not. The NAS route is not necessarily cheaper, and the throughput and file-saving performance of a NAS (basing this on the existing QNAP 419) is nothing stellar.

 

Rebuilding an array in ZFS could also take as long as the NAS approach (about two days for a 9TB JBOD array). Still none the wiser, but the input here is very good.

 

Still mulling.

AS Profile Equipment List        Say NO to MQA

Link to comment
ZFS is a bit foreign, although I could give it a shot. robbby's suggestion to use a software array on either platform is also appealing, but I need to get to a better area to download the data.

 

Either way, I'm stuck with building/buying a computer just to store files, whether it has RAID or not. The NAS route is not necessarily cheaper, and the throughput and file-saving performance of a NAS (basing this on the existing QNAP 419) is nothing stellar.

 

Rebuilding an array in ZFS could also take as long as the NAS approach (about two days for a 9TB JBOD array). Still none the wiser, but the input here is very good.

 

Still mulling.

Just as an observation on "home built server vs NAS" ... a modern NAS is pretty much just a dedicated Linux (or perhaps BSD) computer. The RAID is usually just the same software RAID - the difference is the user interface offered and backup/support. You also get things like hot swappable drives in a NAS and a neater case.

 

One option is NAS "distributions" like FreeNAS (FreeBSD with ZFS support) or Rockstor (Linux with BTRFS). Time for rebuilding an array will come down to processor speed and available memory ... the advantage is that with something like FreeNAS you can put in better processor capabilities. Something like HP's ProLiant MicroServer can be a good starting point for up to 4 disks - you can even upgrade to a full Xeon processor for added processing capability...

Eloise

---

...in my opinion / experience...

While I agree "Everything may matter" working out what actually affects the sound is a trickier thing.

And I agree "Trust your ears" but equally don't allow them to fool you - trust them with a bit of skepticism.

keep your mind open... But mind your brain doesn't fall out.

Link to comment
The ZFS is a bit foreign although I could give it a shot. The appeal if robbby's suggestion to use a software array on either platform is also a great idea, but I need to get to a better area to download the data.


 

With FlexRAID (and, for example, SnapRAID) you can start with filled disks, so you don't need to start with a large 9TB empty array. You can start with your current disks, add a parity disk, then move your data to larger disks as needed.

 

Here's a comparison between some of the options.

http://www.snapraid.it/compare.html

Link to comment
Just as an observation on "home built server vs NAS" ... a modern NAS is pretty much just a dedicated Linux (or perhaps BSD) computer. The RAID is usually just the same software RAID - the difference is the user interface offered and backup/support. You also get things like hot swappable drives in a NAS and a neater case.

 

One option is NAS "distributions" like FreeNAS (FreeBSD with ZFS support) or Rockstor (Linux with BTRFS). Time for rebuilding an array will come down to processor speed and available memory ... the advantage is that with something like FreeNAS you can put in better processor capabilities. Something like HP's ProLiant MicroServer can be a good starting point for up to 4 disks - you can even upgrade to a full Xeon processor for added processing capability...

 

Entirely agree

Custom room treatments for headphone users.

Link to comment

Looks like there are already quite a few suggestions here but I'll throw my hat into the mix with what I would do.

 

Purchase a 2-bay Synology NAS with no drives in it

Purchase 2 x 4TB 7200RPM SAS drives

NAS will be configured in a RAID 1 software mirror (garbage, but better than nothing)

Usable space will be 4TB

 

Next Up

 

I would then also purchase 2 x ElCheapo single-disk NAS devices, as cheap as you can find them, also 4TB in size.

 

I would then set up a schedule to "sync" the primary NAS with one of the ElCheapo NAS devices on some interval of your choosing (once a month, or once every 6 months, etc.).

 

By using 2 x ElCheapo NAS devices in a rolling "sync" schedule, you get two different versions of your library. This helps combat data corruption/bit-rot setting in while you're unaware, so that you don't "sync" the corruption over to the backup copy and cause an outbreak. This of course assumes you catch any corruption before you complete a rolling cycle of syncing both of your backup drives.

 

One thing folks should realize is that RAID will not protect against corruption: RAID will copy any corruption over to the other drive due to the nature of how it works. The only way to protect yourself against corruption is having multiple copies of your data at different points in time. Again, this only applies under the assumption that you know corruption exists and that you don't overwrite all your backups before realizing it.
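The rolling-sync idea above can be sketched as a toy scheduler (the target names are hypothetical, and the actual copy step is elided): alternating between two backup targets means that at any moment the two backups hold the library as of two different dates, so one full rotation must pass before corruption reaches both.

```python
from datetime import date
from itertools import cycle

# Two cheap backup NAS boxes, synced alternately (names are made up).
targets = cycle(["elcheapo-nas-1", "elcheapo-nas-2"])
last_synced: dict[str, date] = {}

def run_monthly_sync(today: date) -> str:
    """Pick the next target in rotation and record when it was synced."""
    target = next(targets)
    # ... the actual primary-NAS -> target copy would happen here ...
    last_synced[target] = today
    return target

run_monthly_sync(date(2015, 1, 1))
run_monthly_sync(date(2015, 2, 1))
run_monthly_sync(date(2015, 3, 1))
# elcheapo-nas-1 now holds March's state; elcheapo-nas-2 still holds February's.
```

The window for catching corruption before it propagates to both backups is exactly one rotation interval, which is why the sync interval matters.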

Link to comment
... One thing folks should realize is that RAID will not protect against corruption: RAID will copy any corruption over to the other drive due to the nature of how it works. The only way to protect yourself against corruption is having multiple copies of your data at different points in time. Again, this only applies under the assumption that you know corruption exists and that you don't overwrite all your backups before realizing it.

 

Have you read this thread? Lots of discussion about how newer software RAID tech (BTRFS, ZFS, Windows ReFS w/Storage Spaces) all have self-healing capabilities that are based on the RAID configurations.

Link to comment
Have you read this thread? Lots of discussion about how newer software RAID tech (BTRFS, ZFS, Windows ReFS w/Storage Spaces) all have self-healing capabilities that are based on the RAID configurations.

 

No I haven't read the thread, I just like commenting on topics for shits and giggles.

 

All kidding aside, my statement is based on real world personal experience as a storage engineer for large enterprises. What is your real world experience with this "self healing" capability you speak of? Have you seen it in action? Please let us know the details of how it turned out.

 

There is a reason why real corporations don't use this Mom and Pop software RAID found as freeware on the Interwebs. Let's use our brains here and think about it!

 

If it does all that you believe, then why wouldn't companies be jumping ship en masse to save millions of dollars and use your version of RAID instead?

Link to comment
No I haven't read the thread, I just like commenting on topics for shits and giggles.

 

All kidding aside, my statement is based on real world personal experience as a storage engineer for large enterprises. What is your real world experience with this "self healing" capability you speak of? Have you seen it in action? Please let us know the details of how it turned out.

 

There is a reason why real corporations don't use this Mom and Pop software RAID found as freeware on the Interwebs. Let's use our brains here and think about it!

 

If it does all that you believe, then why wouldn't companies be jumping ship en masse to save millions of dollars and use your version of RAID instead?

Wow; thanks for that beautifully curmudgeonly ad hominem.

 

Guess Sun, Oracle, and Microsoft have had nothing to do with providing real world technology to "real corporations" then.

Link to comment

By using 2 x ElCheapo NAS devices in a rolling "sync" schedule, you get two different versions of your library. This helps combat data corruption/bit-rot setting in while you're unaware, so that you don't "sync" the corruption over to the backup copy and cause an outbreak. This of course assumes you catch any corruption before you complete a rolling cycle of syncing both of your backup drives.

 

One thing folks should realize is that RAID will not protect against corruption: RAID will copy any corruption over to the other drive due to the nature of how it works. The only way to protect yourself against corruption is having multiple copies of your data at different points in time. Again, this only applies under the assumption that you know corruption exists and that you don't overwrite all your backups before realizing it.

 

You are correct that generic 'RAID' and certainly hardware RAID does not necessarily protect against this type of corruption. ZFS protects against this exact type of corruption and this is well described. By taking snapshots, one can protect against overwriting good copies of data with corrupted copies.
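The snapshot point can be illustrated with a toy model (plain Python; this is not ZFS's copy-on-write implementation, just the idea): a snapshot is a read-only point-in-time copy, so corruption that later lands in the live data never touches it, and you can roll back.

```python
from copy import deepcopy

# Live dataset and its named snapshots (toy in-memory model).
live = {"photos/kids.jpg": b"good bytes"}
snapshots: dict[str, dict] = {}

def take_snapshot(name: str) -> None:
    """Record a read-only point-in-time copy of the live data."""
    snapshots[name] = deepcopy(live)

def rollback(name: str) -> None:
    """Discard the live state and restore the named snapshot."""
    live.clear()
    live.update(deepcopy(snapshots[name]))

take_snapshot("2015-01-01")
live["photos/kids.jpg"] = b"corrupted!"   # silent corruption creeps in later
rollback("2015-01-01")                    # the snapshot still holds the good copy
print(live["photos/kids.jpg"])            # prints b'good bytes'
```

The real thing is far cheaper than a deep copy (ZFS snapshots share unchanged blocks), but the protection property is the same: the snapshot cannot be overwritten by later writes.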

Custom room treatments for headphone users.

Link to comment

That said, "rolling sync" is a good idea which I use.

 

A simple explanation of the way ZFS protects against "bit rot" is that it stores a checksum of each page. When accessing a mirrored page, the checksums are compared, and a bad checksum indicates bit rot on that page. It can then be "scrubbed" by copying the known good copy over the bad copy. One can be more compulsive with 3-way mirrors. As long as scrubs are done frequently enough, it is statistically unlikely to get an error on both pages, and the error rates are known and published.
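The compare-and-scrub step described above can be sketched like this (a simplified toy model, with CRC32 standing in for ZFS's checksums; not actual ZFS code):

```python
import zlib

def scrub_mirror(copy_a: bytearray, copy_b: bytearray, checksum: int) -> str:
    """Compare each mirrored copy against the stored checksum and repair
    the bad copy from the good one (simplified model of a scrub)."""
    a_ok = zlib.crc32(copy_a) == checksum
    b_ok = zlib.crc32(copy_b) == checksum
    if a_ok and b_ok:
        return "clean"
    if a_ok:
        copy_b[:] = copy_a        # overwrite the rotted copy with the good one
        return "healed B from A"
    if b_ok:
        copy_a[:] = copy_b
        return "healed A from B"
    return "unrecoverable"        # both copies bad: fall back to backups

good = bytearray(b"page data")
crc = zlib.crc32(good)
bad = bytearray(b"page dXta")     # simulated bit rot on one mirror
print(scrub_mirror(good, bad, crc))   # prints "healed B from A"
```

With a single drive (one copy) the same checksum check can only report the error; with a mirror there is a known-good copy to heal from, which is the distinction Eloise asked about above.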

 

I have indeed seen this work in action over the last 10 years that I have been using ZFS.

 

Just because it is freely available does not mean that it isn't valuable. In any case if you have a better system which you are prepared to recommend to the group, please tell us! Again, beyond the rolling sync which I also use (actually NAS -> NAS -> archived disk)

Custom room treatments for headphone users.

Link to comment
Just as an observation on "home built server vs NAS" ... a modern NAS is pretty much just a dedicated Linux (or perhaps BSD) computer. The RAID is usually just the same software RAID - the difference is the user interface offered and backup/support. You also get things like hot swappable drives in a NAS and a neater case.

 

One option is NAS "distributions" like FreeNAS (FreeBSD with ZFS support) or Rockstor (Linux with BTRFS). Time for rebuilding an array will come down to processor speed and available memory ... the advantage is that with something like FreeNAS you can put in better processor capabilities. Something like HP's ProLiant MicroServer can be a good starting point for up to 4 disks - you can even upgrade to a full Xeon processor for added processing capability...

 

The response time and throughput of an Atom-based NAS make it a high priority to avoid. A Xeon processor, even an E3, is streets ahead of an Atom, so the HP MicroServer is a step in the right direction. There's some half-a$$ed web page misdirection at the moment: finding anything on the HP pages about Gen 8 MicroServers is difficult, and you end up nowhere near product information, only support pages.

HP are splitting the business up into Enterprise (incorporating servers/workstations) and everything else into another business. I gather they don't even know where to split their web pages either, by the looks of it, so I may have to rely on vendor pages.

AS Profile Equipment List        Say NO to MQA

Link to comment
Looks like there are already quite a few suggestions here but I'll throw my hat into the mix with what I would do.

Your suggestion is a reasonable one, but yet again a Synology (or QNAP or Thecus, etc) NAS is just a (usually underpowered) Linux or BSD computer in a pretty box. There is nothing special about their RAID (certainly at the 2-4 drive enclosure level) and they run the same software RAID and Linux Volume Manager type systems as you get from a normal Linux install. What you do get from Synology, etc. is the support and web interface / management tools.

 

If you have a little DIY computer nous you would do better buying an Atom (or even basic Xeon) motherboard, memory and a case, and adding openSUSE (supports BTRFS out of the box) or another Linux, or something like RockStor or FreeBSD. If you don't want to actually build the physical hardware, then something like the HP MicroServer (or even Acer Revo One RL85) makes a great basis for a 2-4 drive Linux "NAS".

 

By using 2 x ElCheapo NAS devices in a rolling "sync" schedule, you get two different versions of your library. This helps combat data corruption/bit-rot setting in while you're unaware, so that you don't "sync" the corruption over to the backup copy and cause an outbreak. This of course assumes you catch any corruption before you complete a rolling cycle of syncing both of your backup drives.

Of course if you run BTRFS, ZFS or similar rather than old fashioned RAID you can detect and usually correct BitRot.

 

One thing folks should realize is that RAID will not protect against corruption: RAID will copy any corruption over to the other drive due to the nature of how it works. The only way to protect yourself against corruption is having multiple copies of your data at different points in time. Again, this only applies under the assumption that you know corruption exists and that you don't overwrite all your backups before realizing it.

The biggest reason for having backup is the risk of user error. Inadvertently deleting the wrong file, corrupting your metadata or (worst case) deleting everything off a partition or formatting the partition, etc.

Eloise

---

...in my opinion / experience...

While I agree "Everything may matter" working out what actually affects the sound is a trickier thing.

And I agree "Trust your ears" but equally don't allow them to fool you - trust them with a bit of skepticism.

keep your mind open... But mind your brain doesn't fall out.

Link to comment
You are correct that generic 'RAID' and certainly hardware RAID does not necessarily protect against this type of corruption. ZFS protects against this exact type of corruption and this is well described. By taking snapshots, one can protect against overwriting good copies of data with corrupted copies.

 

So does this mean you have personally witnessed ZFS fix a corrupted file in your music library?

 

Snapshots can be a dangerous option if your intention is to try and run from said Snapshot for long periods of time. If Bit-Rot occurs in the Snapshot you usually need to be willing to discard whatever changes have been made to the Snapshot since it was taken. Unless of course ZFS has the ability to allow you to pick and choose which files to discard and which ones to keep while still allowing you to commit everything else that hasn't fallen victim to Bit-Rot back to the base image. I've not seen such a capability but that’s not to say one doesn't exist.

Link to comment
Your suggestion is a reasonable one, but yet again a Synology (or QNAP or Thecus, etc) NAS is just a (usually underpowered) Linux or BSD computer in a pretty box. There is nothing special about their RAID (certainly at the 2-4 drive enclosure level) and they run the same software RAID and Linux Volume Manager type systems as you get from a normal Linux install. What you do get from Synology, etc. is the support and web interface / management tools.

 

I have no such issues with the Synology I am using, which achieves rather impressive transfer rates copying from my local SSD over a 1Gb link to the NAS.

 

I won't argue that the software RAID the majority of NASes utilize is garbage. But for a home user it's very cost-prohibitive to purchase a real hardware RAID solution for the home that also has enough capacity. We're talking about adding three zeros to the end of the price tag in most cases.

Link to comment
Wow; thanks for that beautifully curmudgeonly ad hominem.

 

Guess Sun, Oracle, and Microsoft have had nothing to do with providing real world technology to "real corporations" then.

 

I have no idea what you're attempting to say in your statement above. What do those companies have to do with the topic at hand, or anything I have said, unless you're referring to their wonderful contributions to the advancement of software RAID which they so generously include built into the OS? I can't think of anyone who would proudly admit to trusting it with their important data over hardware RAID.

Link to comment
