
Peanuts to Storage Space cost?



I've read a few posts like this one (and I'm certainly not singling out that particular CA post), and I'm left wondering: am I taking a different approach and going far too deep, or is everyone else not realising all the problems associated with larger libraries and a place to store them?

 

 

Storage Size Goal

As a reference, my library is 2TB, about 44,000 tracks. Learned the hard way: measure what you have now and double the space requirement, to last until next time. OK. This means a 4TB drive, or an array of drives, hopefully with the bonus of a bit more space and some redundancy.
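That doubling rule of thumb can be sanity-checked from a shell; a minimal sketch (the library path is a placeholder for wherever your music lives):

```shell
# Measure the library as it stands, then double it for the target size.
# /path/to/music is a placeholder for your library root.
current_kb=$(du -sk /path/to/music | cut -f1)   # total size in KB
target_kb=$(( current_kb * 2 ))                 # the "double it" rule of thumb
echo "Current: ${current_kb} KB, plan for at least: ${target_kb} KB"
```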

 

Storage controllers

The current computer that stores the music is a 'little' old by now, built in 2010. In its day, 1TB drives were large and 6TB unheard of. The RAID controllers in that computer are now obsolete and on their final firmware revisions, so they won't recognise larger drives like 6TB+.

So the best that can be had on the LSI RAID array is 1TB drives, four of them (that's the space available), in RAID 5 to make a 3TB array. 3TB is a little short of the 4TB target.
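As a sanity check, RAID 5 capacity is simple arithmetic: one drive's worth of space goes to parity. A quick shell sketch using the figures above:

```shell
# RAID 5 usable space = (drives - 1) x drive size; one drive's worth holds parity.
drives=4      # bays available in the old machine
size_tb=1     # largest capacity the old LSI controller accepts
usable_tb=$(( (drives - 1) * size_tb ))
echo "RAID 5 of ${drives} x ${size_tb}TB drives = ${usable_tb}TB usable"
```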

Now what? Larger drives are required, but a 3TB drive will not work when the controllers are rated for only 1TB. Windows 7 will recognise the drive's correct size in a USB portable or docking-station enclosure, but it comes up short when used as a native SATA drive. So: buy another PCIe controller card. Four-port RAID cards are available; good names are $500-$1000, with the bonus of catering for up to 16 SATA drives. Handy, but if there's only space for four drives, what's the point?

RAID 5 allows for one drive failure. But a drive usually fails several years down the track; will a matching replacement still be available by then? You might be lucky, especially with enterprise drives.

 

Costs

So the budget has now changed from one drive to four + a spare(s) + a new HBA/RAID Card. On current pricing for enterprise drives:

5 x HGST 6TB 3.5in 24x7 drives at $370 each: $1,850 total.

1 x controller card (averaging $700). Drives plus controller card is now $2,550. This array will yield around 12TB over what was originally planned. OK, let's try 3TB drives, on special now at Amazon for $122 each, nice. Still, 5 x $122 + $700 = $1,310. Talking of spares: will the controller card be available in the future? One day it will also fail. The MTBF is quite high on these, so it's a risk assessment whether to buy a spare or not. If the array fails totally, the data is lost. Always have a backup.
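The two costings above reduce to quick shell arithmetic (prices are the ones quoted at the time, not current):

```shell
controller=700    # averaged price of a decent RAID/HBA card
drives=5          # four in the array plus one physical spare

# Option A: 6TB enterprise drives at $370 each, plus the controller.
echo "6TB option: \$$(( drives * 370 + controller ))"   # $2550

# Option B: the $122 3TB specials in the same five bays.
echo "3TB option: \$$(( drives * 122 + controller ))"   # $1310
```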

 

RAID is not a backup, so at minimum there's another portable drive or a backup solution required, or another computer with its own RAID array. That's a BOAT, make it two BOATs (Break Out Another Thousand). Let's hope a 6TB drive plus an enclosure will actually last 18 months. I have yet to find a portable enclosure, metal or plastic, that lasts that long; the enclosure fails, though usually the drive inside is OK, but no guarantee. I've tried Seagate, Fantom and WD; all have failed when used 24x7, at a 50% rate for the WD and Seagate, 25% for the Fantom. If portable drives are only used sparingly you can get more life out of them, but don't count on them for continuous use and peace-of-mind reliability. I suppose I could have looked up the rated duty cycle for portable drives, but I don't see any disclaimer about NOT using them 24x7.

 

Different Solutions

 

The other alternative is of course a NAS. For four drives, the enclosure alone is something like $500, plus the drives and a spare. That's about the price of a new PCIe controller card, though a little more flexible given the extra software a NAS includes to make it behave like a PC. With more drive bays, allowing for spare drives and a larger array, the price creeps to the $1,000 mark and slightly over: another BOAT.

 

Why not just run the main library drive from a portable USB, Thunderbolt or FireWire port? Thunderbolt is quite expensive, about double or triple a regular portable; does it increase the reliability of the controller? No real guarantee, but let's try USB drives. There is a slight downside to USB drives: when listening, I found a noticeable drop in quality between music from a USB portable drive and music from a SATA drive inside the computer. I'd say it's the PSU for the USB drive, plus the transmission, causing the loss of focus in the soundstage. I found similar with a NAS. That's my perception; it might be fine in other installations. There's usually another SMPS to deal with; trash that for a larger linear supply and (beat) the costs go on ...

In any case, at minimum another drive is needed as a backup, so the spend is at least 2x. I'm distrustful and usually keep a third drive as a backup: 3x. Alternatively you can store in the cloud, but OMG, 2TB or more takes weeks to upload, and is the security really THAT good? So that means plugging and unplugging drives to hopefully get a high MTBF out of them, wear on the connectors notwithstanding.

 

In any event, copying terabytes of data from one place to another onto a NAS can take several days; between internal drives, a few hours. Oh, did I mention that building a RAID 5 array on the now-ancient QNAP 419 NAS took 48 hours to complete, versus about 5 seconds to create the same RAID 5 in a workstation?

 

So the next time the topic comes up that storage space costs peanuts: the cascading consequences of going to larger drives are certainly not peanuts. I'm asking (and begging): is there another solution?

AS Profile Equipment List        Say NO to MQA


I advocate simply mirroring the drives using ZFS as the filesystem. Use FreeNAS or Ubuntu (install ZFS). You don't need a special "RAID" controller; ZFS is much, much better than hardware RAID. For hard drives, SAS2 is really fine. There are LSI 9200-series controllers for under $100 which will handle 8 internal SAS drives. Get some RAM, and you are good to go.

 

I'm in the middle of a new NAS build (15-bay) which I'll post about. (Really, the only way to back up a 16TB NAS is with another NAS :) well, there's tape ...)

Custom room treatments for headphone users.

I advocate simply mirroring the drives using ZFS as the filesystem. Use FreeNAS or Ubuntu (install ZFS). You don't need a special "RAID" controller; ZFS is much, much better than hardware RAID. For hard drives, SAS2 is really fine. There are LSI 9200-series controllers for under $100 which will handle 8 internal SAS drives. Get some RAM, and you are good to go.

 

I'm in the middle of a new NAS build (15-bay) which I'll post about. (Really, the only way to back up a 16TB NAS is with another NAS :) well, there's tape ...)

 

Thanks for the heads-up on ZFS (which I had to look up, since I'm only familiar with NTFS). I like that ZFS is designed for data integrity, which NTFS is not: in NTFS the corrupt file is only announced when you try to use it, which is kind of dumb.

 

The HBA still needs to support the largest drive *IT* can use, rather than what's installed, right? I could do eSATA with an external array, since the controller has a few spare ports. Where I live it can be humid in the summer, and previous steel PCs have been affected by rust; I wonder if there are any similar designs in aluminium about?

 

Can ZFS read NTFS?

AS Profile Equipment List        Say NO to MQA


You could use a motherboard like this: ASRock E3C224D4I-14S Extended mini ITX Server Motherboard LGA 1150 Intel C224 DDR3 1600/1333 - Newegg.com along with a case like this: LIAN LI PC-Q26B Black Aluminum Computer Case - Newegg.com

 

Not "hot swap", but aluminum. I use aluminum hot swap bays like this: Amazon.com: iStarUSA BPN-DE350SS-RED 3x5.25 to 5x3.5 Trayless Red: Computers & Accessories but that involves more cost and building. All depends on your cost constraints.

 

Regarding drives, I like the Hitachi SAS (not SATA): HGST Ultrastar 7K4000 HUS724040ALS640 (0B26885) 4TB 7200 RPM 64MB Cache SAS 6Gb/s 3.5" Enterprise Hard Drive Bare Drive - Newegg.com. The drives are formatted as ZFS, so just copy the old NTFS directories onto the new pool using rsync, or drag & drop in the file browser, which will use SMB.

 

What you do is basically create a ZFS mirror using two drives and then share it over SMB:

 

$ zpool create pool mirror /dev/sda /dev/sdb   # two-way mirror across both whole drives
$ zfs create pool/music                        # a dataset for the library
$ zfs set compression=on pool/music            # transparent compression; free, though FLAC barely shrinks
$ zfs set sharesmb=name=music pool/music       # share it over SMB as "music" (on Linux, sharesmb=on also works)

 

then copy your music files in.

 

I think FreeNAS will help you do this with a GUI.

Custom room treatments for headphone users.


Hi! I ended up choosing a software solution. The program is called "Drivepool", by Stablebit. Basically it creates a virtual drive out of the free space of all the drives in your system that you want to participate in the pool (any speed, any kind, any capacity), and you can choose the redundancy method to survive the failure of one drive, two, etc. The program keeps copies of the files on different drives in a really transparent way.

I'm not related to the developer in any way; I just think it's a fantastic solution for a home server ($30).

I'm curious if any other member has used it.

 

Sorry for my English.

Franco from Argentina.

MM- I am finishing and writing up an alternative solution that you might find a good fit. Cost efficient, reliable, and to my ears, it sounds really really good. YMMV of course. I will try to write it up and post it today or tomorrow.

 

-Paul

 

I'd be keen to hear about that solution, thanks in advance. Something where the PCIe bus extends to other drives is appealing; jabbr's suggestion of the SAS/SATA enclosure needs some thought.

AS Profile Equipment List        Say NO to MQA


Another solution, and one which I have been using, is a software alternative to RAID called RAID over File System, made by FlexRAID.

 

Data Protection & Recovery | FlexRAID

 

You can use any number of drives, all of varying sizes, and create a single parity drive (or as many as you'd like) to protect your data. The nice thing about FlexRAID is that data is not striped across all the drives, so if one drive fails you still have access to all your other data and only need to rebuild that single drive; with striped RAID you lose access to everything until the failed drive is rebuilt.

 

Another plus is that you can add a new drive to FlexRAID whenever you like, with no need to rebuild the entire array.

 

There are also cost savings: no need for a fancy RAID card. You can buy a regular PCIe-to-SATA card that does JBOD, without a costly RAID implementation.

 

Edit: here is a much better write-up comparing FlexRAID to hardware RAID:

Hardware RAID disadvantages and the advantages of FlexRAID - FlexRAID


I wasn't recommending it because of the file system; it was simply a recommendation from a cost and ease-of-use perspective. As far as I know, at least in the case of Windows, it just uses the disks' existing filesystem, so NTFS. FlexRAID offers RAID-F and tRAID for Linux/Unix, so I'm unsure whether it works with the ZFS filesystem, though I imagine it would.

Storage Size Goal

As a reference my library is 2TB 44,000 tracks. Learned the hard way, measure what you have now and double the space requirement, until next time. OK. This means a 4TB drive or an array of drives hopefully with a bonus of a bit more space and provide some redundancy....

 

One of the beauties of the newer next-gen filesystems is that they can grow organically. I'm thinking in particular of Windows ReFS + Storage Spaces, and BTRFS. You can start with two drives in, say, a mirrored config, and just add drives to the array as needed.

 

This is one major limitation, as I understand it, with a lot of other solutions, including ZFS.

 

I've had a basic two-disk Synology NAS for a couple of years, but am migrating to a) ReFS + Storage Spaces mirrored arrays on my desktop, and b) a BTRFS-based DIY NAS. That'll give me fault-tolerant, bitrot-resistant storage all around, with great flexibility, at low cost.

I wasn't recommending it because of the file system; it was simply a recommendation from a cost and ease-of-use perspective. As far as I know, at least in the case of Windows, it just uses the disks' existing filesystem, so NTFS. FlexRAID offers RAID-F and tRAID for Linux/Unix, so I'm unsure whether it works with the ZFS filesystem, though I imagine it would.

 

Windows Storage Spaces? I haven't used it, but understand that this is Windows's software RAID. I like ZFS because it was implemented at Sun Microsystems for their own storage products, and then open-sourced. (BTRFS, interestingly, was started at Oracle.) In any case, ZFS has been around since 2005 and is well tested; you can literally move the disk drives between Solaris, Linux and FreeBSD, so the risk of being stranded by an obsolete controller is rather low.

Custom room treatments for headphone users.


I'm not really convinced of the need for RAID in home applications. Why is such high availability really required? If I have a hard drive failure (which happens about once a decade for me) I just listen to the radio for a few days until the replacement arrives. And if you skip the dumb RAID mirroring approach, that's halved your drive requirement already, right?

 

"Better" filesystems are, on the other hand, always welcome.

 

bliss - fully automated music organizer. Read the music library management blog.

Windows Storage Spaces? I haven't used it, but understand that this is Windows's software RAID. I like ZFS because it was implemented at Sun Microsystems for their own storage products, and then open-sourced. (BTRFS, interestingly, was started at Oracle.) In any case, ZFS has been around since 2005 and is well tested; you can literally move the disk drives between Solaris, Linux and FreeBSD, so the risk of being stranded by an obsolete controller is rather low.

 

It is not windows storage spaces.

I'm not really convinced of the need for RAID in home applications. Why is such high availability really required? If I have a hard drive failure (which happens about once a decade for me) I just listen to the radio for a few days until the replacement arrives. And if you skip the dumb RAID mirroring approach, that's halved your drive requirement already, right?

 

"Better" filesystems are, on the other hand, always welcome.

 

A dumb RAID mirror still has the potential for the hardware controller to die and all data to be lost. For the last 12 months I've 'mirrored' drives using Beyond Compare, just standard copy and paste. If a drive failed, the other was still fine; if the whole hard disk controller died or the motherboard fell over, I could remove the drive(s), install them in another computer, and the files were OK, apart from some permissions issues.

As mentioned before, the HBA/RAID controllers (two of them) have a 1TB drive-capacity limit. When a 3TB drive is connected I lose data, so that's why I'm looking for an alternative to an expensive HBA while keeping the existing machine and increasing the storage size on the same bus.

I don't want to discuss too far on benefits or disadvantages of RAID since there are holes in every storage methodology.

AS Profile Equipment List        Say NO to MQA

As mentioned before, the HBA/RAID controllers (two of them) have a 1TB drive-capacity limit. When a 3TB drive is connected I lose data, so that's why I'm looking for an alternative to an expensive HBA while keeping the existing machine and increasing the storage size on the same bus.

If you go for using ZFS or similar on FreeBSD or Linux, you don't need a specific RAID HBA. Just a "simple", "dumb" controller.

 

One thing I didn't quite understand from your OP, why are you needing to go from 3TB currently, to around 24TB?

Eloise

---

...in my opinion / experience...

While I agree "Everything may matter" working out what actually affects the sound is a trickier thing.

And I agree "Trust your ears" but equally don't allow them to fool you - trust them with a bit of skepticism.

keep your mind open... But mind your brain doesn't fall out.

I'm not really convinced of the need for RAID in home applications. Why is such high availability really required? If I have a hard drive failure (which happens about once a decade for me) I just listen to the radio for a few days until the replacement arrives. And if you skip the dumb RAID mirroring approach, that's halved your drive requirement already, right?

 

"Better" filesystems are, on the other hand, always welcome.

I think that's a reasonable position too.

 

But you also lose, in that scenario, the self-healing capabilities you get in a RAID setup using ZFS, BTRFS, ReFS.

 

I'm a little paranoid, and disk drives are cheap.

If you go for using ZFS or similar on FreeBSD or Linux, you don't need a specific RAID HBA. Just a "simple", "dumb" controller.

 

One thing I didn't quite understand from your OP, why are you needing to go from 3TB currently, to around 24TB?

 

So if I use the standard SAS/SATA controller I have now, which can't see anything beyond 1TB, will ZFS recognise the larger drive? Like Windows reporting the right size over USB: how does the OS overcome the physical limitations of the hard disk controller?

 

Four 3TB drives in RAID 5 is 9TB. The 5th drive is a physical spare. Where did the 24TB come from again?

AS Profile Equipment List        Say NO to MQA

So if I use the standard SAS/SATA controller I have now, which can't see anything beyond 1TB, will ZFS recognise the larger drive? Like Windows reporting the right size over USB: how does the OS overcome the physical limitations of the hard disk controller?

If the OS recognises the correct disk size, then ZFS will utilise the whole space on the drive. Exactly which LSI HBA card you have will determine its capabilities. You may need to upgrade.

 

You might find https://forums.freenas.org/index.php?threads/confused-about-that-lsi-card-join-the-crowd.11901/ a useful read.

 

Four 3TB drives in RAID 5 is 9TB. The 5th drive is a physical spare. Where did the 24TB come from again?

In your "Costs" section, you talked about "5 x HGST 6TB 3.5in 24x7 drives $370 each $1850.00 total." I was assuming 5 drives in RAID 5, which gives you 24TB of space (or 18TB in a RAID 6 array).

 

Basically if you rely on ZFS, rather than hardware RAID, you will be able to take the drives to another system running the same OS and access the data.
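That portability comes down to an export/import pair of commands. This is only a sketch ("tank" is a placeholder pool name), and it obviously needs a machine with OpenZFS installed and the actual disks attached:

```shell
# On the old box: cleanly detach the pool from the system.
zpool export tank

# Move the drives to the new box (Solaris, Linux or FreeBSD), then:
zpool import tank    # scans attached disks and reassembles the pool from its on-disk labels
zpool status tank    # confirm every mirror member shows ONLINE
```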

 

I admit I may be misunderstanding what you are aiming for...

Eloise

---

...in my opinion / experience...

While I agree "Everything may matter" working out what actually affects the sound is a trickier thing.

And I agree "Trust your ears" but equally don't allow them to fool you - trust them with a bit of skepticism.

keep your mind open... But mind your brain doesn't fall out.

I'm not really convinced of the need for RAID in home applications. Why is such high availability really required? If I have a hard drive failure (which happens about once a decade for me) I just listen to the radio for a few days until the replacement arrives. And if you skip the dumb RAID mirroring approach, that's halved your drive requirement already, right?

 

"Better" filesystems are, on the other hand, always welcome.

 

Because you can very, very easily corrupt your data even if you are doing backups, and if you care about your family photos ...

 

Basically, the minimal expense of mirroring with ZFS will save you a huge amount of worrying about what you are doing...

Custom room treatments for headphone users.

So if I use the standard SAS/SATA controller which I have now that can't see anything beyond 1TB, by using ZFS , it can recognise the larger drive? Like Windows, reports the right size, how does the OS overcome the physical limitations of the hard disk controller?

 

Four 3TB drives in RAID 5 is 9TB. The 5th drive is a physical spare. Where did the 24TB come from again?

 

Get an LSI 9200-series SAS2 8-drive controller for under $100. Get two 4TB Hitachi SAS drives and mirror them. Need more space? Get another two drives.
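"Need more space, get another two drives" is literally one command with ZFS: a second mirrored pair is striped alongside the first. A sketch (pool name and device paths are placeholders; it needs real disks and OpenZFS installed):

```shell
# Start with a two-drive mirror:
zpool create tank mirror /dev/sda /dev/sdb

# Later, add a second mirrored pair; usable capacity grows by one drive's worth:
zpool add tank mirror /dev/sdc /dev/sdd
```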

Custom room treatments for headphone users.

Because you can very, very easily corrupt your data even if you are doing backups, and if you care about your family photos ...

 

Basically, the minimal expense of mirroring with ZFS will save you a huge amount of worrying about what you are doing...

Agreed, but that's got nothing to do with hot redundancy (which, more specifically, is what I am raising as not really required).

 

I'm just raising the possibility that high availability may not be required for a home media setup; if storage cost is a concern, that's one thing to look at.

 

bliss - fully automated music organizer. Read the music library management blog.

Agreed, but that's got nothing to do with hot redundancy (which, more specifically, is what I am raising as not really required).

 

I'm just raising the possibility that high availability may not be required for a home media setup; if storage cost is a concern, that's one thing to look at.

 

Your experience with drive failure does not match mine. Typically, drives here fail within two years of regular use. To date the HGST 24x7 drives have been very good: three years on and they are still running OK.

AS Profile Equipment List        Say NO to MQA
