think tank forum

technology » zfs

lucas (i ❤ demo) · 13 years ago
so is zfs production-ready on freebsd?

i'm thinking about migrating my geom raid 10 array to a zfs raidz2 (raid 6) array.

i guess freebsd can boot from zfs as of freebsd 8 (link).
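for reference, creating the raidz2 pool should look something like this (untested sketch on my part; the pool name and disk names are just placeholders for whatever the drives actually show up as):

    # create a double-parity (raidz2) pool from six disks
    zpool create tank raidz2 da0 da1 da2 da3 da4 da5
    # check the layout and health of the pool
    zpool status tank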

Carpetsmoker (Martin) · 13 years ago
Yes, it's production ready. I haven't used it myself but I know several people who use it extensively in business environments.

That being said, I still prefer UFS.

lucas (i ❤ demo) · 13 years ago
why do you prefer ufs?

if you were to set up raid 6, how would you? would you pay $500 for a hardware card and use ufs? or would you simply use zfs?

the machine i'm doing this all with has plenty of ram and clock cycles to do software raid. i don't need more hardware. :)

Carpetsmoker (Martin) · 13 years ago
> why do you prefer ufs?

Simpler to set up.

> if you were to set up raid 6, how would you?

I would avoid RAID-5/RAID-6 and opt for RAID-1.

lucas (i ❤ demo) · 13 years ago
> I would avoid RAID-5/RAID-6 and opt for RAID-1

why?

right now i'm running raid 10. the problem is that i can only lose one disk before i'm at risk for data loss. i want to be able to lose any two disks before being at risk.

zfs's auto hot spare feature is cool, though. maybe raid 10 with an auto hot spare would be nearly as good as raid 6.
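a striped mirror with a hot spare would be built roughly like this, i think (again an untested sketch; pool and device names are placeholders):

    # two mirrored pairs striped together, plus one hot spare
    zpool create tank mirror da0 da1 mirror da2 da3 spare da4
    # optional: have zfs automatically replace a failed device
    # with a new one found in the same slot (see zpool(8))
    zpool set autoreplace=on tank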

ozntz · 13 years ago
If you lose one drive and a second drive already has unrepaired hard errors, you could lose data even with a single-drive failure.

I use raid 10 where I need high performance, e.g. an Exchange DB and ESX (though you probably wouldn't notice it there, since it's on a SAN with 2GB SPs). Raid 5 on arrays with fewer than 8 enterprise disks and raid 6 on 8+. While I haven't had a problem, I have always opted for a higher-end controller for the raid 6 configurations.

Anyone seen a performance comparison of software raid 5 vs 6?

Carpetsmoker (Martin) · 13 years ago
> > I would avoid RAID-5/RAID-6 and opt for RAID-1
>
> why?

I spent too many hours unfucking RAID-5 arrays to trust them.

Examples:
- One drive fails. Before it gets replaced another drive fails. Oops!
- One drive fails and the RAID controller fucks up the rebuild. Oops!
- One drive fails and during rebuild it appears that other drive(s) have bad sectors. Oops!
- One drive fails, the RAID controller fucks up, re-adds the drive, and starts to rebuild. Oops!

In theory, RAID-5 is a great storage solution. In the real world, it's not so great.

I'm not saying you should always avoid RAID-5 under any and all circumstances, just be aware that it's a complex setup and when it breaks, it can be a pain to unbreak it. IF it's even possible to unbreak it.

A RAID-1 (or RAID-10 if you *must* have the performance, but I would avoid striping altogether if possible) setup is usually not a whole lot more expensive.
This of course depends on the amount of space you need.
If you need 40TB, then RAID-5 with a tape backup is probably a good solution. If you "only" need a couple of TB, then RAID-1 will do just fine IMHO.

YMMV though ...

lucas (i ❤ demo) · 13 years ago
> One drive fails. Before it gets replaced another drive fails. Oops!

also an issue with raid 1 (assuming two providers in the set). that's why i wanted raid 6, with which this is a non-issue.

> One drive fails and the RAID controller fucks up the rebuild. Oops!

this is indeed very scary for both raid 5 and raid 6. even the high-end controllers on the market sound ghetto from the reviews.

> One drive fails and during rebuild it appears that other drive(s) have bad sectors. Oops!

this is also very scary and certainly a big issue with large-capacity consumer-grade disks. it's probably also an issue with enterprise-grade disks.

> One drive fails, the RAID controller fucks up and re-adds the drive and start to rebuild. Oops!

i can't imagine that happening (i'm not saying it can't nor won't). weird.

> In theory, RAID-5 is a great storage solution. In the real world, it's not so great.

theory doesn't matter. the origin of raid is the real world. if the raid type doesn't satisfy real-world needs, it is not good in theory or in practice. raid 5 sucks because, even if the controller is perfect and never fucks up, it only protects against one drive failure, and all of the other drives have to be perfectly error-free during a rebuild (good luck).

i'm pretty worried about parity. it seems like a messy business. that said, i think zfs raidz2 is more trustworthy than most raid controllers on the market. i would never run raid 5--only raid 6 or higher.

i'm not using raid 0 for performance. i'm using it because i need a large volume (large enough that it needs to span multiple disks). what should i do instead? concat? if i'm going to concat, i may as well gain some performance and stripe.

Carpetsmoker (Martin) · 13 years ago
>> One drive fails, the RAID controller fucks up and re-adds the drive and start to rebuild. Oops!
>
> i can't imagine that happening (i'm not saying it can't nor won't). weird.

It happened to a customer last week :-)

>> One drive fails. Before it gets replaced another drive fails. Oops!
>
> also an issue with raid 1 (assuming two providers in the set).

True, but IF it happens you're in a much better situation.

Example I worked on a few weeks back:
A customer with a 4-disk RAID-5 has one failed disk (this disk is *broken*; I sent it to OnTrack and they said the magnetic surface is damaged beyond repair).
However, I also discovered that another disk was "silently" removed from the array a couple of months earlier: so the "rebuild" doesn't give you a working filesystem, but since many blocks haven't changed in a few months, those files should be intact.
I recovered some files with photorec, but you lose the directory structure and filenames this way ... Maybe I can recover some stuff from the metadata in the files ... This is an ongoing project.

This is a business that is legally obliged to keep this data available. So even old data is valuable ...

If this had happened with RAID-1, I would have been able to use the older disk without too many problems ... It also would have cost our customer much less than the 2500 euro it's going to cost him now ...

> that's why i wanted raid 6, with which this is a non-issue.

I don't know ... If two disks can fail, then three disks can also fail ...

Carpetsmoker (Martin) · 13 years ago
> i'm not using raid 0 for performance. i'm using it because i need a large volume (large enough that
> it needs to span multiple disks). what should i do instead? concat? if i'm going to concat, i may as
> well gain some performance and stripe.

Again, because it's much more difficult to fix when it breaks.

I would use 2xRAID-1 and put those in a concat. Or create two concat spans and put those in RAID-1 (same effect).

If you use FreeBSD gmirror, you already get performance boosts on read operations. You have some options to tune this by the way, see gmirror(8).
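Rough sketch of that layout (device names are placeholders, and I'm typing this from memory, so check the man pages):

    # two mirrors (the geom_mirror and geom_concat modules must be loaded)
    gmirror label -v gm0 da0 da1
    gmirror label -v gm1 da2 da3
    # concat the two mirrors into one big provider and put a filesystem on it
    gconcat label -v data mirror/gm0 mirror/gm1
    newfs -U /dev/concat/data
    # the read-balancing algorithm can be tuned per mirror, see gmirror(8)
    gmirror configure -b load gm0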

Carpetsmoker (Martin) · 13 years ago
> i'm pretty worried about parity. it seems like a messy business.

It's just XOR operations, not that difficult really.
http://www.scottklarr.com/topic/23/how-raid-5-really-works/
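A quick made-up example with 4-bit blocks:

    parity = 1010 XOR 0110 XOR 0011 = 1111
    lose the middle block, then XOR the rest with the parity to get it back:
    1010 XOR 0011 XOR 1111 = 0110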

> i think zfs raidz2 is more trustworthy that most raid controllers on the market.

The software RAID vs. hardware RAID discussion is an old and unresolved one ... But yeah, I would also go with software raid if you don't need super performance.

lucas (i ❤ demo) · 13 years ago
> It's just XOR operations, not that difficult really.

i don't mean the math. i mean the complications of its use, as evidenced (in part) by your examples.

Carpetsmoker (Martin) · 13 years ago
Yeah, when it goes wrong, it *really* goes wrong.

I'm not saying you shouldn't choose RAID-5 or RAID-6, just be aware of the disadvantages and issues that are associated with it ...