Discussion: redundant storage

Julien Cigar
2016-06-03 08:38:43 UTC
Hello,

I'm looking for a low-cost redundant HA storage solution for our (small)
team here (~30 people). It will be used to store files generated by some
webapps, to provide a redundant dovecot (imap) server, etc.

For the hardware I have to go with HP (no choice), so I planned to buy
2 x HP ProLiant DL320e Gen8 v2 E3-1241v3 (768645-421) with
4 x WD Re SATA 4TB 3.5in 6Gb/s 7200rpm 64MB buffer drives
(WD4000FYYZ) in a RAID1 config (the machine has a Smart Array P222
controller, which is apparently supported by the ciss(4) driver).

On the FreeBSD side I plan to use HAST with CARP, and the volumes will
be exported through NFSv4.
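
Concretely, the export side would look something like this (the /data
path and the client network are placeholders, nothing is decided yet):

```
# /etc/rc.conf
nfs_server_enable="YES"
nfsv4_server_enable="YES"

# /etc/exports
V4: /data -sec=sys -network 192.168.0.0 -mask 255.255.255.0
/data -maproot=root -network 192.168.0.0 -mask 255.255.255.0
```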

Any comments on this setup (or other recommendations) ? :)

Thanks!
Julien
--
Julien Cigar
Belgian Biodiversity Platform (http://www.biodiversity.be)
PGP fingerprint: EEF9 F697 4B68 D275 7B11 6A25 B2BB 3710 A204 23C0
No trees were killed in the creation of this message.
However, many electrons were terribly inconvenienced.
Steve O'Hara-Smith
2016-06-03 09:41:38 UTC
Hi,

Just one change - don't use hardware RAID1, use ZFS mirrors. ZFS does
better RAID than any hardware controller.
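
With four disks that means a striped pair of mirrors, something like
this (device names are hypothetical, and ideally the P222 would be put
in HBA/pass-through mode so ZFS sees the raw disks rather than logical
volumes):

```
# da0-da3 are assumed names for the four WD drives
zpool create tank mirror da0 da1 mirror da2 da3
zpool status tank
```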

On Fri, 3 Jun 2016 10:38:43 +0200
Post by Julien Cigar
[...]
--
Steve O'Hara-Smith <***@sohara.org>
Julien Cigar
2016-06-03 10:14:46 UTC
Post by Steve O'Hara-Smith
Hi,
Just one change - don't use RAID1 use ZFS mirrors. ZFS does better
RAID than any hardware controller.
right.. I must admit that I haven't looked at ZFS yet (I'm still using
UFS + gmirror), but this will be a good opportunity to do so!

Does ZFS play well with HAST?
Post by Steve O'Hara-Smith
On Fri, 3 Jun 2016 10:38:43 +0200
Post by Julien Cigar
[...]
--
Julien Cigar
Belgian Biodiversity Platform (http://www.biodiversity.be)
PGP fingerprint: EEF9 F697 4B68 D275 7B11 6A25 B2BB 3710 A204 23C0
No trees were killed in the creation of this message.
However, many electrons were terribly inconvenienced.
Steve O'Hara-Smith
2016-06-03 10:47:46 UTC
On Fri, 3 Jun 2016 12:14:46 +0200
Post by Julien Cigar
Post by Steve O'Hara-Smith
Hi,
Just one change - don't use RAID1 use ZFS mirrors. ZFS does
better RAID than any hardware controller.
right.. I must admit that I haven't looked at ZFS yet (I'm still using
UFS + gmirror), but it will be the opportunity to do so..!
Does ZFS play well with HAST?
Never tried it, but it should work well enough: ZFS sits on top of
GEOM providers, so it should be possible to use the pool on the primary.

One concern would be that since all reads come from local storage,
the secondary machine never gets scrubbed, so silent corruption on the
secondary never gets detected. A periodic (say weekly) switchover and
scrub takes care of this concern. Silent corruption is rare, but the
bigger the pool and the longer it's used, the more likely it is to
happen eventually; detection and repair of this is one of ZFS's
advantages over hardware RAID, so it's good not to defeat it.
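
The weekly switchover-and-scrub could be as simple as this on the
standby box (the resource name "shared" and pool name "tank" are made
up for illustration):

```
# Promote this node, import the pool (it was last active on the
# other node, hence -f), and scrub so the local disks' checksums
# actually get verified.
hastctl role primary shared
zpool import -f tank
zpool scrub tank
```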

Drive failures on the primary will wind up causing both the primary
and the secondary to be rewritten when the drive is replaced - this could
probably be avoided by switching primaries and letting HAST deal with the
replacement.

Another very minor issue would be that any corrective rewrites (for
detected corruption) will happen on both copies but that's harmless and
there really should be *very* few of these.

One final concern, but this one is purely about HAST and not really
ZFS. Writing a large file flat out will likely saturate your LAN, with
half the capacity going to copying the data for HAST. A private backend
link between the two boxes would be a good idea (or 10 gigabit Ethernet).
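
In hast.conf terms that just means pointing "remote" at the private
link, along these lines (hostnames and the back-to-back 10.0.0.x
addresses are illustrative; with ZFS on top you'd define one resource
per disk and build the pool on the /dev/hast/* providers):

```
resource shared {
	on storage1 {
		local /dev/da0
		remote 10.0.0.2		# secondary, over the dedicated NIC
	}
	on storage2 {
		local /dev/da0
		remote 10.0.0.1		# primary, over the dedicated NIC
	}
}
```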
Post by Julien Cigar
Post by Steve O'Hara-Smith
On Fri, 3 Jun 2016 10:38:43 +0200
Post by Julien Cigar
[...]
--
Steve O'Hara-Smith              |   Directable Mirror Arrays
C:>WIN                          | A better way to focus the sun
The computer obeys and wins.    |    licences available see
You lose and Bill collects.     |    http://www.sohara.org/
Julien Cigar
2016-06-03 11:50:20 UTC
Post by Steve O'Hara-Smith
On Fri, 3 Jun 2016 12:14:46 +0200
Post by Julien Cigar
Post by Steve O'Hara-Smith
Hi,
Just one change - don't use RAID1 use ZFS mirrors. ZFS does
better RAID than any hardware controller.
right.. I must admit that I haven't looked at ZFS yet (I'm still using
UFS + gmirror), but it will be the opportunity to do so..!
Does ZFS play well with HAST?
Never tried it but it should work well enough, ZFS sits on top of
geom providers so it should be possible to use the pool on the primary.
One concern would be that since all reads come from local storage
the secondary machine never gets scrubbed and silent corruption never gets
detected on the secondary. A periodic (say weekly) switch over and scrub
takes care of this concern. Silent corruption is rare, but the bigger the
pool and the longer it's used the more likely it is to happen eventually,
detection and repair of this is one of ZFSs advantages over hardware RAID
so it's good not to defeat it.
Thanks, I'll read a bit on ZFS this weekend!

My ultimate goal would be that the HAST storage survives a hard reboot,
an unplugged network cable, etc. during heavy write I/O, and that the
switch between the two nodes is transparent to the clients, without any
data loss of course ... feasible or utopian? Needless to say, what I
want to avoid at all costs is the storage becoming corrupted and
unrecoverable!
Post by Steve O'Hara-Smith
Drive failures on the primary will wind up causing both the primary
and the secondary to be rewritten when the drive is replaced - this could
probably be avoided by switching primaries and letting HAST deal with the
replacement.
Another very minor issue would be that any corrective rewrites (for
detected corruption) will happen on both copies but that's harmless and
there really should be *very* few of these.
One final concern, but it's HAST purely and not really ZFS. Writing
a large file flat out will likely saturate your LAN with half the capacity
going to copying the data for HAST. A private backend link between the two
boxes would be a good idea (or 10 gigabit ethernet).
yep, that's what I had in mind! one NIC for the replication between
the two HAST nodes, and one (CARP) NIC through which the clients access
the storage..
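
something like this in rc.conf on the first node (interface names,
VHID, password, and addresses are all placeholders):

```
# carp_load="YES" in loader.conf is also needed
ifconfig_em0="inet 192.168.0.2/24"
ifconfig_em0_alias0="inet vhid 1 pass changeme alias 192.168.0.10/32"
ifconfig_em1="inet 10.0.0.1/30"	# private HAST replication link
```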
Post by Steve O'Hara-Smith
Post by Julien Cigar
Post by Steve O'Hara-Smith
On Fri, 3 Jun 2016 10:38:43 +0200
Post by Julien Cigar
[...]
--
Julien Cigar
Belgian Biodiversity Platform (http://www.biodiversity.be)
PGP fingerprint: EEF9 F697 4B68 D275 7B11 6A25 B2BB 3710 A204 23C0
No trees were killed in the creation of this message.
However, many electrons were terribly inconvenienced.
Valeri Galtsev
2016-06-03 14:34:24 UTC
Post by Julien Cigar
[...]
Thanks, I'll read a bit on ZFS this week-end ..!
My ultimate goal would be that the HAST storage survives an hard reboot/
unplugged network cable/... during an heavy I/O write, and that the
switch between the two nodes is transparent to the clients, without any
data loss of course ... feasible or utopian? Needless to say that what
I want to avoid at all cost is that the storage becomes corrupted and
unrecoverable..!
Sounds pretty much like a distributed file system solution. I tried one
(MooseFS) which I gave up on, and after I asked (on this list) for
advice about other options, the next candidate emerged: GlusterFS,
which I haven't had a chance to set up yet. You may want to search this
list's archives; the experts there gave me really good advice.

Valeri
++++++++++++++++++++++++++++++++++++++++
Valeri Galtsev
Sr System Administrator
Department of Astronomy and Astrophysics
Kavli Institute for Cosmological Physics
University of Chicago
Phone: 773-702-4247
++++++++++++++++++++++++++++++++++++++++
Julien Cigar
2016-06-03 15:01:20 UTC
Post by Valeri Galtsev
[...]
Sounds pretty much like distributed file system solution. I tried one
(moosefs) which I gave up on, and after I asked (on this list) for advise
about other options, next candidate for me emerged: glusterfs, which I
hadn't chance to set up yet. You may want to search this list archives,
those were really good advises that experts gave me.
sorry but: I avoid distributed FS like the plague :)
--
Julien Cigar
Belgian Biodiversity Platform (http://www.biodiversity.be)
PGP fingerprint: EEF9 F697 4B68 D275 7B11 6A25 B2BB 3710 A204 23C0
No trees were killed in the creation of this message.
However, many electrons were terribly inconvenienced.
Steve O'Hara-Smith
2016-06-03 15:14:07 UTC
On Fri, 3 Jun 2016 17:01:20 +0200
Post by Julien Cigar
sorry but: I avoid distributed FS like the plague :)
The only one I'd rely on is commercial and far from cheap; also, I
work on it, so I'm not going to plug (or even name) it here. I have
high hopes for HAMMER2, but I'm not holding my breath for Matt to
finish it.
--
Steve O'Hara-Smith <***@sohara.org>