Discussion:
ZFS: Is 'zpool add' really irreversible?
Yuri
2016-06-12 20:56:18 UTC
I added a device to the ZFS pool with the 'zpool add' command, and now when
I try to remove it:
# zpool remove xpool ada1
cannot remove ada1: only inactive hot spares, cache, top-level, or
log devices can be removed


Some messages from 2008 suggest that this can't be undone, and "work is
being done to add this capability".

So is it still irreversible, or does FreeBSD just have an old version of ZFS?

It is very surprising that something as simple as that can't be undone.


I admit I don't know much about ZFS; I only use it on one disk.


FreeBSD 10.3


Yuri
Brandon J. Wandersee
2016-06-12 21:58:19 UTC
Post by Yuri
It is very surprising that something as simple as that can't be undone.
`zpool add` adds *virtual* devices to a *pool*, while `zpool attach` adds
physical devices to a mirrored virtual device. So individual disks can be
added to and removed from mirrored virtual devices, but virtual devices
cannot be removed from a pool.
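To make the distinction concrete, here is a sketch you can try safely with
file-backed vdevs (the pool name and file paths are made up; it requires root
and a system with ZFS support):

```shell
# Two 64 MB backing files; file vdevs exist only for experiments like this.
truncate -s 64m /tmp/vdev1 /tmp/vdev2

# 'zpool add' creates a second TOP-LEVEL vdev -- permanent once added.
zpool create testpool /tmp/vdev1
zpool add testpool /tmp/vdev2
zpool remove testpool /tmp/vdev2    # fails: not a hot spare, cache, or log device
zpool destroy testpool

# 'zpool attach' instead mirrors an existing vdev -- and that IS reversible.
zpool create testpool /tmp/vdev1
zpool attach testpool /tmp/vdev1 /tmp/vdev2
zpool detach testpool /tmp/vdev2    # a mirror can be reduced again
zpool destroy testpool

rm /tmp/vdev1 /tmp/vdev2
```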

A ZFS pool is striped across all virtual devices included in the pool,
with data and metadata in the pool distributed across the physical
devices that make up those virtual devices, in order to improve
performance, redundancy, or both. Adding a virtual device doesn't merely
make more space available---that virtual device is actually in use from
the moment it is added, with new writes allocated across it immediately.

From the perspective of ZFS, removing a virtual device would be similar
in effect to removing a drive from your machine, sawing it in half, and
putting it back in. There's still data on that half-disk, but you'll
never get to it. ZFS runs into exactly this every time a pool is imported:
it searches for every device that belongs to the pool and panics when
devices/data that should exist do not.

So yes, once you've added a virtual device to a pool it is a permanent
part of the pool. You should plan out your storage to accommodate your
needs for the foreseeable future---typically, you would actually make
the pool larger than what you need right now in anticipation of
eventually adding more data. ZFS was essentially designed as a long-term
use, "archival" filesystem; shrinking a pool isn't something that would
ever really happen in situations where ZFS is appropriate.
--
:: Brandon J. Wandersee
:: ***@gmail.com
:: --------------------------------------------------
:: 'The best design is as little design as possible.'
:: --- Dieter Rams ----------------------------------
Yuri
2016-06-12 22:14:12 UTC
`zpool add` adds *virtual* devices to a *pool*, while `zpool attach` adds
physical devices to a mirrored virtual device. So individual disks can be
added to and removed from mirrored virtual devices, but virtual devices
cannot be removed from a pool.
Thank you for your answer. I see that ZFS is designed this way.

But I can't say I like this part of the ZFS design. A pool isn't a
physical disk with a fixed size, but a combination of disks. People may
reasonably want to remove some disks in some layouts, due to failures,
etc., and ZFS just lacks the flexibility to do that.


Thanks!

Yuri
Matthew Seaman
2016-06-12 22:50:25 UTC
Post by Yuri
`zpool add` adds *virtual* devices to a *pool*, while `zpool attach` adds
physical devices to a mirrored virtual device. So individual disks can be
added to and removed from mirrored virtual devices, but virtual devices
cannot be removed from a pool.
Thank you for your answer. I see that ZFS is designed this way.
But I can't say I like this part of the ZFS design. A pool isn't a
physical disk with a fixed size, but a combination of disks. People may
reasonably want to remove some disks in some layouts, due to failures,
etc., and ZFS just lacks the flexibility to do that.
You should have seen Matt Ahrens's talk at BSDCan this year. (It might
be available on YouTube in the next few days -- it depends on whether it
was livestreamed or not. Unfortunately, while there were four lecture
tracks, there was only one set of video kit to livestream with...)

Suffice it to say though that there is already a fix for those 'Oh No! I
really didn't mean to type "zpool add" there' moments in the upstream
OpenZFS repo, which will be coming to a FreeBSD repository near you Real
Soon Now. You get a grace period within which you can undo that sort of
mistake.

Cheers,

Matthew
Brandon J. Wandersee
2016-06-13 00:47:17 UTC
People may reasonably want to remove some disks in some layouts, due
to failures, etc, and ZFS just lacks the flexibility to do that.
Obviously, if you're dealing with any sort of RAID you'll occasionally
be replacing failed disks; ZFS is no different, and you can swap out
disks while the system is running. You just can't remove a *virtual
device*. You can't just shuffle disks around willy-nilly, because you'd
effectively destroy the storage pool in the process. There's a minimum
number of disks that need to be attached, and that minimum changes as
you add virtual (not necessarily physical) devices to a
pool. Traditional RAID has the same sort of limitation: create a RAID 5
array out of three disks, then remove two disks. You've just destroyed
the array.
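Replacing a failed disk, by contrast, is routine. A sketch, assuming a pool
named tank with a failed ada1 and a fresh disk ada3 (names are made up):

```shell
zpool status tank              # shows ada1 as FAULTED or UNAVAIL
zpool replace tank ada1 ada3   # resilver rebuilds the redundant data onto ada3
zpool status tank              # the pool stays online while the resilver runs
```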

If you want to temporarily add a single disk to a system, you can just
create a second pool on it. There's no arbitrary limit to how many pools
a system can have. ZFS has real limitations, but they're not that strict.
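A sketch of that approach, with made-up names -- the extra disk becomes its
own independent pool, which can later be exported or destroyed without
touching the main pool:

```shell
zpool create scratch ada2      # separate pool on the temporary disk
zfs create scratch/work        # datasets on it behave as usual
# ... use /scratch/work ...
zpool export scratch           # or 'zpool destroy scratch' when done with it
```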
--
:: Brandon J. Wandersee
:: ***@gmail.com
:: --------------------------------------------------
:: 'The best design is as little design as possible.'
:: --- Dieter Rams ----------------------------------
Jeremy Faulkner
2016-06-13 12:47:52 UTC
Post by Matthew Seaman
Post by Yuri
`zpool add` adds *virtual* devices to a *pool*, while `zpool attach` adds
physical devices to a mirrored virtual device. So individual disks can be
added to and removed from mirrored virtual devices, but virtual devices
cannot be removed from a pool.
Thank you for your answer. I see that ZFS is designed this way.
But I can't say I like this part of the ZFS design. A pool isn't a
physical disk with a fixed size, but a combination of disks. People may
reasonably want to remove some disks in some layouts, due to failures,
etc., and ZFS just lacks the flexibility to do that.
You should have seen Matt Ahrens's talk at BSDCan this year. (It might
be available on YouTube in the next few days -- it depends on whether it
was livestreamed or not. Unfortunately, while there were four lecture
tracks, there was only one set of video kit to livestream with...)
Suffice it to say though that there is already a fix for those 'Oh No! I
really didn't mean to type "zpool add" there' moments in the upstream
OpenZFS repo, which will be coming to a FreeBSD repository near you Real
Soon Now. You get a grace period within which you can undo that sort of
mistake.
Cheers,
Matthew
It appears that multiple days of videos got merged as there are talks
from the devsummit merged with talks from day one of BSDCan. Here's
Matt's talk:
http://youtu.be/AOidjSS7Hsg



Jeremy Faulkner
Matthew Seaman
2016-06-13 14:25:21 UTC
Post by Jeremy Faulkner
It appears that multiple days of videos got merged as there are talks
from the devsummit merged with talks from day one of BSDCan. Here's
http://youtu.be/AOidjSS7Hsg
Of course, I'm mixing my events up. Matt's talk was great, but didn't
cover the OP's point. That was actually the ZFS BoF hosted by Matt
Ahrens and Allan Jude.

(I blame an unholy mix of conference-derived sleep deprivation and jet
lag...)

Cheers,

Matthew
Matthew Ahrens
2016-06-16 16:10:24 UTC
*Matthew Seaman Wrote:*
Post by Matthew Seaman
Suffice it to say though that there is already a fix for those 'Oh No! I
really didn't mean to type "zpool add" there' moments in the upstream
OpenZFS repo
That's unfortunately not the case. Device removal has been implemented in
Delphix but not yet upstreamed to OpenZFS / illumos. We hope to upstream
it in the next 6 months.
Post by Matthew Seaman
You get a grace period within which you can undo that sort of
mistake.
With device removal, there is no pre-set "grace period"; you can remove a
device even long after it has been added (though of course it might have
more data stored on it which will take longer to move).

--matt
Steve O'Hara-Smith
2016-06-16 17:18:56 UTC
On Thu, 16 Jun 2016 12:10:24 -0400
Post by Matthew Ahrens
*Matthew Seaman Wrote:*
Post by Matthew Seaman
Suffice it to say though that there is already a fix for those 'Oh No! I
really didn't mean to type "zpool add" there' moments in the upstream
OpenZFS repo
That's unfortunately not the case. Device removal has been implemented in
Delphix but not yet upstreamed to OpenZFS / illumos. We hope to upstream
it in the next 6 months.
Very nice. Now all we need is the ability to extend raidz(x) vdevs, with
some kind of background restripe of existing data, and all sane
manipulations become possible. I'm not holding my breath for that one, and
would not be surprised if it never happens, but it would be handy.
--
Steve O'Hara-Smith <***@sohara.org>