Discussion:
ZFS migration - New pool lost after reboot
Sebastian Wolfgarten
2016-04-28 21:14:19 UTC
Dear all,

I have a bit of an issue and I hope you are able to point me in the right direction for solving this.

Following a hard disk failure I accidentally created a striped ZFS pool (instead of a mirror) and I am now trying to turn it back into a mirrored pool via a temporary hard disk I put into the server. This relates to a post I made some time ago (https://groups.google.com/forum/#!topic/mailing.freebsd.questions/qT-2WTscBqM).

ada0 is the replacement disk, and I would like to use it to rebuild my mirrored pool in line with some instructions I found online (see https://blog.grem.de/sysadmin/Shrinking-ZFS-Pool-2014-05-29-21-00.html); however, I am experiencing some problems. Here is what I did:

0) Existing zroot pool to migrate

zroot/ROOT 126G 5,02T 96K none
zroot/ROOT/default 126G 5,02T 126G /
zroot/tmp 9,69M 5,02T 9,69M /tmp
zroot/usr 34,8G 5,02T 96K /usr
zroot/usr/home 30,2G 5,02T 30,2G /usr/home
zroot/usr/ports 4,58G 5,02T 4,58G /usr/ports
zroot/usr/src 6,64M 5,02T 6,64M /usr/src
zroot/var 92,7G 5,02T 96K /var
zroot/var/crash 96K 5,02T 96K /var/crash
zroot/var/log 412M 5,02T 412M /var/log
zroot/var/mail 92,3G 5,02T 92,3G /var/mail
zroot/var/tmp 100K 5,02T 100K /var/tmp

1) Create required partitions on temporary hard disk ada0
gpart create -s GPT ada0
gpart add -t freebsd-boot -s 128 ada0
gpart add -t freebsd-swap -s 4G -l newswap ada0
gpart add -t freebsd-zfs -l newdisk ada0
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0
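To double-check the layout before creating the pool, something like this should list the three partitions together with the newswap/newdisk labels:

gpart show -l ada0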

2) Create new pool (newpool)

zpool create -o cachefile=/tmp/zpool.cache newpool gpt/newdisk

3) Create snapshot of existing zroot pool and copy it over to new pool
zfs snapshot -r zroot@movedata
zfs send -vR zroot@movedata | zfs receive -vFd newpool
zfs destroy -r zroot@movedata
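To confirm that the whole dataset tree made it across, something like this should show the same structure on newpool as on zroot:

zfs list -r newpool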

4) Make the new pool bootable

zpool set bootfs=newpool/ROOT/default newpool
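A quick way to verify that the property took effect (it should report newpool/ROOT/default):

zpool get bootfs newpool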

5) Mount new pool and prepare for reboot

cp /tmp/zpool.cache /tmp/newpool.cache
zpool export newpool
zpool import -c /tmp/newpool.cache -R /mnt newpool
cp /tmp/newpool.cache /mnt/boot/zfs/zpool.cache
zfs set mountpoint=/ newpool/ROOT
reboot

When I execute the steps above, the new pool is not imported automatically after reboot and as such the system boots from the old zpool. Furthermore, the new pool only contains /tmp, /var and /usr when I import it manually afterwards - any ideas why the full structure of the zroot pool is not copied? And any ideas on how to get the new pool imported permanently?

Lastly, if somebody has (better) instructions for how to migrate a ZFS pool, please let me know.

Many thanks.

Best regards
Sebastian
Matthias Fechner
2016-04-29 08:25:24 UTC
Post by Sebastian Wolfgarten
5) Mount new pool and prepare for reboot
cp /tmp/zpool.cache /tmp/newpool.cache
zpool export newpool
zpool import -c /tmp/newpool.cache -R /mnt newpool
cp /tmp/newpool.cache /mnt/boot/zfs/zpool.cache
zfs set mountpoint=/ newpool/ROOT
reboot
I think you forgot to adapt vfs.root.mountfrom= in /boot/loader.conf on
the new pool?
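Something along these lines in the new pool's /boot/loader.conf (dataset name adjusted to your layout):

zfs_load="YES"
vfs.root.mountfrom="zfs:newpool/ROOT/default"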



Gruß
Matthias
--
"Programming today is a race between software engineers striving to
build bigger and better idiot-proof programs, and the universe trying to
produce bigger and better idiots. So far, the universe is winning." --
Rich Cook
Sebastian Wolfgarten
2016-05-02 19:43:43 UTC
Hi Matthias,
dear list,

I have built a new VM to test this further without affecting my live machine. When doing all these steps (including the amendment of loader.conf on the new pool), my system still boots up with the old pool. Any ideas why?

Here is what I did:

1) Create required partitions on temporary hard disk ada2
gpart create -s GPT ada2
gpart add -t freebsd-boot -s 128 ada2
gpart add -t freebsd-swap -s 4G -l newswap ada2
gpart add -t freebsd-zfs -l newdisk ada2
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada2

2) Create new pool (newpool)

zpool create -o cachefile=/tmp/zpool.cache newpool gpt/newdisk

3) Create snapshot of existing zroot pool and copy it over to new pool
zfs snapshot -r zroot@movedata
zfs send -vR zroot@movedata | zfs receive -vFd newpool
zfs destroy -r zroot@movedata

4) Make the new pool bootable

zpool set bootfs=newpool/ROOT/default newpool

5) Mount new pool and prepare for reboot

cp /tmp/zpool.cache /tmp/newpool.cache
zpool export newpool
zpool import -c /tmp/newpool.cache -R /mnt newpool
cp /tmp/newpool.cache /mnt/boot/zfs/zpool.cache
in /mnt/boot/loader.conf, changed the value of kern.geom.label.gptid.enable from "0" to "2"
zfs set mountpoint=/ newpool/ROOT
reboot

After the reboot, the machine is still running off the old (striped) zroot pool, but I can mount the newpool without any problems:

root@vm:~ # cat /boot/loader.conf
kern.geom.label.gptid.enable="0"
zfs_load="YES"
root@vm:~ # zpool import -c /tmp/newpool.cache -R /mnt newpool
root@vm:~ # cd /mnt
root@vm:/mnt # ls -la
total 50
drwxr-xr-x 19 root wheel 26 May 2 23:33 .
drwxr-xr-x 18 root wheel 25 May 2 23:37 ..
-rw-r--r-- 2 root wheel 966 Mar 25 04:52 .cshrc
-rw-r--r-- 2 root wheel 254 Mar 25 04:52 .profile
-rw------- 1 root wheel 1024 May 2 01:45 .rnd
-r--r--r-- 1 root wheel 6197 Mar 25 04:52 COPYRIGHT
drwxr-xr-x 2 root wheel 47 Mar 25 04:51 bin
-rw-r--r-- 1 root wheel 9 May 2 23:27 bla
drwxr-xr-x 8 root wheel 47 May 2 01:44 boot
drwxr-xr-x 2 root wheel 2 May 2 01:32 dev
-rw------- 1 root wheel 4096 May 2 23:21 entropy
drwxr-xr-x 23 root wheel 107 May 2 01:46 etc
drwxr-xr-x 3 root wheel 52 Mar 25 04:52 lib
drwxr-xr-x 3 root wheel 4 Mar 25 04:51 libexec
drwxr-xr-x 2 root wheel 2 Mar 25 04:51 media
drwxr-xr-x 2 root wheel 2 Mar 25 04:51 mnt
drwxr-xr-x 2 root wheel 2 May 2 23:33 newpool
dr-xr-xr-x 2 root wheel 2 Mar 25 04:51 proc
drwxr-xr-x 2 root wheel 147 Mar 25 04:52 rescue
drwxr-xr-x 2 root wheel 7 May 2 23:27 root
drwxr-xr-x 2 root wheel 133 Mar 25 04:52 sbin
lrwxr-xr-x 1 root wheel 11 Mar 25 04:52 sys -> usr/src/sys
drwxrwxrwt 6 root wheel 7 May 2 23:33 tmp
drwxr-xr-x 16 root wheel 16 Mar 25 04:52 usr
drwxr-xr-x 24 root wheel 24 May 2 23:21 var
drwxr-xr-x 2 root wheel 2 May 2 01:32 zroot
root@vm:/mnt # cd boot
root@vm:/mnt/boot # cat loader.conf
kern.geom.label.gptid.enable="2"
zfs_load="YES"

My question is: How do I make my system permanently boot off the newpool such that I can destroy the existing zroot one?

Many thanks for your help, it is really appreciated.

Best regards
Sebastian
Sebastian Wolfgarten
2016-05-02 20:42:47 UTC
Hi,

just to follow up on my own email from earlier - I managed to get the new pool booting by amending /boot/loader.conf as follows:

root@vm:~ # cat /boot/loader.conf
vfs.root.mountfrom="zfs:newpool/ROOT/default"
kern.geom.label.gptid.enable="2"
zfs_load="YES"

However, when rebooting I can see it is using the new pool, but I am running into issues because it cannot seem to find some essential files in /usr:

Mounting local file systems
eval: zfs not found
eval: touch not found
/etc/rc: cannot create /dev/null: No such file or directory
/etc/rc: date: not found

Here is what "zfs list" looks like:

root@vm:~ # zfs list
NAME USED AVAIL REFER MOUNTPOINT
newpool 385M 5.41G 19K /mnt/zroot
newpool/ROOT 385M 5.41G 19K /mnt
newpool/ROOT/default 385M 5.41G 385M /mnt
newpool/tmp 21K 5.41G 21K /mnt/tmp
newpool/usr 76K 5.41G 19K /mnt/usr
newpool/usr/home 19K 5.41G 19K /mnt/usr/home
newpool/usr/ports 19K 5.41G 19K /mnt/usr/ports
newpool/usr/src 19K 5.41G 19K /mnt/usr/src
newpool/var 139K 5.41G 19K /mnt/var
newpool/var/audit 19K 5.41G 19K /mnt/var/audit
newpool/var/crash 19K 5.41G 19K /mnt/var/crash
newpool/var/log 44K 5.41G 44K /mnt/var/log
newpool/var/mail 19K 5.41G 19K /mnt/var/mail
newpool/var/tmp 19K 5.41G 19K /mnt/var/tmp
zroot 524M 26.4G 96K /zroot
zroot/ROOT 522M 26.4G 96K none
zroot/ROOT/default 522M 26.4G 522M /
zroot/tmp 74.5K 26.4G 74.5K /tmp
zroot/usr 384K 26.4G 96K /usr
zroot/usr/home 96K 26.4G 96K /usr/home
zroot/usr/ports 96K 26.4G 96K /usr/ports
zroot/usr/src 96K 26.4G 96K /usr/src
zroot/var 580K 26.4G 96K /var
zroot/var/audit 96K 26.4G 96K /var/audit
zroot/var/crash 96K 26.4G 96K /var/crash
zroot/var/log 103K 26.4G 103K /var/log
zroot/var/mail 96K 26.4G 96K /var/mail
zroot/var/tmp 92.5K 26.4G 92.5K /var/tmp

I am assuming I have to amend the zfs parameters for the mount points but I can’t seem to figure out what’s wrong. I tried things like:

zfs set mountpoint=/usr newpool/usr
zfs set mountpoint=/tmp newpool/tmp
zfs set mountpoint=/var newpool/var

Unfortunately this did not solve the issue. Any ideas?

Many thanks.

Best regards
Sebastian
Matthias Fechner
2016-05-02 21:45:25 UTC
Post by Sebastian Wolfgarten
NAME USED AVAIL REFER MOUNTPOINT
newpool 385M 5.41G 19K /mnt/zroot
newpool/ROOT 385M 5.41G 19K /mnt
newpool/ROOT/default 385M 5.41G 385M /mnt
newpool/tmp 21K 5.41G 21K /mnt/tmp
newpool/usr 76K 5.41G 19K /mnt/usr
newpool/usr/home 19K 5.41G 19K /mnt/usr/home
newpool/usr/ports 19K 5.41G 19K /mnt/usr/ports
newpool/usr/src 19K 5.41G 19K /mnt/usr/src
newpool/var 139K 5.41G 19K /mnt/var
newpool/var/audit 19K 5.41G 19K /mnt/var/audit
newpool/var/crash 19K 5.41G 19K /mnt/var/crash
newpool/var/log 44K 5.41G 44K /mnt/var/log
newpool/var/mail 19K 5.41G 19K /mnt/var/mail
newpool/var/tmp 19K 5.41G 19K /mnt/var/tmp
zroot 524M 26.4G 96K /zroot
zroot/ROOT 522M 26.4G 96K none
zroot/ROOT/default 522M 26.4G 522M /
zroot/tmp 74.5K 26.4G 74.5K /tmp
zroot/usr 384K 26.4G 96K /usr
zroot/usr/home 96K 26.4G 96K /usr/home
zroot/usr/ports 96K 26.4G 96K /usr/ports
zroot/usr/src 96K 26.4G 96K /usr/src
zroot/var 580K 26.4G 96K /var
zroot/var/audit 96K 26.4G 96K /var/audit
zroot/var/crash 96K 26.4G 96K /var/crash
zroot/var/log 103K 26.4G 103K /var/log
zroot/var/mail 96K 26.4G 96K /var/mail
zroot/var/tmp 92.5K 26.4G 92.5K /var/tmp
zfs set mountpoint=/usr newpool/usr
zfs set mountpoint=/tmp newpool/tmp
zfs set mountpoint=/var newpool/var
zfs set mountpoint=none newpool/ROOT
zfs set mountpoint=/ newpool/ROOT/default
zfs set mountpoint=/tmp newpool/tmp
zfs set mountpoint=/usr newpool/usr
zfs set mountpoint=/var newpool/var
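The result can then be checked with something like:

zfs get -r mountpoint newpool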


Gruß
Matthias
--
"Programming today is a race between software engineers striving to
build bigger and better idiot-proof programs, and the universe trying to
produce bigger and better idiots. So far, the universe is winning." --
Rich Cook
Sebastian Wolfgarten
2016-05-03 21:07:35 UTC
Dear all,

thanks to Matthias I already fixed most of the issues but there is one thing I cannot fix yet. When trying to set the mount point for the / file system, I am getting strange errors:

root@vm:~ # zfs set mountpoint=none newpool/ROOT
cannot open 'newpool/ROOT': dataset does not exist
root@vm:~ # zpool import -c /tmp/newpool.cache -R /mnt newpool
root@vm:~ # zfs set mountpoint=none newpool/ROOT
cannot unmount '/mnt': Device busy
root@vm:~ # zfs set mountpoint=/ newpool/ROOT/default
cannot unmount '/mnt': Device busy
root@vm:~ # zfs set mountpoint=/tmp newpool/tmp
root@vm:~ # zfs set mountpoint=/usr newpool/usr
root@vm:~ # zfs set mountpoint=/var newpool/var
root@vm:~ # zfs set mountpoint=none newpool/ROOT
cannot unmount '/mnt': Device busy
root@vm:~ # zpool export newpool
root@vm:~ # zfs set mountpoint=none newpool/ROOT
cannot open 'newpool/ROOT': dataset does not exist

So basically, when the pool is not mounted the system says "dataset does not exist", but when I mount it and try to change the mount point it comes back with "Device busy". Any ideas on how I am supposed to set the mount point for the root file system (first and last line of the commands listed above)?

Many thanks.

Kind regards
Sebastian
Sebastian Wolfgarten
2016-05-03 21:56:45 UTC
Hi,

to solve this one (for the archives maybe):

root@vm:~ # zpool import -N newpool
root@vm:~ # zfs set mountpoint=/ newpool/ROOT/default
zfs set mountpoint=none newpool/ROOT
zfs set mountpoint=/tmp newpool/tmp
zfs set mountpoint=/usr newpool/usr
zfs set mountpoint=/var newpool/var

reboot

Machine now starts like a charm with the new pool. Amazing.

Thanks for all your help guys, this is very much appreciated.

Best regards
Sebastian
Kevin P. Neal
2016-05-04 13:03:32 UTC
Post by Sebastian Wolfgarten
Dear all,
cannot open 'newpool/ROOT': dataset does not exist
cannot unmount '/mnt': Device busy
cannot unmount '/mnt': Device busy
cannot unmount '/mnt': Device busy
cannot open 'newpool/ROOT': dataset does not exist
So basically, when the pool is not mounted the system says "dataset does not exist" but when I mount it and try to change the mount point it comes back with "Device busy". Any ideas on how I am supposed to set the mount point for the root file system (first and last line of the commands listed above)?
There's a middle ground you left out of your analysis.

You can import a pool and not mount it. Use the "-N" option to zpool import.
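Something like this should let you fix the mountpoints without anything getting mounted:

zpool import -N newpool
zfs set mountpoint=none newpool/ROOT
zfs set mountpoint=/ newpool/ROOT/default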
--
Kevin P. Neal http://www.pobox.com/~kpn/

"Good grief, I've just noticed I've typed in a rant. Sorry chaps!"
Keir Finlow Bates, circa 1998
s***@wolfgarten.com
2016-05-04 13:11:58 UTC
Dear Kevin,

thanks a lot for your follow-up. Indeed the -N option solved my dilemma.

Best regards
Sebastian