http://www.howtoforge.com/a-beginners-guide-to-btrfs
This guide shows how to work with the btrfs file system on Linux. It covers creating and mounting btrfs file systems, resizing btrfs file systems online, adding and removing devices, changing RAID levels, creating subvolumes and snapshots, using compression and other things. btrfs is still marked as experimental, but all those features make it a very interesting and flexible file system that should be taken into consideration when you look for the right file system.
I do not issue any guarantee that this will work for you!
1 Preliminary Note
I'm using an Ubuntu 12.10 system here with four additional, as yet unformatted hard drives (/dev/sdb, /dev/sdc, /dev/sdd, and /dev/sde). I will use these four hard drives to demonstrate btrfs usage.
A note for Ubuntu users:
Because we must run all the steps from this tutorial with root privileges, we can either prepend all commands in this tutorial with the string sudo, or we become root right now by typing
sudo su
2 Installing btrfs-tools
Before we start using btrfs, we must install the btrfs-tools package:
apt-get install btrfs-tools
3 Creating btrfs File Systems (RAID0, RAID1)
One great feature of btrfs is that you can create btrfs file systems on unformatted hard drives, i.e., you don't have to use tools like fdisk to partition a hard drive.
To create a btrfs file system on /dev/sdb, /dev/sdc, and /dev/sdd, we simply run:
mkfs.btrfs /dev/sdb /dev/sdc /dev/sdd
Without any further switches, this file system uses RAID0 for data (non-redundant) and RAID1 for metadata (redundant). If data is lost for some reason (e.g. failed sectors on your hard drive), btrfs can use the metadata to try to rebuild that data.
root@server1:~# mkfs.btrfs /dev/sdb /dev/sdc /dev/sdd
WARNING! - Btrfs Btrfs v0.19 IS EXPERIMENTAL
WARNING! - see http://btrfs.wiki.kernel.org before using
adding device /dev/sdc id 2
adding device /dev/sdd id 3
fs created label (null) on /dev/sdb
nodesize 4096 leafsize 4096 sectorsize 4096 size 15.00GB
Btrfs Btrfs v0.19
root@server1:~#
If you want to use btrfs with just one hard drive and don't want
metadata to be redundant (attention: this is dangerous - if your
metadata is lost, your data is lost as well), you'd use the -m single switch (-m refers to metadata, -d to data):
mkfs.btrfs -m single /dev/sdb
If you want to do the same with multiple hard drives (i.e., non-redundant metadata), you'd use -m raid0 instead of -m single:
mkfs.btrfs -m raid0 /dev/sdb /dev/sdc /dev/sdd
If you want data to be redundant and metadata to be non-redundant, you'd use the following command:
mkfs.btrfs -m raid0 -d raid1 /dev/sdb /dev/sdc /dev/sdd
If you want both data and metadata to be redundant, you'd use this
command (RAID1 is the default for metadata, that's why we don't have to
specify it here):
mkfs.btrfs -d raid1 /dev/sdb /dev/sdc /dev/sdd
It is also possible to use RAID10 (-m raid10 or -d raid10),
but then you need at least four hard drives. For RAID1, you need at
least two hard drives, but it is not important that both drives have
exactly the same size (which is another great thing about btrfs).
To get details about your filesystem, you can use...
btrfs filesystem show /dev/sdb
... which is equivalent to...
btrfs filesystem show /dev/sdc
... and...
btrfs filesystem show /dev/sdd
... because you can use any hard drive which is part of the btrfs file system.
root@server1:~# btrfs filesystem show /dev/sdb
failed to read /dev/sr0
Label: none uuid: 21f33aaa-b2b3-464b-8cf1-0f8cc3689529
Total devices 3 FS bytes used 28.00KB
devid 3 size 5.00GB used 1.01GB path /dev/sdd
devid 2 size 5.00GB used 1.01GB path /dev/sdc
devid 1 size 5.00GB used 2.02GB path /dev/sdb
Btrfs Btrfs v0.19
root@server1:~#
To get a list of all btrfs file systems, just leave out the device:
btrfs filesystem show
root@server1:~# btrfs filesystem show
failed to read /dev/sr0
Label: none uuid: 21f33aaa-b2b3-464b-8cf1-0f8cc3689529
Total devices 3 FS bytes used 28.00KB
devid 3 size 5.00GB used 1.01GB path /dev/sdd
devid 2 size 5.00GB used 1.01GB path /dev/sdc
devid 1 size 5.00GB used 2.02GB path /dev/sdb
Btrfs Btrfs v0.19
root@server1:~#
4 Mounting btrfs File Systems
Our btrfs file system can now be mounted like this:
mount /dev/sdb /mnt
Again, this is equivalent to...
mount /dev/sdc /mnt
... and:
mount /dev/sdd /mnt
In your /etc/fstab, this would look as follows (if you want to have the file system mounted automatically at boot time):
vi /etc/fstab
[...]
/dev/sdb /mnt btrfs defaults 0 1
[...]
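As an aside (this is my own suggestion, not from the original article): device names like /dev/sdb can change between boots, so a more robust fstab entry references the file system by UUID instead. A sketch:

```shell
# Look up the UUID of the btrfs file system (any member device works):
blkid /dev/sdb
# Then reference it in /etc/fstab; the UUID below is just the one from
# the example output in this article, used here as a placeholder:
# UUID=21f33aaa-b2b3-464b-8cf1-0f8cc3689529 /mnt btrfs defaults 0 1
```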
Run...
df -h
... to see your new file system:
root@server1:~# df -h
Filesystem Size Used Avail Use% Mounted on
udev 489M 4.0K 489M 1% /dev
tmpfs 200M 308K 199M 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 498M 0 498M 0% /run/shm
none 100M 0 100M 0% /run/user
/dev/mapper/server1-root 27G 1.1G 25G 5% /
/dev/sda1 228M 29M 188M 14% /boot
/dev/sdb 15G 56K 10G 1% /mnt
root@server1:~#
The command...
btrfs filesystem df /mnt
... gives you some more details about your data and metadata (e.g. RAID levels):
root@server1:~# btrfs filesystem df /mnt
Data, RAID1: total=1.00GB, used=0.00
Data: total=8.00MB, used=0.00
System, RAID1: total=8.00MB, used=4.00KB
System: total=4.00MB, used=0.00
Metadata, RAID1: total=1.00GB, used=24.00KB
Metadata: total=8.00MB, used=0.00
root@server1:~#
5 Using Compression With btrfs
btrfs file systems can make use of zlib (default) and lzo compression, which means that compressible files are stored in compressed form on the hard drive, saving space. zlib has a higher compression ratio, while lzo is faster and causes less CPU load. Using compression, especially lzo compression, can improve throughput performance. Please note that btrfs will not compress files that have already been compressed at application level (such as videos, music, images, etc.).
You can mount a btrfs file system with lzo compression as follows:
mount -o compress=lzo /dev/sdb /mnt
For zlib compression, you'd either use...
mount -o compress=zlib /dev/sdb /mnt
... or...
mount -o compress /dev/sdb /mnt
... since zlib is the default compression algorithm. In /etc/fstab, this would look as follows:
vi /etc/fstab
[...]
/dev/sdb /mnt btrfs defaults,compress=lzo 0 1
[...]
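Note that the compress mount option only affects data written after mounting; files that already exist stay uncompressed. A sketch for recompressing existing files (this is my own addition and assumes a newer btrfs-progs version whose defragment command supports the -c compression flag; check your version first):

```shell
# Recursively defragment and recompress existing files with lzo
# (requires root and a btrfs-progs release that supports -c):
btrfs filesystem defragment -r -clzo /mnt
```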
6 Rescuing A Dead btrfs File System
If you have a dead btrfs file system, you can try to mount it with the recovery mount option, which seeks a usable copy of the tree root:
mount -o recovery /dev/sdb /mnt
7 Resizing btrfs File Systems Online
btrfs file systems can be resized online, i.e., there's no need to unmount the partition or to reboot into a rescue system.
To decrease our /mnt volume by 2GB, we run:
btrfs filesystem resize -2g /mnt
(Instead of g for GB, you can also use m for MB, e.g.
btrfs filesystem resize -500m /mnt
)
root@server1:~# btrfs filesystem resize -2g /mnt
Resize '/mnt' of '-2g'
root@server1:~#
Let's take a look at our /mnt partition...
df -h
... and we should see that it has a size of 13GB instead of 15GB:
root@server1:~# df -h
Filesystem Size Used Avail Use% Mounted on
udev 489M 4.0K 489M 1% /dev
tmpfs 200M 308K 199M 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 498M 0 498M 0% /run/shm
none 100M 0 100M 0% /run/user
/dev/mapper/server1-root 27G 1.1G 25G 5% /
/dev/sda1 228M 29M 188M 14% /boot
/dev/sdb 13G 312K 10G 1% /mnt
root@server1:~#
To increase the /mnt partition by 1GB, run:
btrfs filesystem resize +1g /mnt
df -h
root@server1:~# df -h
Filesystem Size Used Avail Use% Mounted on
udev 489M 4.0K 489M 1% /dev
tmpfs 200M 308K 199M 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 498M 0 498M 0% /run/shm
none 100M 0 100M 0% /run/user
/dev/mapper/server1-root 27G 1.1G 25G 5% /
/dev/sda1 228M 29M 188M 14% /boot
/dev/sdb 14G 312K 10G 1% /mnt
root@server1:~#
To increase the partition to the max. available space, run:
btrfs filesystem resize max /mnt
df -h
root@server1:~# df -h
Filesystem Size Used Avail Use% Mounted on
udev 489M 4.0K 489M 1% /dev
tmpfs 200M 308K 199M 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 498M 0 498M 0% /run/shm
none 100M 0 100M 0% /run/user
/dev/mapper/server1-root 27G 1.1G 25G 5% /
/dev/sda1 228M 29M 188M 14% /boot
/dev/sdb 15G 312K 10G 1% /mnt
root@server1:~#
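Besides the relative +/- sizes and max used above, the resize command also accepts an absolute target size, and on recent btrfs-progs you can address a single member device by its devid (a sketch, my own addition; verify the syntax against your btrfs-progs version):

```shell
# Set the file system to an absolute size of 10GB:
btrfs filesystem resize 10g /mnt
# Grow only the member with devid 2 (as shown by 'btrfs filesystem show')
# to its maximum size - devid:size syntax needs a recent btrfs-progs:
btrfs filesystem resize 2:max /mnt
```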
8 Adding/Deleting Hard Drives To/From A btrfs File System
Now we want to add /dev/sde to our btrfs file system. While the file system is mounted to /mnt, we simply run:
btrfs device add /dev/sde /mnt
Let's take a look at the file system afterwards:
btrfs filesystem show /dev/sdb
root@server1:~# btrfs filesystem show /dev/sdb
failed to read /dev/sr0
Label: none uuid: 21f33aaa-b2b3-464b-8cf1-0f8cc3689529
Total devices 4 FS bytes used 156.00KB
devid 4 size 5.00GB used 0.00 path /dev/sde
devid 3 size 5.00GB used 1.01GB path /dev/sdd
devid 2 size 5.00GB used 1.01GB path /dev/sdc
devid 1 size 5.00GB used 2.02GB path /dev/sdb
Btrfs Btrfs v0.19
root@server1:~#
As you see, /dev/sde has been added, but
no space is being used on that device. If you are using a RAID level
other than 0, you should now do a filesystem balance so that data and
metadata get spread over all four devices:
btrfs filesystem balance /mnt
(Another syntax for the same command would be:
btrfs balance start /mnt
)
root@server1:~# btrfs filesystem balance /mnt
Done, had to relocate 5 out of 5 chunks
root@server1:~#
Let's take a look at our file system again:
btrfs filesystem show /dev/sdb
root@server1:~# btrfs filesystem show /dev/sdb
failed to read /dev/sr0
Label: none uuid: 21f33aaa-b2b3-464b-8cf1-0f8cc3689529
Total devices 4 FS bytes used 28.00KB
devid 4 size 5.00GB used 512.00MB path /dev/sde
devid 3 size 5.00GB used 32.00MB path /dev/sdd
devid 2 size 5.00GB used 512.00MB path /dev/sdc
devid 1 size 5.00GB used 36.00MB path /dev/sdb
Btrfs Btrfs v0.19
root@server1:~#
As you can see, data/metadata has been moved to /dev/sde.
To delete an intact hard drive, e.g. /dev/sdc, from the btrfs file system online, you can simply run:
btrfs device delete /dev/sdc /mnt
(This automatically does a rebalance of data/metadata, if necessary.)
While...
btrfs filesystem show /dev/sdb
... still lists /dev/sdc, the output of...
df -h
... shows the reduced size of the file system.
To remove a failed hard drive, unmount the file system first:
umount /mnt
Mount it in degraded mode:
mount -o degraded /dev/sdb /mnt
Remove the failed hard drive. If you use a RAID level that requires a
certain number of hard drives (e.g. two for RAID1 and four for RAID10),
you might have to add an intact replacement drive because you cannot go
below the minimum number of required drives.
If you have to add a replacement drive (e.g. /dev/sdf), do it as follows:
btrfs device add /dev/sdf /mnt
Once you are sure you have enough intact drives, run the following command to complete the replacement:
btrfs device delete missing /mnt
9 Changing The RAID Level
The RAID level of a btrfs file system can also be changed online. Let's assume we're using RAID0 for data and metadata and want to change to RAID1; this can be done as follows:
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt
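Once the balance has finished, you can check that the conversion took effect with the btrfs filesystem df command introduced earlier (the expected output described in the comment is an illustration, not captured from this system):

```shell
# Show block group profiles after the conversion balance:
btrfs filesystem df /mnt
# The Data and Metadata lines should now report RAID1 instead of RAID0.
```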
10 Creating Subvolumes
With btrfs, we can create subvolumes in volumes or other subvolumes, and we can take snapshots of these subvolumes or mount subvolumes instead of the top-level volume.
To create the subvolume /mnt/sv1 in the /mnt volume, we run:
btrfs subvolume create /mnt/sv1
This subvolume looks like a normal directory...
ls -l /mnt
root@server1:~# ls -l /mnt/
total 0
drwxr-xr-x 1 root root 0 Nov 21 16:06 sv1
root@server1:~#
... but it's a subvolume of /mnt (with the subvolid 265 in this case):
btrfs subvolume list /mnt
root@server1:~# btrfs subvolume list /mnt
ID 265 top level 5 path sv1
root@server1:~#
To create a subvolume of a subvolume (e.g. /mnt/sv1/sv12), run:
btrfs subvolume create /mnt/sv1/sv12
The command...
btrfs subvolume list /mnt
... lists now also the new subvolume:
root@server1:~# btrfs subvolume list /mnt
ID 265 top level 5 path sv1
ID 266 top level 5 path sv1/sv12
root@server1:~#
11 Mounting Subvolumes
When you mount the top-level volume, this also mounts any subvolume automatically. But with btrfs, it is also possible to mount a subvolume instead of the top-level volume.
For example, to mount the subvolume with the ID 266 (which we created in the last chapter) to the /mnt directory, first unmount the top-level volume...
umount /dev/sdb
... and then mount the subvolume like this:
mount -o subvolid=266 /dev/sdb /mnt
(Instead of the subvolid, you can also use its name from the btrfs subvolume list /mnt output:
mount -o subvol=sv1/sv12 /dev/sdb /mnt
)
To mount the default volume again, unmount /mnt...
umount /dev/sdb
... and run the mount command like this:
mount /dev/sdb /mnt
This is in fact equivalent to the command...
mount -o subvolid=0 /dev/sdb /mnt
... because the top-level volume has the subvolid 0.
If you want to make the subvolume with the subvolid 266 the default volume (so that you can mount it without any parameters), just run...
btrfs subvolume set-default 266 /mnt
... and then unmount/mount again:
umount /dev/sdb
mount /dev/sdb /mnt
Now the subvolume with the ID 266 is mounted to /mnt instead of the top-level volume.
If you've changed the default subvolume and want to mount the top-level volume again, you must either use the subvolid 0 with the mount command...
umount /dev/sdb
mount -o subvolid=0 /dev/sdb /mnt
... or make the top-level volume the default one again:
btrfs subvolume set-default 0 /mnt
Then unmount/mount again:
umount /dev/sdb
mount /dev/sdb /mnt
12 Deleting Subvolumes
Subvolumes can be deleted using their path while they are mounted. For example, the subvolume /mnt/sv1/sv12 can be deleted as follows:
btrfs subvolume delete /mnt/sv1/sv12
The command...
btrfs subvolume list /mnt
... shouldn't list the deleted subvolume anymore:
root@server1:~# btrfs subvolume list /mnt
ID 265 top level 5 path sv1
root@server1:~#
13 Creating Snapshots
One of the most useful btrfs features is that you can create snapshots of subvolumes online. This can be useful for doing rollbacks or creating consistent backups.
Let's create some test files in our /mnt/sv1 subvolume:
touch /mnt/sv1/test1 /mnt/sv1/test2
Now we take a snapshot called /mnt/sv1_snapshot of the /mnt/sv1 subvolume:
btrfs subvolume snapshot /mnt/sv1 /mnt/sv1_snapshot
If everything went well, we should find our test files in the snapshot as well:
ls -l /mnt/sv1_snapshot
root@server1:~# ls -l /mnt/sv1_snapshot
total 0
-rw-r--r-- 1 root root 0 Nov 21 16:23 test1
-rw-r--r-- 1 root root 0 Nov 21 16:23 test2
root@server1:~#
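Since the tutorial mentions rollbacks as a main use for snapshots, here is a minimal rollback sketch (my own example, not from the original article; it assumes no process is currently using /mnt/sv1):

```shell
# Roll /mnt/sv1 back to the state captured in /mnt/sv1_snapshot.
# Subvolumes can be renamed with mv, so set the damaged version aside:
mv /mnt/sv1 /mnt/sv1_broken
# Snapshots are writable subvolumes, so snapshot the snapshot back into place:
btrfs subvolume snapshot /mnt/sv1_snapshot /mnt/sv1
# Once you're satisfied, discard the old contents:
btrfs subvolume delete /mnt/sv1_broken
```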
14 Taking Snapshots Of Files
With btrfs, it's even possible to take a snapshot of a single file.
For example, to take a snapshot of the file /mnt/sv1/test1, you can run:
cp --reflink /mnt/sv1/test1 /mnt/sv1/test3
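As an aside (my own addition): cp --reflink only works on file systems with reflink support, such as btrfs. With --reflink=auto, coreutils cp falls back to a regular copy elsewhere, so the same command is safe on any file system. The paths below are throwaway examples:

```shell
# Create a small test file and copy it; on btrfs this is a reflink
# snapshot, on other file systems it silently becomes a normal copy:
echo "snapshot me" > /tmp/reflink-demo-src.txt
cp --reflink=auto /tmp/reflink-demo-src.txt /tmp/reflink-demo-dst.txt
cat /tmp/reflink-demo-dst.txt
# prints: snapshot me
```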
As long as the contents of /mnt/sv1/test1 don't change, the snapshot /mnt/sv1/test3 will not take up any additional space. Only if the original file /mnt/sv1/test1 is modified will the original contents be copied to the snapshot /mnt/sv1/test3.
15 Defragmentation
To defragment a btrfs file system, you can run:
btrfs filesystem defrag /mnt
Please note that this command is useful only on normal hard drives, not on solid state disks (SSDs)!
16 Converting An ext3/ext4 File System To btrfs
It is possible to convert an ext3 or ext4 file system to btrfs (and also to do a rollback). To do this for your system partition, you need to boot into a rescue system - for Ubuntu 12.10, I've written a tutorial about this: How To Convert An ext3/ext4 Root File System To btrfs On Ubuntu 12.10
For non-system partitions, this can be done without a reboot. In this example, I want to convert my ext4 partition /dev/sdb1 (mounted to /mnt) to btrfs:
First unmount the partition and run a file system check:
umount /mnt
fsck -f /dev/sdb1
Then do the conversion as follows:
btrfs-convert /dev/sdb1
root@server1:~# btrfs-convert /dev/sdb1
creating btrfs metadata.
creating ext2fs image file.
cleaning up system chunk.
conversion complete.
root@server1:~#
That's it - you can now mount the btrfs partition:
mount /dev/sdb1 /mnt
The conversion has created an ext2_saved subvolume with an image of the original partition:
btrfs subvolume list /mnt
root@server1:~# btrfs subvolume list /mnt
ID 256 top level 5 path ext2_saved
root@server1:~#
If you want to do a rollback, you must keep that subvolume. Otherwise, you can delete it to free up some space:
btrfs subvolume delete /mnt/ext2_saved
16.1 Doing A Rollback To ext3/ext4
Let's assume you're not happy with the result - this is how you can roll back to the original file system (ext3 or ext4).
The conversion should have created an ext2_saved subvolume with an image of the original partition:
btrfs subvolume list /mnt
root@server1:~# btrfs subvolume list /mnt
ID 256 top level 5 path ext2_saved
root@server1:~#
This image will be used to do the rollback.
Unmount the partition...
umount /mnt
... then do the rollback...
btrfs-convert -r /dev/sdb1
... and finally mount the original partition again:
mount /dev/sdb1 /mnt