Abstract
Another post on LVM, this time going over snapshots and backups. This is territory where many next generation file systems, especially ZFS, excel. There is an argument to be made for using ZFS rather than LVM; however, that debate will not be had in this post. This one simply goes over how to use LVM for those tasks.
Lab Setup
This lab will be run on a Void Linux VM, similar to the lab in my last post. The root drive for the VM is 30G; however, this time the LVM drive is only 5G, as we just need enough to prove the concept.
Additionally, before beginning, make sure to load the dm-snapshot kernel module. This can be done by running modprobe dm-snapshot as root. It can also be done automatically by putting the following line in the file /etc/modules-load.d/dm-snapshot.conf:
dm-snapshot
According to the Void Linux Handbook, putting that same file in /etc/modprobe.d should have the same effect, but it did not work in my testing.
Basic LVM Setup
Be sure to partition the drive(s) first. This can be done using either cfdisk or fdisk, with cfdisk being the easier of the two.
# Run as root
# Create physical volume on the drive
pvcreate /dev/sda1
# Create volume group with the drive
vgcreate DRIVE /dev/sda1
# Allocate 75% of the free space in volume group 'DRIVE'
lvcreate -l 75%FREE -n D01 DRIVE
# Create a file system on the new logical volume
mkfs.ext4 /dev/mapper/DRIVE-D01
Now that we have the logical volume made, let’s see how snapshots and backups work within LVM. As a general note before getting into the commands, it is much easier to prove that these commands have worked if there is some data on the logical volume. That way, when we delete the data and restore from a snapshot, we can verify that the data came back as it was on the original volume.
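As a minimal sketch of that idea, one way to seed verifiable data is to write a known file and record its checksum. The mount point and file names below are hypothetical stand-ins; substitute wherever DRIVE-D01 is actually mounted.

```shell
# MOUNTPOINT is a stand-in; point it at the real mount of /dev/mapper/DRIVE-D01
MOUNTPOINT="${MOUNTPOINT:-/tmp/d01}"
mkdir -p "$MOUNTPOINT"
echo "canary data" > "$MOUNTPOINT/canary.txt"
# Record a checksum so a later restore can be verified with `sha256sum -c`
( cd "$MOUNTPOINT" && sha256sum canary.txt ) > /tmp/canary.sha256
```

After a restore, change into the mount point and run `sha256sum -c /tmp/canary.sha256`; an `OK` confirms the data survived the round trip.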
Creating the snapshot
LVM has two kinds of snapshots: Copy on Write (CoW) snapshots and thin snapshots. Thin snapshots will not be covered here; rather, this post will be going over the CoW snapshots. CoW snapshots are created when a size for the snapshot is specified in the snapshot creation command. From the lvcreate man page:
COW (Copy On Write) snapshots are created when a size is specified. The size is allocated from space in the (volume group), and is the amount of space that can be used for saving COW blocks as writes occur to the origin or snapshot. The size chosen should depend upon the amount of writes that are expected; often 20% of the origin (logical volume) is enough. If COW space runs low, it can be extended with lvextend (shrinking is also allowed with lvreduce.) A small amount of the COW snapshot LV size is used to track COW block locations, so the full size is not available for COW data blocks. Use lvs to check how much space is used, and see --monitor to automatically extend the size to avoid running out of space.
The following command will create a 500 Megabyte copy on write snapshot of the previously created logical volume.
# Run as root
lvcreate --snapshot --size 500M --name backup01 DRIVE/D01
We can see the snapshot was taken by running lvs -a as root:
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
backup01 DRIVE swi-a-s--- 500.00m D01 0.01
D01 DRIVE owi-aos--- <3.75g
The size specified for the snapshot is not relative to the size of the origin volume, but rather describes the amount of changed data the snapshot can store. So, in this example, 500 megabytes of data can be modified on the origin volume; once that snapshot storage is full, the snapshot can no longer keep up with changes and is invalidated. The recommended snapshot size is 20% of the origin volume's size; the command below modifies the earlier snapshot command to use that value:
# Run as root
lvcreate --snapshot -l 20%ORIGIN --name backup01 DRIVE/D01
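The --monitor behaviour the man page mentions is driven by LVM's monitoring daemon, which can grow snapshots automatically before they fill. As a sketch, the relevant lvm.conf settings look like the following (the threshold and percentage values here are illustrative, not recommendations):

```
# /etc/lvm/lvm.conf, activation section -- illustrative values
activation {
    # Start auto-extending a snapshot once it is 70% full...
    snapshot_autoextend_threshold = 70
    # ...growing it by 20% of its current size each time
    snapshot_autoextend_percent = 20
}
```

With these set, a monitored snapshot is extended as it fills, which helps avoid the invalidation described above.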
Restoring to a previous snapshot
The following commands are how one might go about restoring a logical volume back to the state of the snapshot backup01:
# Run as root
lvconvert --merge /dev/mapper/DRIVE-backup01
# Deactivate volume group
vgchange -an DRIVE
# Re-activate volume group
vgchange -ay DRIVE
The biggest annoyance with LVM is the requirement to reload the volume group. If the volume group is not the root device for a running operating system, then it can be reloaded using the commands shown above, but if the root file system has to be reverted to a previous snapshot, then it requires that the machine be restarted as you cannot unload a currently running block device.
Backing up the snapshot with tar
It is also worth pointing out that the Arch Wiki page on LVM specifically notes that a copy on write snapshot is not a backup.
A [Copy on Write] snapshot is not a backup, because it does not make a second copy of the original data. For example, a damaged disk sector that affects original data also affects the snapshots. That said, a snapshot can be helpful while using other tools to make backups, as outlined below.
The way that this issue is generally addressed appears to be to mount the snapshot and copy it using something like tar or dd. I have included an example below of creating a backup of the snapshot using tar:
# Run as root
mkdir -p /backup/mount
mount /dev/mapper/DRIVE-backup01 /backup/mount
# Archive the snapshot's contents with paths relative to the mount point
tar -cf backup01.snapshot -C /backup/mount .
From there, the snapshot file can be moved to another storage location, and it can act as the backup of the pool.
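Since the archive is now the backup, it is worth checksumming it before copying it anywhere, so that corruption in transit is caught. The sketch below is self-contained for demonstration; the stand-in file, the rsync line, and backup-host are all hypothetical, with backup01.snapshot standing in for the real archive created above.

```shell
cd /tmp
echo "demo archive contents" > backup01.snapshot   # stand-in for the real archive
sha256sum backup01.snapshot > backup01.snapshot.sha256
# Copy both files to the backup location, e.g.:
#   rsync backup01.snapshot backup01.snapshot.sha256 backup-host:/srv/backups/
# Then verify on the receiving end:
sha256sum -c backup01.snapshot.sha256
```

If the verification prints `backup01.snapshot: OK`, the copy matches what was archived.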
Restoring a file from backup
Restoring individual files from the backup is fairly straightforward, just extract the snapshot file that was made earlier and grab any files that are required.
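For example, a single file can be pulled out of the archive without extracting everything else. The sketch below is self-contained; the /tmp paths and demo.conf are hypothetical stand-ins for a real archive and a real file of interest.

```shell
# Build a small stand-in archive the same way the backup was made
mkdir -p /tmp/lvmdemo/mount/etc
echo "hello" > /tmp/lvmdemo/mount/etc/demo.conf
tar -cf /tmp/lvmdemo/backup01.snapshot -C /tmp/lvmdemo/mount .
# List the archive to find the stored path of the file we want
tar -tf /tmp/lvmdemo/backup01.snapshot
# Extract only that file into a scratch directory
mkdir -p /tmp/lvmdemo/restore
tar -xf /tmp/lvmdemo/backup01.snapshot -C /tmp/lvmdemo/restore ./etc/demo.conf
```

Note that the member name passed to tar must match the path exactly as stored in the archive, which is why listing it first is useful.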
Restoring entire backup
Restoring the entire backed up snapshot is a bit of a combination of restoring individual files and restoring to a previous snapshot. Essentially, we create a new snapshot and load the new snapshot with the previous snapshot’s data, then recover it. The process is shown below:
# Run as root
# Creating location to mount the snapshot
mkdir -p /backup/mount
# Creating restoration snapshot, adjust size appropriately
lvcreate --snapshot --size 500M -n restore01 DRIVE/D01
# Mount snapshot
mount /dev/mapper/DRIVE-restore01 /backup/mount
# Extract tar archive into the mounted snapshot
tar -xf backup01.snapshot -C /backup/mount
# Unmount so the volume group can be deactivated
umount /backup/mount
# Restore snapshot
lvconvert --merge /dev/mapper/DRIVE-restore01
# Deactivate volume group
vgchange -an DRIVE
# Re-activate volume group
vgchange -ay DRIVE
Deleting snapshot
Deleting a snapshot is as easy as deleting a logical volume; just make sure to get the name correct and not delete the main LV. The names can be checked using lvs -a.
# Run as root
lvremove DRIVE/backup01