
Abstract

The Logical Volume Manager (LVM) is a framework that provides logical volume management.

[Surprised Pikachu meme image]

The actual description of logical volume management, though, is on another Wikipedia page:

In computer storage, logical volume management or LVM provides a method of allocating space on mass-storage devices that is more flexible than conventional partitioning schemes to store volumes. In particular, a volume manager can concatenate, stripe together or otherwise combine partitions (or block devices in general) into larger virtual partitions that administrators can re-size or move, potentially without interrupting system use.

In simpler terms, logical volume management allows software to create a virtualized partitioning scheme made from one or many partitions, disks, or other block devices. Doing this allows the volume manager to combine them in many different and interesting ways, such as RAID-1 (mirroring), RAID-0 (striping), and other RAID-like schemes. Linux's LVM also allows adding, removing, or resizing partitions on a live system, though the file system on top will have to be resized as well. In general, LVM offers many of the same features that next-generation copy-on-write (CoW) file systems such as ZFS, btrfs, and bcachefs offer. The main downsides to LVM compared to something like ZFS or btrfs are that LVM management can be more complex than the equivalent configuration on those CoW file systems, and that LVM is generally not going to perform as well. This is because LVM is a separate layer that sits underneath the file system, rather than the file system providing the volume management directly. With those points in mind, let's set up an environment to start working with LVM.
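To make the layering concrete, lsblk is a quick way to see where LVM sits in the stack on a machine that already uses it (this is just an illustration of the idea, not part of the lab below):

# Shows the block-device tree: physical disk -> partition -> LVM logical
# volume -> mount point. The logical volume is the layer the file system
# actually lives on, and it can span or mirror several underlying devices.
lsblk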

Lab Setup

For this lab, I am using a KVM virtual machine running Void Linux, though the distro shouldn't matter too much. The VM's root drive is 30GB, and virtual disks will be added or removed depending on what is needed for the particular section. I am not going to go over the specifics of the Void Linux installation; if you want to follow along using Void Linux and need help, you can check out their installation tutorials:

It is also important (at least on Void Linux) to make sure the proper kernel modules are loaded; the ones I have needed for things to work properly are:

This page in the Void Linux handbook explains how to automatically load kernel modules on Void Linux.
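For completeness, here is a minimal sketch of what that looks like in practice, assuming dm_mod and raid1 are among the modules you need (the exact module names are an assumption; substitute your own list from above):

# Run as root

# Load the modules right now (names are an assumption; use your own list)
modprobe dm_mod
modprobe raid1
# Have them loaded on every boot via a modules-load.d drop-in, as described
# in the Void handbook page linked above
printf 'dm_mod\nraid1\n' > /etc/modules-load.d/lvm.conf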

Setting up LVM in a RAID-like configuration

LVM gives the ability to set up software RAID on multiple partitions or disks. For this example, I am going to be using disks /dev/sda and /dev/sdb as the disks on the test machine. I am going to set up a mirror, but I will also explain how to set up a stripe or other kinds of RAID.

The first step of the process is to partition the disk(s). While this isn't a hard requirement, it does make managing the pool easier later on: if we were to add the raw disks to the pool without partitioning them, other systems may not pick up on the LVM pool and could overwrite the data on the disk(s). So, first we will create one partition on each disk that takes up the entirety of the space available on that disk. There are many different tools that can do this; I find the simplest one to be cfdisk, but use whatever you are comfortable with.

# Run the following commands as root

# Opens /dev/sda in cfdisk allowing for partitioning
cfdisk /dev/sda
# Opens /dev/sdb in cfdisk allowing for partitioning
cfdisk /dev/sdb

Then create a Physical Volume (PV); this step allows for creating a Volume Group (VG) using the devices, as well as writing some metadata to the group that labels it as an LVM group. Creating a PV and a VG is very straightforward and only takes two commands. The first one will initialize the PVs on partitions /dev/sda1 and /dev/sdb1, then the second will create a volume group named STORAGE with the same partitions. That process looks as follows:

# Run as root
pvcreate /dev/sda1 /dev/sdb1
vgcreate STORAGE /dev/sda1 /dev/sdb1
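Before creating any logical volumes, it doesn't hurt to confirm that the PVs and the VG actually exist; pvs and vgs report that (a quick sanity check, not part of the original walkthrough):

# Run as root

# List the physical volumes; both partitions should show up assigned to 'STORAGE'
pvs
# List the volume groups; 'STORAGE' should report 2 PVs and roughly 80G of space
vgs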

Now that the volume group is initialized, we can move on to creating a usable file system that can be mounted and store data. The command that does this is lvcreate. It is a very powerful command that covers a lot of what an admin might want to do with their storage pool; this blog post will only cover very basic usage to get set up quickly, but many more options are available in the man page.

# Run as root

# The following command will create a volume spanning the two disks in the VG
# 'STORAGE' (a linear concatenation by default; add '-i 2' to stripe across
# both disks instead), and uses the '-n' flag to name it 'storage_volume'.
# The '-l' flag is neat as it specifies what percentage of the disk should be
# used rather than how many gigabytes. This example will use 100% of the
# available disk space (should be around 80G).
lvcreate -l 100%FREE -n storage_volume STORAGE

# This command will instead create a mirror (RAID-1) of the two disks in the
# same VG and with the same name as the command above (run one or the other,
# not both). Because this is a RAID-1, the available disk space will be about
# 40G rather than 80G, since everything is written to both disks.
lvcreate -l 100%FREE --type raid1 -n storage_volume STORAGE

# Then create a file system in the storage volume so that we can mount it on our system
mkfs.ext4 /dev/mapper/STORAGE-storage_volume
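With the file system created, it can be mounted like any other block device. A minimal sketch, assuming /mnt/storage as the mount point (the mount point name is my own choice, not from the walkthrough):

# Run as root

# Create a mount point and mount the new logical volume
mkdir -p /mnt/storage
mount /dev/mapper/STORAGE-storage_volume /mnt/storage
# To mount it automatically at boot, an fstab entry along these lines would work
echo '/dev/mapper/STORAGE-storage_volume /mnt/storage ext4 defaults 0 2' >> /etc/fstab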

There are a few things worth noting at this point. The first is the difference between the -l and the -L flags. The uppercase -L lets administrators give the logical volume an explicit size (for example in gigabytes), while the lowercase -l sets the size as a percentage of the available space. Take the following commands:

# This command will set the available storage capacity to 35 gigabytes.
# Attempting to allocate more space than is available will cause lvcreate
# to fail with error 5.
lvcreate -L 35G --type raid1 -n storage_volume STORAGE

# This command will allocate 90% of the available space in the storage pool,
# which is about 36G on our 40G disks.
lvcreate -l 90%FREE --type raid1 -n storage_volume STORAGE

Another point worth noting is the various types of RAID that are available using lvcreate. According to the LVMRAID man page, LVM supports the following RAID types when using the --type flag:
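The full list lives in the lvmraid man page. As one hedged example beyond the mirror used above, an explicit two-disk stripe (RAID-0) could be created like this, where the stripe count of 2 is an assumption matching the two PVs in the lab VG:

# Run as root

# A striped (RAID-0) volume across the two PVs; '-i 2' sets the number of
# stripes to match the two disks. No redundancy, roughly 80G usable.
lvcreate --type raid0 -i 2 -l 100%FREE -n storage_volume STORAGE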

Finally, we can check the status of the logical volumes using the lvs command. Simply running lvs gives a small amount of information about each logical volume, but to get more details on the entire group and its devices, run the following:

# Run as root
lvs -a -o +devices
LV                        VG      Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices
  storage_volume            STORAGE rwi-a-r--- <36.00g                                    15.17            storage_volume_rimage_0(0),storage_volume_rimage_1(0)
  [storage_volume_rimage_0] STORAGE iwi-aor--- <36.00g                                                     /dev/sda1(1)
  [storage_volume_rimage_1] STORAGE Iwi-aor--- <36.00g                                                     /dev/sdb1(1)
  [storage_volume_rmeta_0]  STORAGE ewi-aor---   4.00m                                                     /dev/sda1(0)
  [storage_volume_rmeta_1]  STORAGE ewi-aor---   4.00m                                                     /dev/sdb1(0)

This shows all logical volumes, including LVM's internal sub-volumes (-a), and the devices backing each one (-o +devices).

Replacing Disks and Resizing Partitions

It’s great that the logical volumes are set up now, but what happens if we need to replace a disk, either because it failed or because it is being upgraded to a larger one? Thankfully, LVM allows for that. In this scenario, let’s imagine that one of our 40G disks died and is being replaced with a 50G disk. My current lvs output:

# Run as root
lvs -a -o +devices
WARNING: Couldn't find device with uuid HLdFpC-sWym-TnKZ-TiyQ-E9A4-e5P7-3iNpO0.
  WARNING: VG STORAGE is missing PV HLdFpC-sWym-TnKZ-TiyQ-E9A4-e5P7-3iNpO0 (last written to /dev/sdb1).
  LV                        VG      Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices
  storage_volume            STORAGE rwi-aor-p- <38.00g                                    100.00           storage_volume_rimage_0(0),storage_volume_rimage_1(0)
  [storage_volume_rimage_0] STORAGE iwi-aor--- <38.00g                                                     /dev/sda1(1)
  [storage_volume_rimage_1] STORAGE Iwi-aor-p- <38.00g                                                     [unknown](1)
  [storage_volume_rmeta_0]  STORAGE ewi-aor---   4.00m                                                     /dev/sda1(0)
  [storage_volume_rmeta_1]  STORAGE ewi-aor-p-   4.00m                                                     [unknown](0)

As we can see, one of the devices in the mirror is missing. Thankfully, I have put in a 50G drive that can replace it, so let’s get that process started. First, partition the drive using your favorite tool (cfdisk for me) to create one large partition on the disk. Then run:

# Run as root
vgreduce --removemissing STORAGE --force

This will remove the missing device from the volume group, but it leaves the group in a state in which it still needs to be repaired. We will do that after adding the new device:

# Run as root

# First create a physical volume for the new device
pvcreate /dev/sdc1
# Then add the new device to the previous volume group
vgextend STORAGE /dev/sdc1
# Finally repair the volume using the lvconvert command
lvconvert --repair /dev/mapper/STORAGE-storage_volume
# Allow a few minutes for LVM to copy all of the data over to the new disk
# before replacing the old one. The status of the copy can be seen by running:
lvs -a
 LV                        VG      Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  storage_volume            STORAGE rwi-aor--- <36.00g                                    4.83
  [storage_volume_rimage_0] STORAGE iwi-aor--- <36.00g
  [storage_volume_rimage_1] STORAGE Iwi-aor--- <36.00g
  [storage_volume_rmeta_0]  STORAGE ewi-aor---   4.00m
  [storage_volume_rmeta_1]  STORAGE ewi-aor---   4.00m

# The Cpy%Sync column shows how much of the data has been synced between the
# two disks; once it reaches 100, the old disk is safe to remove

Now the larger disk should be part of the mirror, just as the previous 40G disk was. Let’s repeat the process for the other 40G disk so that the volume group sits on two 50G disks rather than two 40G ones.
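For reference, the repeat pass is the same sequence of commands as above. This sketch assumes the remaining 40G disk has already been pulled and the second new 50G disk shows up as /dev/sdd; the device names will almost certainly differ on your machine:

# Run as root

# Partition the second new disk (one large partition), e.g. with cfdisk
cfdisk /dev/sdd
# Drop the now-missing 40G disk from the volume group
vgreduce --removemissing STORAGE --force
# Initialize the new partition and add it to the volume group
pvcreate /dev/sdd1
vgextend STORAGE /dev/sdd1
# Rebuild the mirror onto the new disk, then wait for Cpy%Sync to reach 100
lvconvert --repair /dev/mapper/STORAGE-storage_volume
lvs -a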

After the 40G disks have been swapped out for 50G ones, let’s resize the logical volume to use the extra space. This is a fairly straightforward step; the lvextend utility even includes a flag (-r) to resize the file system along with the volume. To do this, run:

# Run as root

# Note the '+' between the -l and the 90; also note that lvextend takes the
# logical volume, not just the volume group name
lvextend -l +90%FREE -r STORAGE/storage_volume

NOTE: In the above command, 90%FREE does not allocate 90% of the total disk size; rather, it adds 90% of the space currently free in the volume group to the volume.
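Since -r resizes the file system in the same step, the extra space should be visible immediately if the volume is mounted (the mount point here is the same assumed one from the earlier sketch):

# Run as root
df -h /mnt/storage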

At the end of it, my lvs output looks as follows:

# Run as root
lvs -a -o +devices
 LV                        VG      Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices
  storage_volume            STORAGE rwi-aor--- 48.59g                                    100.00           storage_volume_rimage_0(0),storage_volume_rimage_1(0)
  [storage_volume_rimage_0] STORAGE iwi-aor--- 48.59g                                                     /dev/sdb1(1)
  [storage_volume_rimage_1] STORAGE iwi-aor--- 48.59g                                                     /dev/sdc1(1)
  [storage_volume_rmeta_0]  STORAGE ewi-aor---  4.00m                                                     /dev/sdb1(0)
  [storage_volume_rmeta_1]  STORAGE ewi-aor---  4.00m                                                     /dev/sdc1(0)

Resources