Tyler's Site

Abstract

ZFS is an extremely powerful tool, and one that I do not know enough about. To remedy that, I thought I might do a series of blog posts on ZFS, both to help me learn about it and to make it easy (for me at least) to reference in the future. For those who are unaware of what ZFS is:

ZFS (previously Zettabyte File System) is a file system with volume management capabilities. It began as part of the Sun Microsystems Solaris operating system in 2001. Large parts of Solaris, including ZFS, were published under an open source license as OpenSolaris for around 5 years from 2005 before being placed under a closed source license when Oracle Corporation acquired Sun in 2009-2010. Between 2005 and 2010, the open source version of ZFS was ported to Linux, Mac OS X (continued as MacZFS) and FreeBSD. In 2010, the illumos project forked a recent version of OpenSolaris, including ZFS, to continue its development as an open source project. In 2013, OpenZFS was founded to coordinate the development of open source ZFS. OpenZFS maintains and manages the core ZFS code, while organizations using ZFS maintain the specific code and validation processes required for ZFS to integrate within their systems. OpenZFS is widely used in Unix-like systems.

Learning Environment

There are a variety of ways to set up a lab to learn about ZFS and ZFS management. For these labs, I am going to be using FreeBSD, where ZFS is a tier 1 filesystem, which makes the installation process quicker and easier. The ZFS concepts should largely be the same on other operating systems, but some of the surrounding commands (like listing the disks detected by the OS) may differ.
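As an aside, for anyone without spare disks to experiment on, most of the zpool commands in this post can also be practiced against file-backed vdevs. This is just a throwaway-lab sketch (file vdevs are not meant for production use), and the file names and sizes are arbitrary:

# create two 1 GB backing files to stand in for disks
truncate -s 1G /tmp/disk0 /tmp/disk1
# build a disposable mirrored pool out of them
zpool create testpool mirror /tmp/disk0 /tmp/disk1
zpool status testpool
# tear everything down when finished
zpool destroy testpool
rm /tmp/disk0 /tmp/disk1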

Installing ZFS on root with FreeBSD

For anyone that has installed FreeBSD before, ZFS on root is rather trivial. During the partitioning phase of the installation, there are options for Auto (UFS), Manual, Shell, and Auto (ZFS).

The option that we are going with for this post is the Auto (ZFS) option. We are then given an option to change the name of the ZFS pool (zroot is the default). After selecting the ZFS pool name come the important decisions for the ZFS configuration. Many of these options should be self-explanatory, but the one that we are going to focus on right now is the ‘Pool Type/Disks’ option. Six options are given in the ‘Select Virtual Device type’ menu: stripe, mirror, raid10, raidz1, raidz2, and raidz3.

For this post, we are just going to be dealing with stripes and mirrors as those are generally the easiest to grasp; later posts will cover the various types of RAID schemes with ZFS.
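Before touching the installer menus, the difference between the two is easy to see with plain zpool commands. The following is only an illustrative sketch with hypothetical disks da1 and da2 (you would create one pool or the other, not both):

# stripe: data is spread across the disks; the pool has the capacity of both,
# but losing either disk loses the whole pool
zpool create example da1 da2
# mirror: every block is written to both disks; the pool has the capacity of one,
# and it survives the loss of either disk
zpool create example mirror da1 da2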

Once the virtual device type is selected, the installer will ask which disks will be a part of the pool. Select the desired disks by pressing the spacebar while they are highlighted, changing the empty [ ] next to the disk name to [X]. For a machine that only has one drive, using a ‘stripe’ and selecting the one available drive for the pool will suffice. Then continue through the FreeBSD installation as normal.

Replacing a failed disk in a ZFS mirror

The first lab that I did was to replace a “failed” disk in a ZFS mirror; this lab assumes that the drive is completely dead and unable to be recovered or used in any capacity.

First, find and detach the disk from the storage pool (all commands are run as root!):

# Look at the status of the ZFS pools on the system
zpool status
# Remove the dead drive (ada1 in this case) from the pool (zroot in this case)
zpool detach zroot ada1

Once the drive is removed from the storage pool, shut down the machine and replace the disk (shutting down is not necessary if the storage is hot-swappable on the machine). After the dead drive is replaced, boot the machine back up (again, all commands are run as root):

# List the disks that are detected in FreeBSD, similar to the `lsblk` command in Linux
geom disk list
# Show the partition scheme of the good disk in the pool for reference
# though, the partition scheme does not necessarily have to stay the same
gpart show
# Create a partition scheme on the drive
# the drive will be `ada1` in this case, and the partition scheme will be `gpt` in this case
gpart create -s gpt ada1
# This example is just copying the same partition scheme of the disk that is still in the pool
# however, that does not have to be the case, and the disk can be partitioned differently
gpart add -t freebsd-boot -s 512k ada1
# partition for swap space
gpart add -t freebsd-swap -s 2G ada1
# replace `disk1` with a disk name that might be appropriate in your environment/situation
# a good place to reference for current names would be under /dev/gpt in this particular case
gpart add -t freebsd-zfs -l disk1 ada1
# Attach the new storage to the pool, mirroring it with the existing device
# make sure to put the existing device before the new one; `zpool status` shows
# the existing device's exact name (ada0p3 in this case). Note that the new
# freebsd-zfs partition is attached rather than the whole disk, since the disk
# now carries boot and swap partitions too; the GPT label (gpt/disk1) would also work
zpool attach zroot ada0p3 ada1p3
# This step is to add the required boot code to the disk so it can boot FreeBSD properly
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada1
# verify the work done above and watch the resilver progress
zpool status

It is also possible to attach the new disk to the pool before removing the failing/failed disk, as ZFS allows for up to 2^64 (18,446,744,073,709,551,616) devices in a zpool. There may be practical reasons for doing it one way over the other, but both ways are entirely possible and valid.
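ZFS even has a single command that performs this attach-then-detach sequence automatically: zpool replace. A minimal sketch, assuming the failing device was ada1p3 and the new disk has already been partitioned as above with its ZFS partition at ada2p3 (both hypothetical device names):

# resilver onto the new device, then detach the old one automatically
zpool replace zroot ada1p3 ada2p3
# watch the progress of the resilver
zpool status zroot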

Migrating to a bigger drive

Another common task with storage drives is upgrading to a larger drive. ZFS also makes this easy: attach the larger drive to the smaller one in a mirror, then, once the data has been resilvered over, detach the smaller drive. This process is very similar to the one above; the main difference is making sure that the pool's autoexpand property is set to on. The example below assumes only one drive, but modifying the steps for multiple drives should be quite easy to figure out.

# Make sure to enable `autoexpand` for the pool.
# This will allow ZFS to automatically utilize the extra drive space once the smaller drive is detached from the pool
zpool set autoexpand=on zroot # the example pool is `zroot` here, change appropriately
geom disk list
gpart show # again, only referring to the current partitioning scheme
# ada1 will again be the new disk and ada0 will be the original disk in this example
gpart create -s gpt ada1
gpart add -t freebsd-boot -s 512k ada1
gpart add -t freebsd-swap -s 2G ada1
gpart add -t freebsd-zfs -l disk1 ada1
# attach the new ZFS partition to the zpool, remembering to put the original device first and the new one second
zpool attach zroot ada0p3 ada1p3
# after the resilvering is done, don't forget to add the boot code, otherwise the system will not boot
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada1
# once the resilver has finished, detach the original, smaller device so the pool can grow
zpool detach zroot ada0p3
# verify the work with
zpool list

To do this with a ZFS mirror, just replace one disk as if it had failed. After the new drive has resilvered and replaced the first disk, repeat the process with the second.
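A hedged sketch of that two-pass upgrade, assuming an existing mirror of ada0p3 and ada1p3 being moved onto larger disks whose ZFS partitions are ada2p3 and ada3p3 (all hypothetical device names, with each new disk partitioned and given boot code as shown earlier):

# allow the pool to grow once both smaller devices are gone
zpool set autoexpand=on zroot
# swap out the first half of the mirror and wait for its resilver to finish
zpool replace zroot ada0p3 ada2p3
zpool status zroot
# then swap out the second half; the pool expands after this resilver completes
zpool replace zroot ada1p3 ada3p3
zpool status zroot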

Other Resources for ZFS

I am still a bit of a novice with ZFS and its extremely powerful tools and ways of handling storage issues. Because of that, I am using a lot of resources to teach myself the basics, so that I can feel comfortable enough to play with it and come up with something novel. In the meantime, I thought I would share some sources of knowledge that might be helpful to another ZFS novice.