ZFS Brief Intro

By Christopher Stone
Published Nov 24, 2010
CC By-SA Licensed

It seems ZFS is being thrown around quite a bit these days as the next generation file system everyone has been looking for. I've looked at it a few times over the last few months and on the surface it seems to live up to the hype.

The other day a disk drive went bad in my server and I figured it was time for a little change. My home 'server' has 3 (remaining) disk drives dedicated to storage in it. As my server runs FreeBSD 8.1 I figured I'd give ZFS a try.

After a bit of Googling I found a Getting Started article on the OpenSolaris website. The article is short on details, but for those wanting a quick start, it's the right reference. The core concepts of ZFS are pretty easy. You have a bunch of disks, which can hold data. ZFS organizes these disks into pools. You can also use an intermediary layer to add redundancy, primarily through a mirror (RAID1) and raidz (akin to RAID5). There are other layers available, but they're omitted here for simplicity.
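To make the mirror idea concrete, creating a simple two-disk mirror pool looks like this (the pool and disk names are made up for illustration, not part of my setup):
# zpool create examplepool mirror /dev/ad1 /dev/ad2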

Once you have the storage pool you can add ZFS file systems to it. This is where things get a bit tricky. The ZFS file system is integral to the pool system, so you can't really separate them. This also means that the initial ZFS file system is automatically created when you create the storage pool.

Technical jargon: Storage pools are called zpools. The disks and redundancy layers discussed above are all called vdevs. A redundancy vdev (such as mirror) is backed by other vdevs, usually disks, but you could use other redundancy vdevs as well. There are also special pseudo-vdevs, but they're omitted here for simplicity.
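As a hypothetical example of vdevs backing other vdevs, this builds a pool out of two mirror vdevs (each backed by two disk vdevs), with ZFS striping data across the two mirrors:
# zpool create examplepool mirror /dev/ad1 /dev/ad2 mirror /dev/ad3 /dev/ad4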
Before creating anything, if you are committing to using ZFS at all, you should edit your rc.conf file and include zfs_enable="YES" so that any file systems you want mounted at boot time will be. If you forget this step your new file systems will not mount at boot and you will have to issue a zfs mount -a command to mount them.
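For example, you can append that line from the shell (or just edit /etc/rc.conf with your editor of choice):
# echo 'zfs_enable="YES"' >> /etc/rc.conf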

I have three disks that I want to organize into a single storage pool. I don't have a complicated server, so a single storage pool will be sufficient for me. I want to use all three disks in a single raidz vdev:
root@aislynn# zpool create tank raidz /dev/ad4 /dev/ad5 /dev/ad6
The syntax of the zpool create command is zpool create [pool_name] [vdev]+. When the vdev is a redundant device, it takes the backing vdevs as arguments to it: raidz [vdev]+.
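Plain disks with no redundancy layer are also valid vdevs; a hypothetical pool striped across two bare disks would look like this (not something I'd recommend for data you care about):
# zpool create examplepool /dev/ad1 /dev/ad2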
The zpool utility is used for creating, destroying, and maintaining zpools. As you can see above, it's called with the create argument, followed by the name of the new pool. I've used the pool name "tank" because it's a commonly used name; the name "pooln" (where n is a number) is also common. Remember that other people may have to maintain the system after you; it's best not to get too creative with pool names.

If you did not receive any errors from the create command, you should be able to view the status of your newly created pool with the following:
root@aislynn# zpool status tank
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            ad4     ONLINE       0     0     0
            ad5     ONLINE       0     0     0
            ad6     ONLINE       0     0     0

errors: No known data errors
Above it says that I have a raidz1 vdev in my pool. When I specified raidz in my create command, it was actually an alias for raidz1, which is a raidz with single parity (one 'disk' worth of space used for parity).
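If you want double parity you can ask for raidz2 explicitly. A hypothetical example (raidz2 is typically used with four or more disks):
# zpool create examplepool raidz2 /dev/ad1 /dev/ad2 /dev/ad3 /dev/ad4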
Once you have the pool configured it will automatically be mounted in a directory of the same name created in the root folder. In this case /tank will be created and the file system mounted there.
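You can confirm where a file system will mount with zfs get; for a freshly created pool it should report something along these lines:
# zfs get mountpoint tank
NAME  PROPERTY    VALUE    SOURCE
tank  mountpoint  /tank    default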

ZFS allows you to create multiple file systems within the same storage pool. It's generally best practice to create a new file system for each administrative unit (traditionally mount points) you would normally use. In my case I'll be replacing /storage (where the old array used to be mounted) and /usr/home with ZFS file systems stored on the zpool I just created. Creating them is as easy as invoking the zfs utility:
root@aislynn# zfs create tank/storage
root@aislynn# zfs create tank/home
As you might guess, these file systems will automatically be mounted in folders created just for them. Running a quick df should confirm this:
root@aislynn# df
Filesystem        1G-blocks Used Avail Capacity  Mounted on
tank                   200G   0G  200G       0%  /tank
tank/storage           200G   0G  200G       0%  /tank/storage
tank/home              200G   0G  200G       0%  /tank/home
Be careful when looking at this information. df is not aware that ZFS shares its storage pool, so it will report the total pool amounts for each file system (unless quotas and reservations are set; more on that later). Other mount points have been omitted from the output.
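If you want a view that understands the shared pool, the zfs list command reports each file system's own usage (USED) along with the pool space actually available to it (AVAIL) and its mount point:
# zfs list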
These mount points are inconvenient; it would be much easier if the file systems were mounted at the same points as before. Additionally, I have no use for the tank file system being mounted, as it simply serves as the ZFS root. You can easily change the mount point of a file system, and whether the file system gets mounted at all. Each file system has a set of properties (which are stored as metadata right in ZFS, no messy configuration files to edit). I made my changes as follows:
root@aislynn# zfs set mountpoint=none tank
root@aislynn# zfs set mountpoint=/storage tank/storage
root@aislynn# zfs set mountpoint=/usr/home tank/home
I made sure the mount points were ready for the file systems before doing this. Normally /usr/home has all my home directories in it; I just moved the existing folder to home.old, created a new folder for the mount point, and copied everything over.
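For /usr/home that shuffle was roughly along these lines (a sketch rather than an exact transcript; the zfs set command above goes between the mkdir and the copy, so the new file system is mounted before you fill it):
# mv /usr/home /usr/home.old
# mkdir /usr/home
# cp -Rp /usr/home.old/ /usr/home/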
As always, a quick verify to be sure everything is going as expected:
root@aislynn# df
Filesystem        1G-blocks Used Avail Capacity  Mounted on
tank/storage           200G   0G  200G       0%  /storage
tank/home              200G   0G  200G       0%  /usr/home
The last thing I wanted to do in getting my first attempt going was to set quotas and reservations for some of my file systems. As with so many other things in ZFS, this was easy:
root@aislynn# zfs set quota=20G tank/home
root@aislynn# zfs set reservation=10G tank/home
What this is telling ZFS is that my home file system is limited to 20GB maximum, and that 10GB minimum is reserved for its own use. These quotas and reservations show up in df as you might expect:
root@aislynn# df
Filesystem        1G-blocks Used Avail Capacity  Mounted on
tank/storage           190G   0G  190G       0%  /storage
tank/home               20G   0G   20G       0%  /usr/home
Notice that the home file system has a size of just 20GB now. At the same time the 10GB reservation on the home file system has caused the space available to the other file systems (just storage in my case) to decrease.
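If you ever want to double-check these settings without going through df, zfs get will report them directly (and setting a property back to none removes it):
# zfs get quota,reservation tank/home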

I hope this basic introduction has been helpful.
More to come in my FreeBSD » ZFS category.