So snapshots are basically a point-in-time view of everything that exists on a file system at the moment the snapshot is taken. They allow you to build data protection schemes with user-directed restore capabilities, and they give you a good way to roll back complex changes without the time and hassle of restoring large amounts of data. Before we go into how to use snapshots, it is important to mention what snapshots are NOT.
Snapshots are not:
- Backups – They live on the same physical media as the original data, so the loss of that media means complete data loss if snapshots were your only backup mechanism.
- Magic – They do take up space, albeit far less than a scheme that runs a script to create tar.gz files containing full copies of the file system.
- Writeable – Snapshots are read-only. If you need a writeable file system you can convert a snapshot to a clone, which is writeable. Clones are outside the scope of this article; if you are interested in clones, please look for another article on the subject here.
View Existing Snapshots
# zfs list -t snapshot
NAME                        USED  AVAIL  REFER  MOUNTPOINT
rpool/ROOT/solaris@install  32.8M     -  3.34G  -
The most basic snapshot can be created simply by running the command below.
# zfs snapshot tank/testdata/finance@today
Now obviously there are “administrative” problems with the above command (most notably the fact that we named it “today”; after all, it won’t be today’s snap if we wait until tomorrow). It would be far more correct to name it by the actual date; however, I can’t be bothered to remember the date, and frankly I don’t have to. The variation below uses the output of the date +%m%d%Y command to generate the name of the snapshot, so it will be named something like “03152011” instead of “today”.
# zfs snapshot tank/testdata/finance@`date +%m%d%Y`
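One wrinkle with %m%d%Y names is that they do not sort chronologically as plain strings (12312010 sorts after 03152011 even though it is earlier in time). Below is a minimal sketch of a naming helper using an ISO-style timestamp instead; `snapname` is a hypothetical function of mine, not a ZFS command:

```shell
# Hypothetical helper: build a snapshot name that sorts chronologically.
# YYYY-MM-DD-HHMM sorts correctly as a plain string, unlike MMDDYYYY.
snapname() {
    # $1 = dataset; prints the full name to hand to `zfs snapshot`
    printf '%s@%s\n' "$1" "$(date +%Y-%m-%d-%H%M)"
}

# On a real system: zfs snapshot "$(snapname tank/testdata/finance)"
snapname tank/testdata/finance
```

The same idea works inline, of course: zfs snapshot tank/testdata/finance@`date +%Y-%m-%d-%H%M`.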
Remember to look at your snapshots to ensure they are being named as you expect.
# zfs list -t snapshot
NAME                             USED  AVAIL  REFER  MOUNTPOINT
tank/testdata/finance@03312011      0      -    94K  -
tank/testdata/finance@today         0      -    94K  -
rpool/ROOT/solaris@install      32.8M      -  3.34G  -
Rename a Snapshot
Now let’s assume that above you created a snapshot without having it reference a specific date/time combination. Maybe it was something like tank/software@preupgrade, and you have decided that the name really doesn’t fit any more. Perhaps you were doing multiple upgrades and people were getting confused between two of them (say the 4.0 and the 4.1 upgrade of some hypothetical software/system), so now you have the pre-4.0 snapshot labeled tank/software@preupgrade and the pre-4.1 snapshot labeled tank/software@newupgrade. It would make logical sense to rename them to something like tank/software@pre4.0upgrade and tank/software@pre4.1upgrade.
# zfs rename tank/software@preupgrade tank/software@pre4.0upgrade
# zfs rename tank/software@newupgrade tank/software@pre4.1upgrade
Well that gives you the ability to rename your snapshots, so you can easily handle the inevitable name changes.
Delete a Snapshot
OK, so following along with our hypothetical software upgrade: we have now decided that the 4.0 upgrade project was a success, and as such we no longer need the ability to roll back to tank/software@pre4.0upgrade, so it is time to delete it.
# zfs destroy tank/software@pre4.0upgrade
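Once dated snapshots start piling up, destroying them one at a time gets old. Below is a sketch of a pruning helper; it reads the snapshot list from stdin (oldest first, as `zfs list -H -t snapshot -o name -s creation -r <dataset>` would produce it) so the selection logic can be tried without a live pool. `prune_candidates` is a hypothetical name of mine, and you would pipe its output to `xargs -n1 zfs destroy` only after eyeballing it:

```shell
# Hypothetical helper: given snapshot names on stdin (oldest first),
# print every name except the newest N, i.e. the destroy candidates.
prune_candidates() {
    keep="$1"
    awk -v keep="$keep" '{ lines[NR] = $0 }
        END { for (i = 1; i <= NR - keep; i++) print lines[i] }'
}

# Example: keep the 2 newest of 4 snapshots
printf '%s\n' tank/f@01 tank/f@02 tank/f@03 tank/f@04 | prune_candidates 2
# prints tank/f@01 and tank/f@02
```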
Using Snapshot Holds
Now, while you carried out that destruction of tank/software@pre4.0upgrade, someone from management was watching over your shoulder and saw how “easy” it is to delete a snapshot. So now you have to prevent an accidental deletion of the tank/software@pre4.1upgrade snapshot by placing a hold on it. Adding the -r option applies the hold recursively to the same-named snapshot on child file systems as well.
# zfs hold keep tank/software@pre4.1upgrade
Now we can view our holds.
# zfs holds tank/software@pre4.1upgrade
NAME                         TAG   TIMESTAMP
tank/software@pre4.1upgrade  keep  Thu Mar 15 09:02:22 2011
Of course we need a way to destroy held snapshots; can’t have the system going all “War Games” on us and deciding for us what can and cannot be deleted. The command below removes the hold, after which you can delete the snapshot manually. Alternatively, you can use the -d option with zfs destroy on a held snapshot to schedule its deletion; it will then be destroyed automatically as soon as its last hold is released.
# zfs release keep tank/software@pre4.1upgrade
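If you script snapshot deletion, it is worth checking for holds before calling zfs destroy, since destroying a held snapshot fails. Below is a sketch of such a check, with the hold tags passed in as arguments (on a live system `zfs holds -H <snapshot>` would supply them); `can_destroy` and the snapshot names are hypothetical:

```shell
# Hypothetical helper: refuse a destroy while holds remain.
# $1 = snapshot name; remaining args = hold tags currently on it.
can_destroy() {
    snap="$1"; shift
    if [ "$#" -gt 0 ]; then
        echo "refusing to destroy $snap: held by tag(s): $*"
        return 1
    fi
    echo "$snap has no holds"
}

can_destroy tank/software@keepme keep   # refuses; a "keep" hold exists
can_destroy tank/software@junk          # no holds, safe to destroy
```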
Rollback to a Snapshot
Well, something always happens: those top-notch contractors who were brought in to manage the 4.1 upgrade have completely botched the job, and you need to get your working production 4.0 platform back up and running. Rolling back to the snapshot taken before the 4.1 upgrade does exactly that.
# zfs rollback tank/software@pre4.1upgrade
It is important to note that if you are going to roll back “past” other snapshots, those snapshots must be destroyed first, or you can use the -r option to force the rollback “through” the intermediate snapshots (destroying them) until you get to the one you requested.
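Before reaching for -r, it is worth seeing exactly which intermediate snapshots the rollback would destroy. A sketch of that, assuming a creation-ordered name list on stdin (as `zfs list -H -t snapshot -o name -s creation` would give for one file system); `doomed_by_rollback` and the snapshot names are hypothetical:

```shell
# Hypothetical helper: print every snapshot newer than the rollback target;
# these are the ones `zfs rollback -r <target>` would destroy.
doomed_by_rollback() {
    awk -v target="$1" 'found { print } $0 == target { found = 1 }'
}

printf '%s\n' tank/sw@mon tank/sw@tue tank/sw@wed tank/sw@thu |
    doomed_by_rollback tank/sw@tue
# prints tank/sw@wed and tank/sw@thu
```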
If you are using GNOME, you can enable Time Slider for GUI management of automatic snapshots.
First let’s examine the services we are looking at.
# svcs -a | grep time-slider
disabled   1:23:00 svc:/application/time-slider:default
disabled   1:23:00 svc:/application/time-slider/plugin:rsync
disabled   1:23:00 svc:/application/time-slider/plugin:zfs-send
# svcs -a | grep auto-snapshot
disabled   1:23:00 svc:/system/filesystem/zfs/auto-snapshot:daily
disabled   1:23:00 svc:/system/filesystem/zfs/auto-snapshot:hourly
disabled   1:23:00 svc:/system/filesystem/zfs/auto-snapshot:monthly
disabled   1:23:00 svc:/system/filesystem/zfs/auto-snapshot:weekly
disabled   4:09:14 svc:/system/filesystem/zfs/auto-snapshot:frequent
Here we enable the time-slider default service, which allows us to enable auto-snapshots. You will also see that I have enabled frequent snapshots. The frequent schedule takes a snapshot every 15 minutes and keeps 4 snapshots. This schedule is ideal for testing, since you don’t want to wait a week for a snapshot just to verify your configuration; in production, however, it could be problematic depending on your dataset. The available schedules are listed below.
- Frequent -> 15 minutes retain 4 snapshots
- Hourly -> every hour retain 24 snapshots
- Daily -> every day retain 31 snapshots
- Weekly -> every week retain 7 snapshots
- Monthly -> every month retain 12 snapshots
# svcadm enable svc:/application/time-slider:default
# svcadm enable svc:/system/filesystem/zfs/auto-snapshot:frequent
Now we need to take a look at our file systems and see what we’d like to snapshot.
# zfs get -r com.sun:auto-snapshot rpool
NAME                        PROPERTY               VALUE  SOURCE
rpool                       com.sun:auto-snapshot  -      -
rpool/ROOT                  com.sun:auto-snapshot  -      -
rpool/ROOT/solaris          com.sun:auto-snapshot  -      -
rpool/ROOT/solaris@install  com.sun:auto-snapshot  -      -
rpool/dump                  com.sun:auto-snapshot  false  local
rpool/export                com.sun:auto-snapshot  -      -
rpool/export/home           com.sun:auto-snapshot  -      -
rpool/export/home/matthew   com.sun:auto-snapshot  -      -
rpool/swap                  com.sun:auto-snapshot  false  local
Now, assuming we want auto-snapshots on every file system below rpool/export, we set the property on rpool/export itself. Note that zfs set needs no recursion option here: child file systems inherit the property automatically unless they override it, as the SOURCE column in the output confirms.
# zfs set com.sun:auto-snapshot=true rpool/export
# zfs get -r com.sun:auto-snapshot rpool
NAME                        PROPERTY               VALUE  SOURCE
rpool                       com.sun:auto-snapshot  -      -
rpool/ROOT                  com.sun:auto-snapshot  -      -
rpool/ROOT/solaris          com.sun:auto-snapshot  -      -
rpool/ROOT/solaris@install  com.sun:auto-snapshot  -      -
rpool/dump                  com.sun:auto-snapshot  false  local
rpool/export                com.sun:auto-snapshot  true   local
rpool/export/home           com.sun:auto-snapshot  true   inherited from rpool/export
rpool/export/home/matthew   com.sun:auto-snapshot  true   inherited from rpool/export
rpool/swap                  com.sun:auto-snapshot  false  local
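The inheritance rule itself is simple: a dataset uses its own locally set value if it has one, otherwise the nearest ancestor’s. Below is a toy resolver that demonstrates the rule, with local settings passed as dataset=value arguments; this is purely illustrative, not a ZFS tool:

```shell
# Toy resolver for property inheritance: walk from the dataset up through
# its ancestors and report the first explicitly-set value found.
resolve() {
    ds="$1"; shift    # dataset to resolve; rest: dataset=value local settings
    while :; do
        for kv in "$@"; do
            [ "${kv%%=*}" = "$ds" ] && { echo "${kv#*=}"; return; }
        done
        case "$ds" in
            */*) ds="${ds%/*}" ;;     # strip last component, move to parent
            *)   echo "-"; return ;;  # hit the pool root with nothing set
        esac
    done
}

resolve rpool/export/home/matthew rpool/export=true rpool/swap=false
# prints "true" (inherited from rpool/export)
```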
Restart Time-Slider to create your first snapshots.
# svcadm restart svc:/application/time-slider:default
View your newly created snapshots.
# zfs list -t snapshot
NAME                                                              USED  AVAIL  REFER  MOUNTPOINT
rpool/ROOT/solaris@install                                       32.9M      -  3.34G  -
rpool/export@zfs-auto-snap_monthly-2011-04-05-07h58                  0      -  8.00G  -
rpool/export/home@zfs-auto-snap_monthly-2011-04-05-07h58             0      -    32K  -
rpool/export/home/matthew@zfs-auto-snap_monthly-2011-04-05-07h58     0      -  2.07G  -
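Notice the naming pattern: zfs-auto-snap_<schedule>-YYYY-MM-DD-HHhMM. That makes the schedule and timestamp easy to pull back out when scripting; the awk one-liner below is my own sketch, not part of the Time Slider tooling:

```shell
# Split an auto-snapshot name into its dataset, schedule, and date parts.
echo 'rpool/export@zfs-auto-snap_monthly-2011-04-05-07h58' |
    awk -F'@zfs-auto-snap_' '{
        split($2, p, "-")                 # p[1]=schedule, p[2..4]=date
        print "dataset:", $1
        print "schedule:", p[1]
        print "taken:", p[2] "-" p[3] "-" p[4]
    }'
```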
Keep in mind that in this example I explicitly enabled only the frequent schedule; if you want a different schedule, just enable the corresponding auto-snapshot service instance.
Monitoring Disk Space
If you plan on having any sort of storage, your first question should always be: how can I monitor the utilization? It almost never is, but for the moment we can live in happy-ville (or, since I am writing this article while in Germany, happy-burg), where everything just works, users manage their own stale data, and nothing ever breaks. In the real world none of that happens, so you have to manage your data proactively.

Frankly, snapshots make managing your data (read: not filling up your disks) harder, which should be expected of any automated process that generates duplicate data. Keep in mind that ZFS handles this very well: as long as you are not generating a ton of changes, snapshots take little space and work fantastically. On the flip side of that coin, some data is just not a good fit for snapshots, the most obvious being block devices. The long and short of it is that when you first start taking snapshots you have no idea how much space they will consume over the long term. Once you have a feel for it you can plan your disk usage appropriately, though you will still need to monitor it throughout the life of your system.
Now, to see how much space a file system is using, including all of its children:
# zfs list -o space
NAME                       AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
rpool                       267G  25.9G       20K     94K              0      25.9G
rpool/export                267G  10.1G         0   8.00G              0      2.07G
rpool/export/home           267G  2.07G         0     32K              0      2.07G
rpool/export/home/matthew   267G  2.07G         0   2.07G              0          0
Above is a trimmed listing from my test machine. Notice the points of interest in this output: the USED, USEDSNAP, and USEDCHILD columns. Let’s define them.
- USED – The total allocation for the file system plus all of its snapshots, plus all child file systems and their snapshots. Basically, this is what is actually being used. In the example above, rpool is using 25.9GB total out of 267GB available, 10.1GB of which comes from rpool/export; following the rabbit hole down, rpool/export/home/matthew contributes 2.07GB of that 10.1GB.
- USEDSNAP – The total allocation of this file system’s snapshots; it does not include the data in the file system itself or in child file systems. This tells you whether you are low on space because of overzealous snapshots or simply a lack of data management by the users. In the example above, the only snapshot in this tree is the install snapshot the system creates, and the rpool USEDSNAP value is just 20K, so if we were out of space, snapshots would clearly not be the problem here.
- USEDCHILD – The total storage used by child file systems. As you can see above, rpool/export and rpool/export/home both show 2.07GB; rpool/export/home doesn’t actually add any data of its own, it just passes up the data that exists below it in rpool/export/home/matthew (where the 2.07GB actually lives).
Snapshots can make your life far easier, but they can be hard to manage if you are not careful and don’t take the time to learn what you are doing. Spend plenty of time in test to really understand the functionality that snapshots introduce.