Part 6 of a series that describes the key features of ZFS in Oracle Solaris 11.1 and provides step-by-step procedures explaining how to use them. This article focuses on how to use ZFS snapshots to create a read-only copy of a file system and then transfer the snapshot stream from one system to another system.
I've been working with backup software for the last five years and, as everybody knows, creating backups provides a fundamental security baseline for any environment. However, contrary to what many people assume, backup concepts can be difficult to learn and use.
Note: Creating copies of your data using ZFS snapshots is simple and easy. If you need per-file restoration, backup media verification, and media management capability, consider using an enterprise backup solution.
Luckily, Oracle Solaris 11 (and Oracle Solaris 10 releases) provides two simple but very useful commands to help us keep copies of our data: zfs send and zfs recv. These commands don't work alone: everything they do is based on ZFS snapshots, which are serialized into streams that can later be received to re-create a file system.
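Before we get to the two-host scenario, it's worth seeing the basic pattern in its simplest form. The following is a minimal sketch using hypothetical names (a pool called tank holding a file system fs with an existing snapshot named today): zfs send serializes the snapshot to standard output, and zfs recv rebuilds it as a new file system.
root@solaris11-1:~# zfs send tank/fs@today | zfs recv tank/fs_copy
Everything in the rest of this article is a variation on this pipeline, usually with ssh inserted in the middle so the stream crosses the network.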
For anyone who isn't familiar with the concept, a snapshot is a "picture" of a file system at a given moment. ZFS snapshots are read-only, which is what makes them a trustworthy point-in-time copy of our file system.
Furthermore, snapshots initially take only a small amount of space; they start to consume space only after files and directories are deleted or changed in the original file system. This is due to the copy-on-write (COW) design: at creation time a snapshot keeps just a set of pointers to the original files and directories (for example, f1, f2, f3, and f4), and as the live file system changes (f1', f2', f3', and f4'), the old blocks must be preserved so the snapshot still shows exactly what existed when it was created (f1, f2, f3, and f4).
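This behavior is easy to observe with the space accounting that zfs list reports. Here is a minimal sketch, assuming a hypothetical file system pool/fs that already has a snapshot named snap and contains a file called somefile:
root@solaris11-1:~# zfs list -o name,used,referenced pool/fs@snap
root@solaris11-1:~# rm /pool/fs/somefile
root@solaris11-1:~# zfs list -o name,used,referenced pool/fs@snap
Right after creation, the snapshot's USED value is close to zero; after the rm, the deleted file's blocks are referenced only by the snapshot, so USED grows by roughly the file's size.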
It's essential to highlight that a snapshot stream always carries the whole file system: it isn't possible to exclude an individual directory from the stream, or to pick out a single file or directory at restore time.
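Note that this limitation applies to the serialized stream only. While a snapshot still exists on the source system, individual files can be copied out of it by hand, because every ZFS file system exposes its snapshots under a hidden .zfs/snapshot directory at its mount point. A quick sketch, assuming the snap1 snapshot that we create below:
root@solaris11-1:~# ls /snap_pool/fs_1/.zfs/snapshot/snap1
root@solaris11-1:~# cp /snap_pool/fs_1/.zfs/snapshot/snap1/hosts /tmp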
Creating a snapshot is simple. First, we create a new pool, then we create a file system in this pool, and then we populate this file system with some files:
root@solaris11-1:~# zpool create snap_pool c8t3d0
root@solaris11-1:~# zfs create snap_pool/fs_1
root@solaris11-1:~# cp /etc/[a-m]* /snap_pool/fs_1
root@solaris11-1:~# zfs list -r snap_pool
NAME             USED  AVAIL  REFER  MOUNTPOINT
snap_pool       84.3M  78.2G    32K  /snap_pool
snap_pool/fs_1  84.2M  78.2G  84.2M  /snap_pool/fs_1
Now, it's time to create the snapshot using the zfs snapshot command:
root@solaris11-1:~# zfs snapshot snap_pool/fs_1@snap1
root@solaris11-1:~# zfs list -r snap_pool
NAME             USED  AVAIL  REFER  MOUNTPOINT
snap_pool       80.8M  78.2G    32K  /snap_pool
snap_pool/fs_1  80.7M  78.2G  80.7M  /snap_pool/fs_1
Hmm...unfortunately, snapshots aren't listed by default, but we can enable this feature through the listsnapshots pool property:
root@solaris11-1:~# zpool set listsnapshots=on snap_pool
root@solaris11-1:~# zfs list -r snap_pool
NAME                   USED  AVAIL  REFER  MOUNTPOINT
snap_pool             80.8M  78.2G    32K  /snap_pool
snap_pool/fs_1        80.7M  78.2G  80.7M  /snap_pool/fs_1
snap_pool/fs_1@snap1      0      -  80.7M  -
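If you'd rather not change a pool-wide property, the same information is available on demand, because zfs list accepts a -t option that selects which dataset types to display:
root@solaris11-1:~# zfs list -t snapshot -r snap_pool
This prints only the snapshot entries, such as the snap_pool/fs_1@snap1 line shown above.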
To prove that snapshots are cool, some files are removed (those that begin with "i"), and then the snapshot is rolled back to show that all the files are restored:
root@solaris11-1:~# cd /snap_pool/fs_1/
root@solaris11-1:/snap_pool/fs_1# ls -al [f-j]*
-rw-r--r-- 1 root root 6967 Dec 9 21:56 format.dat
-rw-r--r-- 1 root root 209 Dec 9 21:56 ftpusers
-rw-r--r-- 1 root root 10834 Dec 9 21:56 gnome-vfs-mime-magic
-rw-r--r-- 1 root root 420 Dec 9 21:56 group
-rw-r--r-- 1 root root 393 Dec 9 21:56 hba.conf
-rw-r--r-- 1 root root 27 Dec 9 21:56 hostid
-rw-r--r-- 1 root root 357 Dec 9 21:56 hosts
-rw-r--r-- 1 root root 394 Dec 9 22:06 ima.conf
-rw-r--r-- 1 root root 812 Dec 9 22:06 inetd.conf
-rw-r--r-- 1 root root 955 Dec 9 22:06 inittab
-rw-r--r-- 1 root root 39 Dec 9 22:06 ioctl.syscon
-rw-r--r-- 1 root root 596 Dec 9 22:06 iu.ap
root@solaris11-1:/snap_pool/fs_1# rm i*
root@solaris11-1:/snap_pool/fs_1# ls -al [f-j]*
-rw-r--r-- 1 root root 6967 Dec 9 21:56 format.dat
-rw-r--r-- 1 root root 209 Dec 9 21:56 ftpusers
-rw-r--r-- 1 root root 10834 Dec 9 21:56 gnome-vfs-mime-magic
-rw-r--r-- 1 root root 420 Dec 9 21:56 group
-rw-r--r-- 1 root root 393 Dec 9 21:56 hba.conf
-rw-r--r-- 1 root root 27 Dec 9 21:56 hostid
-rw-r--r-- 1 root root 357 Dec 9 21:56 hosts
root@solaris11-1:/snap_pool/fs_1# cd
root@solaris11-1:~# zfs rollback snap_pool/fs_1@snap1
root@solaris11-1:~# cd /snap_pool/fs_1/
root@solaris11-1:/snap_pool/fs_1# ls -al [f-j]*
-rw-r--r-- 1 root root 6967 Dec 9 21:56 format.dat
-rw-r--r-- 1 root root 209 Dec 9 21:56 ftpusers
-rw-r--r-- 1 root root 10834 Dec 9 21:56 gnome-vfs-mime-magic
-rw-r--r-- 1 root root 420 Dec 9 21:56 group
-rw-r--r-- 1 root root 393 Dec 9 21:56 hba.conf
-rw-r--r-- 1 root root 27 Dec 9 21:56 hostid
-rw-r--r-- 1 root root 357 Dec 9 21:56 hosts
-rw-r--r-- 1 root root 394 Dec 9 22:06 ima.conf
-rw-r--r-- 1 root root 812 Dec 9 22:06 inetd.conf
-rw-r--r-- 1 root root 955 Dec 9 22:06 inittab
-rw-r--r-- 1 root root 39 Dec 9 22:06 ioctl.syscon
-rw-r--r-- 1 root root 596 Dec 9 22:06 iu.ap
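One caveat about zfs rollback: by itself, it can roll a file system back only to its most recent snapshot. To roll back to an older snapshot, you must add the -r option, which destroys every snapshot taken after the target. A hedged sketch, using a hypothetical second snapshot named snap2:
root@solaris11-1:~# zfs snapshot snap_pool/fs_1@snap2
root@solaris11-1:~# zfs rollback -r snap_pool/fs_1@snap1
After this sequence, snap2 no longer exists, so use -r with care.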
Before sending the snapshot stream to our second Oracle Solaris 11 host (solaris11-2), a pool named backup_pool must be created on the second host. Then the ZFS send stream process can be started:
root@solaris11-1:~# ssh solaris11-2
Password:
Last login: Mon Dec 9 18:42:02 2013
Oracle Corporation SunOS 5.11 11.1 September 2012
root@solaris11-2:~# zpool create backup_pool c8t3d0
root@solaris11-2:~# zpool list backup_pool
NAME          SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
backup_pool  3.97G    85K  3.97G   0%  1.00x  ONLINE  -
root@solaris11-1:~# zfs send snap_pool/fs_1@snap1 | ssh solaris11-2 zfs recv -F backup_pool/fs_1_backup
Password:
root@solaris11-2:~# zpool list
NAME               SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
backup_pool       3.97G  80.8M  3.89G   1%  1.00x  ONLINE  -
repo_pool         15.9G  7.64G  8.24G  48%  1.00x  ONLINE  -
rpool             79.5G  28.4G  51.1G  35%  1.00x  ONLINE  -
softtoken_pool_2  3.97G   193K  3.97G   0%  1.00x  ONLINE  -
softtooken_pool   3.97G   194K  3.97G   0%  1.00x  ONLINE  -
solaris11-2-pool  3.97G   540M  3.44G  13%  1.00x  ONLINE  -
root@solaris11-2:~# zfs list -r backup_pool
NAME                      USED  AVAIL  REFER  MOUNTPOINT
backup_pool              80.8M  3.83G    32K  /backup_pool
backup_pool/fs_1_backup  80.7M  3.83G  80.7M  /backup_pool/fs_1_backup
root@solaris11-2:~# cd /backup_pool/fs_1_backup
root@solaris11-2:/backup_pool/fs_1_backup# ls -l
total 165286
-rw-r--r-- 1 root root 1436 Dec 9 21:56 aliases
-rw-r--r-- 1 root root 182 Dec 9 21:56 auto_home
-rw-r--r-- 1 root root 220 Dec 9 21:56 auto_master
-rw------- 1 root root 84034034 Dec 9 21:53 core
-rw-r--r-- 1 root root 1931 Dec 9 21:56 dacf.conf
-r--r--r-- 1 root root 516 Dec 9 21:56 datemsk
...
Once again, ZFS is great. A backup of the snap_pool/fs_1 file system was made from its snapshot (snap_pool/fs_1@snap1) and sent to the second host (solaris11-2) into /backup_pool. The same files that exist on the first host now also exist on the second Oracle Solaris 11 host.
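For ongoing backups, resending the full stream every time would be wasteful. The zfs send -i option sends only the blocks that changed between two snapshots, so after the initial full stream above, a later run could look like this (the snap2 snapshot is hypothetical):
root@solaris11-1:~# zfs snapshot snap_pool/fs_1@snap2
root@solaris11-1:~# zfs send -i snap1 snap_pool/fs_1@snap2 | ssh solaris11-2 zfs recv backup_pool/fs_1_backup
The receiving file system must already contain snap1 and must not have been modified since it was received; otherwise, zfs recv needs the -F option to roll it back first.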
Testing the ZFS receive functionality is done in almost the same way. To illustrate, on the first host (solaris11-1), some files are removed and the existing snapshot of snap_pool/fs_1 is destroyed:
root@solaris11-1:~# rm /snap_pool/fs_1/[d-j]*
root@solaris11-1:~# zfs list -r snap_pool
NAME                   USED  AVAIL  REFER  MOUNTPOINT
snap_pool               162M  78.1G    34K  /snap_pool
snap_pool@snap1        80.7M      -  80.7M  -
snap_pool/fs_1         80.7M  78.1G  80.6M  /snap_pool/fs_1
snap_pool/fs_1@snap1    114K      -  80.7M  -
root@solaris11-1:~# zfs destroy snap_pool/fs_1@snap1
Then, on the second host (solaris11-2), we execute the receive procedure:
root@solaris11-2:~# zpool set listsnapshots=on backup_pool
root@solaris11-2:~# zfs list -r backup_pool
NAME                            USED  AVAIL  REFER  MOUNTPOINT
backup_pool                    80.8M  3.83G    32K  /backup_pool
backup_pool/fs_1_backup        80.7M  3.83G  80.7M  /backup_pool/fs_1_backup
backup_pool/fs_1_backup@snap1      0      -  80.7M  -
root@solaris11-2:~# zfs send -Rv backup_pool/fs_1_backup@snap1 | ssh solaris11-1 zfs recv -F snap_pool/fs_1
sending from @ to backup_pool/fs_1_backup@snap1
Password:
root@solaris11-2:~#
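Also keep in mind that a stream doesn't have to be piped straight into zfs recv. Because zfs send writes to standard output, the stream can be redirected to a file and replayed later, which is convenient for archival media (the path and target name below are hypothetical):
root@solaris11-2:~# zfs send backup_pool/fs_1_backup@snap1 > /var/tmp/fs_1_backup.zstream
root@solaris11-2:~# zfs recv backup_pool/fs_1_restore < /var/tmp/fs_1_backup.zstream
A stream stored in a file isn't verified until it's received, so for long-term retention this approach is best paired with the enterprise backup tooling mentioned at the beginning of this article.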
Going back to the first machine, let's list the files in the snap_pool/fs_1 file system:
root@solaris11-1:~# ls -al /snap_pool/fs_1/
total 165288
drwxr-xr-x 2 root root 40 Dec 9 22:06 .
drwxr-xr-x 3 root root 3 Dec 10 01:08 ..
-rw-r--r-- 1 root root 1436 Dec 9 21:56 aliases
-rw-r--r-- 1 root root 182 Dec 9 21:56 auto_home
-rw-r--r-- 1 root root 220 Dec 9 21:56 auto_master
-rw------- 1 root root 84034034 Dec 9 21:53 core
-rw-r--r-- 1 root root 1931 Dec 9 21:56 dacf.conf
-r--r--r-- 1 root root 516 Dec 9 21:56 datemsk
-r-------- 1 root root 5900 Dec 9 21:53 delegated_zone.xml
...
Fantastic. The files are back in the snap_pool/fs_1 file system.
Alexandre Borges is an Oracle ACE who worked as an employee and contracted instructor at Sun Microsystems from 2001 to 2010, teaching Oracle Solaris, Oracle Solaris Cluster, Oracle Solaris security, Java EE, Sun hardware, and MySQL courses. Nowadays, he teaches classes for Symantec, Oracle partners, Hitachi, and EC-Council, including several very specialized classes about information security. In addition, he is a regular writer and columnist for Linux Magazine Brazil.
Revision 1.0, 05/01/2014