This is the final part in the “So You Want to Learn ZFS” series. In Part One we discussed the reasons why you should be investigating ZFS if you are not already. Part Two highlighted some of the usability differences between Solaris and a more traditional Linux system. In this final part we will discuss what to look for in a test ZFS system, along with some hints on configuring the basic system. I am assuming that you are going to use Solaris 11 Express.
Sun publishes a very complete Hardware Compatibility List online. Keep in mind that you should stick to it if you plan on putting the machine into production. If the machine is only for testing, you can simply install and see if it works. Some key things to look at:
- Memory – ZFS will use memory that is not otherwise spoken for as a cache called the ARC. Additionally, if you use de-duplication, the de-dupe tables must fit in memory or on SSD to achieve optimal performance; if ZFS has to read those tables from spinning disk, reads hit the disk very hard. I have tested ZFS with 4GB of RAM with no de-dupe, and where I tested de-dupe I had 12GB of RAM available.
- Solid State Disks – If you are short on memory, you can get a large amount of solid state storage for the price of a small amount of RAM. For example, a quick search shows RAM kits (DDR3 1600 – PC3 12800) at $12.50–$13.75 per GB, while a solid state drive (2.5″ MLC SATA2) runs $1.95–$2.87 per GB. Having SSDs on hand would also have given me the ability to test the ZIL.
- Spinning Disks – Since this is a test system, you can “simulate” multiple disks by using image files living on a single disk to build complex disk pools. However, having at least 4 drives (1 for rpool and 3 for data) will give you the most bang for your buck. Keep in mind that the disks do not need to match specifications the way they would in a traditional RAID; they can be pulled out of other systems, or even out of a bin. With physical disks you will want to simulate failures both by placing disks in an offline state and by physically removing a drive (I’d also recommend removing a drive and formatting it on a different system – this gives you the opportunity to “replace” the failed drive and see how that process works).
- Networking – I’d recommend at least 1Gbps networking for your tests so that you can get reasonable throughput over NFS, CIFS, and iSCSI. If you plan on using aggregation groups in production, it is critical to test them before purchasing “production-level” hardware; make sure your network card and switch support LACP if you intend to use aggregation groups.
- Specialized Hardware – If you plan on using any special hardware, such as Fibre Channel HBAs to serve out storage via Fibre Channel, make sure you have that hardware available for testing.
- Ultimately the biggest thing here is to understand the differences between ZFS versions and what features each gives you (as we discussed in Part Two).
- If you are intimidated by Solaris (as I was), it is also worth considering a solution like NexentaStor. Its web management interface abstracts things nicely so that you can focus on understanding the concepts before you need to understand the mechanics.
- Mirrored Root Pool – In test this is not critical, but if you have the disks you should practice the procedure, since it is most definitely critical in production. It is especially important because this functionality is not built directly into the installer. Constantin Gonzalez has documented the procedure very well here. I have tested his instructions and they are top notch. I have not documented the process myself because his write-up is so good that I could not add anything of value.
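To give a rough idea of the shape of that procedure, the core steps look something like the sketch below. The device names are hypothetical (check yours with format), and Constantin’s article covers the partitioning details I am glossing over here; the sketch prints the commands rather than executing them, since they require root on a live Solaris system with a second disk.

```shell
# Sketch only: print the commands instead of executing them.
run() { echo "+ $*"; }

# Attach a second disk to rpool, turning it into a two-way mirror
# (c8t0d0s0 / c8t1d0s0 are hypothetical device names)
run zpool attach -f rpool c8t0d0s0 c8t1d0s0
# Install the boot loader on the new disk so either half can boot
run installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c8t1d0s0
# Wait for the resilver to finish before trusting the mirror
run zpool status rpool
```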
- Networking Configuration – Depending on your environment you will need to configure networking using one or both of my articles. Solaris 11: Network Configuration Basic covers disabling NWAM, assigning DHCP or static IP addresses, and configuring routing and name resolution. Solaris 11: Network Configuration Advanced covers tagging VLANs, creating aggregation groups, and enabling jumbo frames.
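To give a flavor of what the basic article walks through, the steps look roughly like this. The interface name and addresses are made up for illustration (see the article for details), and the sketch prints the commands rather than running them, since they need root on a Solaris box.

```shell
# Sketch only: print the commands instead of executing them.
run() { echo "+ $*"; }

# Switch from automatic (NWAM) to manual network configuration
run svcadm disable svc:/network/physical:nwam
run svcadm enable svc:/network/physical:default
# Assign a static address to the (hypothetical) e1000g0 interface
run ipadm create-addr -T static -a 192.168.1.50/24 e1000g0/v4
# Add a persistent default route
run route -p add default 192.168.1.1
```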
- Disable GDM – I don’t like having the GUI running unless I am using the system as a laptop.
# svcadm disable gdm
That ought to take care of that.
- Disable Keyboard Beep – The first time I installed Solaris 11 Express it was the middle of the night, and frankly I scared the piss out of myself. If you press an invalid key (Delete at the beginning of a console line, for example) the system emits an obscenely loud beep that is completely unaffected by the hardware mute on my laptop or by the software volume controls. In GDM, open the volume control mixer > Preferences and ensure that “Keyboard Beep” is checked. This adds an options tab containing the volume control for the keyboard beep. The absolute lowest volume was still loud enough to wake the baby in the next room, so I muted it. There is a catch, though: it needs to be redone after every reboot. I am not aware of a better way to do it; if you have one, please comment or contact me using the form.
- Create ZFS Pools – To make a ZFS pool using physical disks, simply use a command like the one below:
# zpool create tank raidz c7t0d0 c7t1d0 c7t2d0
If you don’t have extra physical disks, you will need to create some disk images first.
# mkfile 2g /root/vdisk01.img && mkfile 2g /root/vdisk02.img && mkfile 2g /root/vdisk03.img
Then simply reference the new disk images.
# zpool create tank raidz /root/vdisk01.img /root/vdisk02.img /root/vdisk03.img
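Once the pool exists, you can exercise the failure scenarios mentioned under Spinning Disks above. A sketch of that exercise follows; it prints the commands rather than executing them (they need a live Solaris system), and vdisk04.img is a hypothetical replacement file.

```shell
# Sketch only: print the commands instead of executing them.
run() { echo "+ $*"; }

# Check pool health and layout
run zpool status tank
# Simulate a failure by offlining one of the image-file "disks"
run zpool offline tank /root/vdisk01.img
# Create a fresh image file and replace the "failed" disk with it
run mkfile 2g /root/vdisk04.img
run zpool replace tank /root/vdisk01.img /root/vdisk04.img
# Watch the resilver complete, then confirm the pool is healthy again
run zpool status tank
```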
This should give you a good head start on your test ZFS deployment. I will also be documenting specific features of ZFS in more detail in future articles.