The most critical property of any storage implementation is flexibility: you need to be able to adapt your solution to a changing environment. The nice thing about a ZFS-based solution is that all of the building blocks are available. If you want to use it as a NAS device, you can serve CIFS or NFS. If you want to provision Fibre Channel, that is available too. Even some of the more sought-after SAN features are included: replication, deduplication, thin provisioning, and more. In today's article we are going to discuss how to set up iSCSI targets. I am personally not a fan of iSCSI (I have been bitten by it too many times), but you really can't argue with the flexibility of being able to quickly provision storage on existing or readily available hardware. The best thing about this process is that it is very similar to provisioning Fibre Channel targets, so if you are familiar with that process (available here) you will be able to quickly adapt to the iSCSI variation.
I did my testing on OpenIndiana 151a, which had up to date packages as of Feb 23, 2012. The process should be largely similar if not the same on Solaris 11 Express and Solaris 11 GA.
Install Prerequisite Packages
# pkg install network/iscsi/target
Start SCSI Target Mode Framework Service
# svcadm enable svc:/system/stmf:default
Start iSCSI Target Service
# svcadm enable svc:/network/iscsi/target:default
Display the Status of the SCSI Target Service
The ALUA status is what allows a LUN to be served via FC and iSCSI at the same time; we do not require or want it here.
# stmfadm list-state
Operational Status: online
Config Status     : initialized
ALUA Status       : disabled
ALUA Node         : 0
Create a File System (Volume) to be Provisioned
Before we can present a LUN, we need to create a ZFS file system as a volume so that it can be used as a block device.
# zfs create -V 10G rpool/testvol
Notice the lack of a mount point; this is because the file system was provisioned as a volume.
# zfs list rpool/testvol
NAME            USED  AVAIL  REFER  MOUNTPOINT
rpool/testvol  10.3G   436G    16K  -
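The intro mentioned thin provisioning; ZFS supports it at volume-creation time via a sparse volume, which reserves no space up front. A minimal sketch, using a hypothetical volume name "rpool/thinvol":

```shell
# Create a sparse (thin-provisioned) 10G volume; -s skips the space reservation.
zfs create -s -V 10G rpool/thinvol

# A sparse volume shows refreservation=none, unlike a normal volume.
zfs get volsize,refreservation rpool/thinvol
```

Be aware that a sparse volume lets you overcommit the pool, so writes can fail if the pool fills up.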
Create a LUN for the Volume
We have created a volume; now we need to make STMF aware of it so that it can hand it out as a block device.
# sbdadm create-lu /dev/zvol/rdsk/rpool/testvol
Created the following LU:

              GUID                    DATA SIZE           SOURCE
--------------------------------  -------------------  ----------------
600144f0b7ea490000004f471b750001      10737418240      /dev/zvol/rdsk/rpool/testvol
We will need the name and path of the LUN, which are available from the create-lu output (as GUID and SOURCE); stmfadm, however, gives us a more readable format.
# stmfadm list-lu -v
LU Name: 600144F0B7EA490000004F471B750001
    Operational Status: Online
    Provider Name     : sbd
    Alias             : /dev/zvol/rdsk/rpool/testvol
    View Entry Count  : 0
    Data File         : /dev/zvol/rdsk/rpool/testvol
    Meta File         : not set
    Size              : 10737418240
    Block Size        : 512
    Management URL    : not set
    Vendor ID         : OI
    Product ID        : COMSTAR
    Serial Num        : not set
    Write Protect     : Disabled
    Writeback Cache   : Enabled
    Access State      : Active
Create an iSCSI Qualified Name (iqn) Target
This simply creates the target iqn for the machine we will be serving out iSCSI volumes from.
# itadm create-target
Target iqn.2010-09.org.openindiana:02:1a7a530f-8508-4736-f269-d6363a8cb5e6 successfully created
Put the New Target in Offline Mode
In order to add the target to the target group we will be creating (to control storage views), we must first take the target offline.
# stmfadm offline-target iqn.2010-09.org.openindiana:02:1a7a530f-8508-4736-f269-d6363a8cb5e6
Create a Target Group
This is the group that controls the target side of the storage connection; in other words, it lives on your array, and I like to name it accordingly. If your array is named “array001”, name your target group the same.
# stmfadm create-tg array001
Add the Target to the Target Group
# stmfadm add-tg-member -g array001 iqn.2010-09.org.openindiana:02:1a7a530f-8508-4736-f269-d6363a8cb5e6
Review the Target Group Configuration
# stmfadm list-tg -v
Target Group: array001
    Member: iqn.2010-09.org.openindiana:02:1a7a530f-8508-4736-f269-d6363a8cb5e6
Create a Host Group
This is the group that identifies where the traffic is coming from, so for organization I like to name it after the server on the initiator side of the connection. If your server is “server001”, name your host group to match; this keeps things tidy and makes your views much more legible.
# stmfadm create-hg server001
Add the Host to the Host Group
Here you will need to get the IQN of the client side of the storage connection and insert it into this command.
# stmfadm add-hg-member -g server001 iqn.1986-03.com.sun:01:c0ca4ce904ff.4f45b957
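If you are unsure where to find the client-side IQN, it can be read off the initiator itself. A quick sketch, assuming either a Solaris-family or an open-iscsi (Linux) client:

```shell
# On a Solaris-family initiator, the node name line contains the IQN.
iscsiadm list initiator-node

# On a Linux client running open-iscsi, it lives in a config file instead.
cat /etc/iscsi/initiatorname.iscsi
```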
Create a View
A view is what ties everything together: we take the target group, the host group, and the LUN and put them in a view. If a connection does not match the first two criteria, it does not get to see the LUN. This matters when you have multiple servers accessing your array (which you will): you do not want multiple servers writing to the same LUNs at the same time without certain precautions being taken first, so we want to block that behavior.
# stmfadm add-view -t array001 -h server001 600144F0B7EA490000004F471B750001
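It is worth confirming the view took effect before moving on. A quick check, using the GUID from our example LUN:

```shell
# List the view entries for the LUN; you should see the host group,
# target group, and assigned LUN number in the output.
stmfadm list-view -l 600144F0B7EA490000004F471B750001
```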
Create a Target Portal Group
The target portal group is what the initiator actually connects to in order to find the storage. Here we need to use the storage IP on array001.
# itadm create-tpg array001portal 10.0.0.21:3260
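Before handing the details to the client, you can sanity-check the target side. A brief sketch of the verification commands:

```shell
# Show the target, its state, and which portal groups it is bound to.
itadm list-target -v

# Show the portal group and the address/port it listens on.
itadm list-tpg -v
```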
Enable Static Mode Discovery
# devfsadm -i iscsi
At this point, once you have configured your iSCSI initiator on the client side, you should be able to see your iSCSI block device.
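For completeness, here is a sketch of the client-side configuration on a Solaris-family initiator, matching the static discovery mode used above. The target IQN and portal IP are the ones from our example; substitute your own:

```shell
# Point the initiator at the target statically: <target-iqn>,<portal-ip>:<port>
iscsiadm add static-config iqn.2010-09.org.openindiana:02:1a7a530f-8508-4736-f269-d6363a8cb5e6,10.0.0.21:3260

# Turn on static discovery so the above entry is used.
iscsiadm modify discovery --static enable

# Build the device nodes for the newly discovered disk, then look for it.
devfsadm -i iscsi
format
```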
That wraps it up. Put a neat little bow on it and send it on its way.
A few things to remember with iSCSI and your ZFS box. You will need a mirrored ZIL device (mirrored because you do not want to lose data) to counteract the performance penalty of synchronous writes. Most importantly, iSCSI is a protocol suited to flexibility, not performance. If you need performance you should be considering Fibre Channel; remember that a good chunk of the cost is in the fabric, and if you already have the fabric for another SAN, you can easily connect another array into it. Remember, it is a Storage Area NETWORK.
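Adding that mirrored ZIL (a dedicated log device) is a one-liner. A sketch, assuming a hypothetical pool named "tank" and two hypothetical disks:

```shell
# Attach a mirrored pair of devices as the intent log for the pool.
# Device names are placeholders; use your own disk identifiers.
zpool add tank log mirror c4t0d0 c4t1d0

# Confirm the "logs" section now shows the mirror.
zpool status tank
```

Ideally the log devices are low-latency (SSD or better), since their whole purpose is to absorb synchronous writes quickly.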