EqualLogic: Remove Volume Snapshot Reserve

So if you have purchased an EqualLogic array and put it into production, one of the first things you will notice is that your xTB did not go as far as you originally planned. This is because EqualLogic, by default, puts a 100% snapshot reserve on every volume you create. That means if you allocate a 1TB LUN, the system provisions 1TB of LUN space plus an additional 1TB of snapshot reserve for that LUN. This is neither good nor bad, but it is not always necessary. If you plan on making little use of snapshots, you may want to decrease the reserve or, as in my case, remove it entirely (we do not plan on using it at all).
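
If you want to sanity-check that arithmetic yourself, here is a rough sketch in plain Python (not an EqualLogic command; it assumes a fully reserved, non-thin volume like the ones shown below):

def pool_consumption_gb(volume_size_gb, snap_reserve_pct):
    # Volume reserve plus its snapshot reserve, in GB.
    return volume_size_gb * (1 + snap_reserve_pct / 100.0)

print(pool_consumption_gb(1024, 100))  # 1TB LUN at the 100% default -> 2048.0 GB of pool space
print(pool_consumption_gb(1024, 0))    # the same LUN with no snap-reserve -> 1024.0 GB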

Examine the Pool and the Existing Free Space and Capacity

Notice that we get two different results for FreeSpace.  This is because the “pool show” command deducts the snap-reserves of all allocated volumes and therefore shows the space actually usable for provisioning a new volume.  “member show -poolinfo”, on the other hand, counts snapshot reserve as free until it is actually written to, so it reports roughly our 363GB plus the 2TB of reserves, minus the reserve space actually consumed by snapshots (about 472GB here, since 551.86GB of Volume00’s 1TB reserve is still available); see the short calculation after the output below.

SANGroup00> pool show
Name                 Default Members Volumes Capacity   FreeSpace
-------------------- ------- ------- ------- ---------- ----------
default              true    1       2       4.35TB     363.22GB

SANGroup00> member show -poolinfo
Name       Status  Version    Disks Capacity   FreeSpace  Connections Pool
---------- ------- ---------- ----- ---------- ---------- ----------- -------
EQL00      online  V5.1.1 (R1 12    4.35TB     1.89TB     24          default
85010)     
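
Here is a minimal back-of-the-envelope version of that reconciliation in plain Python, using the figures from the two outputs above plus the Snap-Reserve-Avail value from the Volume00 output in the next step (approximate; 1TB is treated as 1024GB):

pool_free         = 363.22                     # FreeSpace from "pool show"
snap_reserves     = 2 * 1024                   # two 1TB volumes, each with a 100% snap-reserve
reserve_available = 551.86                     # Snap-Reserve-Avail on Volume00 (next step)
reserve_in_use    = 1024 - reserve_available   # ~472GB actually consumed by snapshots

member_free = pool_free + snap_reserves - reserve_in_use
print(round(member_free / 1024, 2), "TB")      # ~1.89 TB, matching "member show -poolinfo"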

Show Existing Configuration of the First Volume

In the output below we are looking at the configuration of Volume00.  The fields to watch are Snap-Reserve, Snap-Reserve-Avail, and Snapshots.  In my example the volume has the default 100% reserve, and 54% of that reserve is still available.  There is also a snapshot on the volume (the Snapshots field), and it is consuming part of the reserve, which is why that portion is no longer “reserved”.

SANGroup00> volume select Volume00
SANGroup00(volume_Volume00)> show
_____________________________ Volume Information ______________________________
Name: Volume00                 Size: 1TB
VolReserve: 1TB                        VolReserveInUse: 492.7GB
ReplReserveInUse: 0MB                  iSCSI Alias: Volume00
iSCSI Name:                            ActualMembers: 1
iqn.2001-05.com.equallogic:8-cb2b76- Snap-Warn: 10%
a51cd9059-188000000244f04f-volume00  Snap-Depletion: delete-oldest
Description:
Snap-Reserve: 100%                     Snap-Reserve-Avail: 54% (551.86GB)
Permission: read-write                 DesiredStatus: online
Status: online                         Connections: 8
Snapshots: 1                           Bind:
Type: not-replicated                   ReplicationReserveSpace: 0MB
Replicas: 0                            ReplicationPartner:
Pool: default                          Transmitted-Data: 1011.65GB
Received-Data: 3.15TB                  Pref-Raid-Policy: none
Pref-Raid-Policy-Status: none          Thin-Provision: disabled
Thin-Min-Reserve: 0% (0MB)             Thin-Growth-Warn: 0% (0MB)
Thin-Growth-Max: 0% (0MB)              ReplicationTxData: 0MB
MultiHostAccess: enabled               iSNS-Discovery: disabled
Replica-Volume-Reserve: 0MB            Thin-Clone: N
Template: N                            NAS File System: N
Administrator:                         Thin-Warn-Mode: offline
_______________________________________________________________________________

Remove the Snapshot from the First Volume

The snapshot we have is no longer needed, so we will get rid of it now.

SANGroup00(volume_Volume00)> snapshot show
Name                        Permission Status      Schedule Connections
--------------------------- ---------- ----------- -------- -----------
Volume00-2012-01-04 read-write offline              0
-20:20:43.3.1

SANGroup00(volume_Volume00)> snapshot delete Volume00-2012-01-04-20:20:43.3.1
Do you want to delete the snapshot? (y/n) [n]y

Snapshot deletion succeeded.

Modify the Existing Snap-Reserve of the First Volume

I am removing the snap-reserve in its entirety.  If you simply want to decrease the space available for snapshots, use an appropriate percentage instead.

SANGroup00(volume_Volume00)> snap-reserve 0%
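
To estimate how much pool space a given change frees up, here is a quick sketch in plain Python (again, just arithmetic; it assumes a non-thin volume where the full reserve counts against the pool):

def reclaimed_gb(volume_size_gb, old_pct, new_pct):
    # Pool space released by lowering the snap-reserve on a non-thin volume.
    return volume_size_gb * (old_pct - new_pct) / 100.0

print(reclaimed_gb(1024, 100, 0))   # dropping Volume00 from 100% to 0% -> 1024.0 GB
print(reclaimed_gb(1024, 100, 20))  # keeping a modest 20% reserve instead -> 819.2 GB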

Review Configuration of First Volume

Make sure everything looks right before moving on.

SANGroup00(volume_Volume00)> show
_____________________________ Volume Information ______________________________
Name: Volume00                 Size: 1TB
VolReserve: 1TB                        VolReserveInUse: 492.77GB
ReplReserveInUse: 0MB                  iSCSI Alias: Volume00
iSCSI Name:                            ActualMembers: 1
iqn.2001-05.com.equallogic:8-cb2b76- Snap-Warn: 10%
a51cd9059-188000000244f04f-volume00  Snap-Depletion: delete-oldest
Description:
Snap-Reserve: 0%                       Snap-Reserve-Avail: 0% (0MB)
Permission: read-write                 DesiredStatus: online
Status: online                         Connections: 8
Snapshots: 0                           Bind:
Type: not-replicated                   ReplicationReserveSpace: 0MB
Replicas: 0                            ReplicationPartner:
Pool: default                          Transmitted-Data: 1015.44GB
Received-Data: 3.15TB                  Pref-Raid-Policy: none
Pref-Raid-Policy-Status: none          Thin-Provision: disabled
Thin-Min-Reserve: 0% (0MB)             Thin-Growth-Warn: 0% (0MB)
Thin-Growth-Max: 0% (0MB)              ReplicationTxData: 0MB
MultiHostAccess: enabled               iSNS-Discovery: disabled
Replica-Volume-Reserve: 0MB            Thin-Clone: N
Template: N                            NAS File System: N
Administrator:                         Thin-Warn-Mode: offline
_______________________________________________________________________________

Show Existing Configuration for the Second Volume

Check the Snap-Reserve, Snap-Reserve-Avail, and Snapshots fields.

SANGroup00> volume select Volume01
SANGroup00(volume_Volume01)> show
_____________________________ Volume Information ______________________________
Name: Volume01                        Size: 1TB
VolReserve: 1TB                        VolReserveInUse: 875.07GB
ReplReserveInUse: 0MB                  iSCSI Alias: TestDS
iSCSI Name:                            ActualMembers: 1
iqn.2001-05.com.equallogic:8-cb2b76- Snap-Warn: 10%
8cecd9059-2b20000000b4ec1a-volume01  Snap-Depletion: delete-oldest
Description:                           Snap-Reserve: 100%
Snap-Reserve-Avail: 100% (1TB)         Permission: read-write
DesiredStatus: online                  Status: online
Connections: 8                         Snapshots: 0
Bind:                                  Type: not-replicated
ReplicationReserveSpace: 0MB           Replicas: 0
ReplicationPartner:                    Pool: default
Transmitted-Data: 5.46TB               Received-Data: 7.77TB
Pref-Raid-Policy: none                 Pref-Raid-Policy-Status: none
Thin-Provision: disabled               Thin-Min-Reserve: 0% (0MB)
Thin-Growth-Warn: 0% (0MB)             Thin-Growth-Max: 0% (0MB)
ReplicationTxData: 0MB                 MultiHostAccess: enabled
iSNS-Discovery: enabled                Replica-Volume-Reserve: 0MB
Thin-Clone: N                          Template: N
NAS File System: N                     Administrator:
Thin-Warn-Mode: offline
_______________________________________________________________________________

Modify the Existing Snap-Reserve for Second Volume

Again, we are simply removing the snap-reserve.

SANGroup00(volume_Volume01)> snap-reserve 0%

Review Configuration of Second Volume

Sanity check our changes to the second volume.

SANGroup00(volume_Volume01)> show
_____________________________ Volume Information ______________________________
Name: Volume01                        Size: 1TB
VolReserve: 1TB                        VolReserveInUse: 875.07GB
ReplReserveInUse: 0MB                  iSCSI Alias: TestDS
iSCSI Name:                            ActualMembers: 1
iqn.2001-05.com.equallogic:8-cb2b76- Snap-Warn: 10%
8cecd9059-2b20000000b4ec1a-volume01  Snap-Depletion: delete-oldest
Description:                           Snap-Reserve: 0%
Snap-Reserve-Avail: 0% (0MB)           Permission: read-write
DesiredStatus: online                  Status: online
Connections: 8                         Snapshots: 0
Bind:                                  Type: not-replicated
ReplicationReserveSpace: 0MB           Replicas: 0
ReplicationPartner:                    Pool: default
Transmitted-Data: 5.46TB               Received-Data: 7.77TB
Pref-Raid-Policy: none                 Pref-Raid-Policy-Status: none
Thin-Provision: disabled               Thin-Min-Reserve: 0% (0MB)
Thin-Growth-Warn: 0% (0MB)             Thin-Growth-Max: 0% (0MB)
ReplicationTxData: 0MB                 MultiHostAccess: enabled
iSNS-Discovery: enabled                Replica-Volume-Reserve: 0MB
Thin-Clone: N                          Template: N
NAS File System: N                     Administrator:
Thin-Warn-Mode: offline
_______________________________________________________________________________

Examine the Pool and the New Free Space and Capacity

Now both commands report a significantly different FreeSpace figure, and the two should match if you disabled the snap-reserve on every volume.

SANGroup00> pool show
Name                 Default Members Volumes Capacity   FreeSpace
-------------------- ------- ------- ------- ---------- ----------
default              true    1       2       4.35TB     2.35TB

SANGroup00> member show -poolinfo
Name       Status  Version    Disks Capacity   FreeSpace  Connections Pool
---------- ------- ---------- ----- ---------- ---------- ----------- -------
EQL00      online  V5.1.1 (R1 12    4.35TB     2.35TB     24          default
85010)  
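
As a quick sanity check (plain Python, approximate), the old pool FreeSpace plus the two released 1TB reserves should land right around the new figure:

old_pool_free_gb  = 363.22       # "pool show" FreeSpace before the change
released_reserves = 2 * 1024     # the two 1TB snap-reserves we just removed
print(round((old_pool_free_gb + released_reserves) / 1024, 2), "TB")   # ~2.35 TB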

Configure the Default Snap-Reserve

Of course, if you do not already have a bunch of volumes on which to make this change, you can simply change the default behavior; all subsequently created volumes will then get whatever snap-reserve you have picked.

SANGroup00> grpparams def-snap-reserve 0%
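
For a rough idea of what the default means for capacity planning, here is a short Python illustration (the volume sizes are hypothetical, not taken from this array):

def total_consumption_gb(volume_sizes_gb, default_snap_reserve_pct):
    # Pool space consumed by a set of planned non-thin volumes.
    factor = 1 + default_snap_reserve_pct / 100.0
    return sum(size * factor for size in volume_sizes_gb)

planned = [1024, 1024, 512]               # three hypothetical volumes
print(total_consumption_gb(planned, 100)) # 5120.0 GB with the old 100% default
print(total_consumption_gb(planned, 0))   # 2560.0 GB with def-snap-reserve 0%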