Adventures in ZFS: Storage Sizing

One of the most complex parts of storage in general, and ZFS in particular, is correctly assessing the amount and types of storage that you will need to meet your requirements.  You can of course just purchase what you can afford and hope for the best.  But after reading this article you should be able to determine, relatively easily, approximately how much storage you should plan for based on a few factors from your existing environment.  This article will primarily focus on ZFS; however, a lot of this holds true on other storage platforms as well.

Factor One – Use Case

The use case is the single most important factor; it basically tells you how you plan on using the data.  Is this going to serve files (NFS or CIFS)?  Will you be using it to serve block devices (Fibre Channel or iSCSI)?  Or perhaps you will be using it as a consolidation point for multiple sources of data in a backup scenario.  The answer to this question will largely determine whether you use a ZIL or cache device, and whether you use spares.  Those devices of course steal slots (where data disks could go), so you will want to factor that into your larger capacity planning.

Physical Capacity

Obviously the machine that you buy will have a limit to how many disks you can fit in it; after that you can plan on adding expansion chassis.  You need to understand how your zpools will be laid out and the types of disks you will be using so that you can begin to see how much data you will be able to squeeze onto the spindles.  One easy thing to overlook is the need for drive bays for other purposes (spares, log, cache, system drives, etc.).  If you plan on expanding with an additional chassis, will you use up all of your expansion slots with various cards (10Gb, Fibre Channel, NICs, SSD, etc.)?  Also, when considering disks, keep in mind that capacity is not the only metric; your workload will determine how performant the solution must be to meet requirements.  For example, a backup server will not require the same performance as a storage head which is handing out block devices with operating systems installed on them (read: virtual machine environments).
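
To get a feel for how layout choices eat into raw capacity, here is a rough Python sketch.  The chassis size, vdev layout and drive size in the example are made-up inputs, and the calculation ignores ZFS metadata, padding and reserved space.

def usable_capacity(num_vdevs, disks_per_vdev, vdev_type, drive_size):
    # Approximate usable space; ignores metadata, padding and reserved space.
    parity_disks = {"raidz1": 1, "raidz2": 2, "raidz3": 3}
    if vdev_type == "mirror":
        data_disks = 1  # each mirror vdev yields one drive's worth of usable space
    else:
        data_disks = disks_per_vdev - parity_disks[vdev_type]
    return num_vdevs * data_disks * drive_size

# Example: a hypothetical 24-bay chassis with 2 system disks, 1 spare and
# 1 cache SSD leaves 20 bays, i.e. two 10-disk raidz2 vdevs of 2 TB drives.
print(usable_capacity(num_vdevs=2, disks_per_vdev=10, vdev_type="raidz2", drive_size=2.0))  # ~32 (TB)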

Spare Devices

Depending on your use case you might need to consider allocating some drives as spares, to buy you the time to procure and/or replace failed drives.  This simply protects you from additional failures.  If I have the space, I like to have one spare per 12-15 data drives.  This is really straightforward; just make sure that, as part of any large system, the company accepts the risks involved with your decision.  If they are not willing to accept the risks, then provide them with an option which protects them from that risk; they will then either pay for the lower-risk system or accept the risk of the cheaper one.

Cache Devices

Cache devices (L2ARC) are a really quick and cheap way to increase your read IOPS on a system-wide basis.  As data is read off of ZFS it passes through the cache device, so subsequent read requests for that data can be served directly from the cache; this works in combination with the ARC, which is where all of your memory goes in ZFS.  A cache device becomes really necessary if you are using deduplication.  When you enable deduplication, all subsequently written blocks are indexed into the dedup table; before a write can be committed it must be compared to entries already in the table (to see if anything else is identical, and thus whether it should actually be written to disk or simply referenced with a pointer to the existing data).  Initially this will not pose a problem, as your dedup table will be relatively small and can be stored in the 25% of the ARC that it is relegated to.  However, at some point you will run out of space in the ARC, and if you do not have an L2ARC your dedup table will be swapped out, which of course means it will be holding up writes to disk; in other words, it will be slow.  Since a cache device’s purpose is to speed up reads, you want to look for an SSD which is slanted towards better read performance.  I like the Crucial M4 for this purpose, but please keep in mind it had a serious firmware bug in a previous version, so make sure you end up with v0309 on the disk and update to it if you don’t.
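
If you want to ballpark how large that dedup table can get, here is a rough Python sketch.  The figure of roughly 320 bytes per unique block is a commonly cited rule of thumb rather than an exact number, and the pool size and record size in the example are made-up inputs.

def ddt_ram_estimate(logical_bytes, avg_block_bytes=128 * 1024, bytes_per_entry=320):
    # Worst case: every block is unique, so one dedup table entry per block.
    unique_blocks = logical_bytes / avg_block_bytes
    return unique_blocks * bytes_per_entry

tib = 1024 ** 4
gib = 1024 ** 3
# Example: 10 TiB of deduplicated data at the default 128 KiB recordsize.
print(ddt_ram_estimate(10 * tib) / gib, "GiB")  # ~25 GiB of ARC/L2ARC needed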

Log Devices

Log devices (for the ZFS Intent Log) are a really cool way to speed up technologies which rely on synchronous writes.  Synchronous writes are used to ensure that data is committed to stable storage before additional data is sent; they essentially serialize the connection.  Most commonly you will find synchronous writes with databases, NFS and iSCSI.  So if you are planning on using one of these technologies you will want to consider using a mirrored ZIL device.  Now I am sure someone out there is saying, “Whoa, mirrored?  But that takes up two slots man…” and you would be correct.  However, keep in mind that if your ZIL fails, it may hold data which has been acknowledged but not yet committed to the pool (read: data loss).  As such you really ought to have a mirrored ZIL or none at all; you can also stripe multiple mirrors as a ZIL if you have a heavier synchronous write workload.  Since log devices are meant to improve writes (albeit a certain type of write), you will want to look for an SSD which is slanted towards better write performance.  I tend to like the OCZ Vertex 3 for this purpose.

Factor Two – Data Growth

Now we get into actually sizing data.  When you try to project growth you need to make sure that (1) you use an appropriately sized sample set of data and (2) you factor in any relevant business or technical factors which could have skewed your sample.  I like to size my storage based off of backup size: simply grab the size of your full backups for the data in question.  If you are going to use this to serve block devices, collecting this data becomes a bit more complex, because now you are talking about more of a machine-sprawl type of growth, which can be harder to account for.  Here are a few formulas to get you started (make sure you are using the same unit throughout, e.g. B, KB, MB, GB, TB).

Calculate Weekly Growth

(full_week1 - full_week0) = growth_week1
(full_week2 - full_week1) = growth_week2

Calculate Weekly Growth Rate

(growth_week1 / full_week0) = growth_rate_week1

Average Weekly Growth

(growth_week1 + growth_week2 + growth_week3 + growth_week4) / number_of_weeks = average_weekly_growth

Average Weekly Growth Rate

(growth_rate_week1 + growth_rate_week2 + growth_rate_week3 + growth_rate_week4) / number_of_weeks = average_weekly_growth_rate

Annualized Weekly Growth

(average_weekly_growth * 53) = annual_growth
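
If you would rather script this than work it through by hand, here is a minimal Python sketch of the formulas above.  The function names are my own, and it assumes you have your weekly full-backup sizes in a simple list (all in the same unit).

def weekly_growth(fulls):
    # growth_weekN = full_weekN - full_week(N-1)
    return [fulls[i] - fulls[i - 1] for i in range(1, len(fulls))]

def weekly_growth_rate(fulls):
    # growth_rate_weekN = growth_weekN / full_week(N-1)
    return [(fulls[i] - fulls[i - 1]) / fulls[i - 1] for i in range(1, len(fulls))]

def average(values):
    return sum(values) / len(values)

def annualized(average_weekly, weeks_per_year=53):
    return average_weekly * weeks_per_year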

So, for example, suppose we start with the following data…

week0 100
week1 105
week2 120
week3 122
week4 122
week5 118 (data expired)
week6 130 (added new servers)
week7 135
week8 136

Notice that in this example there are a couple of points where I saw fit to make notes: one where our dataset decreased, and one where it grew more sharply than the trend up to that point.  I made these notes to point out that if you see a very large deviation, you might want to make some sort of adjustment so as not to skew your results.  In this case both of these happened in the regular course of business, so they should legitimately be considered part of the growth curve.  However, if for example you had a failure in your backups and you were already accounting for that data in another way, you would not want to double count it if it came back into the backups in the middle of your sample.

Weekly Growth

week1 (105 - 100) = 5
week2 (120 - 105) = 15
week3 (122 - 120) = 2
week4 (122 - 122) = 0
week5 (118 - 122) = -4
week6 (130 - 118) = 12
week7 (135 - 130) = 5
week8 (136 - 135) = 1

Weekly Growth Rate

week1 (5 / 100) = .05 = 5%
week2 (15 / 105) = .143 = 14.3%
week3 (2 / 120) = .017 = 1.7%
week4 (0 / 122) = 0 = 0%
week5 (-4 / 122) = -.033 = -3.3%
week6 (12 / 118) = .102 = 10.2%
week7 (5 / 130) = .039 = 3.9%
week8 (1 / 135) = .007 = .7%

Average Weekly Growth

(5 + 15 + 2 + 0 + -4 + 12 + 5 + 1) / 8 = 4.5

Average Weekly Growth Rate

(5 + 14.3 + 1.7 + 0 + -3.3 + 10.2 + 3.9 + .7) / 8 = 4.1%

Annualized Growth

(4.5 * 53) = 238.5
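
Feeding the example data into the helper functions sketched earlier reproduces the same numbers:

fulls = [100, 105, 120, 122, 122, 118, 130, 135, 136]  # week0 through week8

growth = weekly_growth(fulls)       # [5, 15, 2, 0, -4, 12, 5, 1]
rates = weekly_growth_rate(fulls)   # weekly growth as fractions (0.05, 0.143, ...)

print(average(growth))                  # 4.5
print(round(average(rates) * 100, 1))   # 4.1 (%)
print(annualized(average(growth)))      # 238.5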

Now, as you can see from this example, the numbers add up quickly; in a year we have easily doubled our dataset.  Why is this important?  Most businesses perform budgeting at the beginning of the year, so expenses need to be planned out, and any expansion you do perform will need to last _at least_ through the year.  Either way, you at least need to know how long you can reasonably expect this hardware to serve its purpose before having to look for upgrades.  Your project will generally be regarded as successful if it meets the technical requirements with minimal involvement from the business (having to ask for more money), especially if you can tell them up front that the storage will need to be expanded in x number of months.

Factor Three – Data Churn

Churn is freaking scary when it comes to calculating capacity requirements.  Churn is the amount of data that changes in a given week; some of that churn is actually growth, so we will want to keep that in mind when we perform our analysis.  It is amazing the levels of churn that some companies have, and this is especially disconcerting if you plan on utilizing snapshots or send/receive as a form of backup.  We still use backups to calculate churn, but instead of the fulls we use our mid-week backups: if you are using incremental backups, total all of them for the week to get your weekly churn; if you are using differentials, use the final differential of the week.

Now the tricky thing about churn is that growth is churn, but churn is not growth.  So after we calculate our growth and our churn we subtract our growth from our churn to get our actual churn; otherwise we would be double counting our growth.  I don’t bother doing this until we are talking about the averages and the annualized numbers.  You will notice that these formulas are largely the same as the growth formulas; just remember here that you need to calculate them against your aggregated incrementals or your final differential to get valid numbers.

Calculate Weekly Churn (differentials)

diff_week1 = churn_week1

Calculate Weekly Churn (incrementals)

(inc1_week1 + inc2_week1 + inc3_week1 + inc4_week1 + inc5_week1) = churn_week1

Calculate Weekly Churn Rate

(churn_week1 / full_week0) = churn_rate_week1

Average Weekly Churn

(churn_week1 + churn_week2 + churn_week3 + churn_week4) / number_of_weeks = average_weekly_churn

Average Weekly Churn Rate

(churn_rate_week1 + churn_rate_week2 + churn_rate_week3 + churn_rate_week4) / number_of_weeks = average_weekly_churn_rate

Annualized Weekly Churn

((average_weekly_churn - average_weekly_growth) * 53) = annual_churn
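
As before, here is a minimal Python sketch of the churn formulas.  The function names are my own, and the average and annualized helpers from the growth sketch are reused.

def weekly_churn_from_incrementals(incrementals):
    # churn_weekN = total of that week's incremental backups
    return sum(incrementals)

def weekly_churn_from_differential(final_differential):
    # churn_weekN = size of the last differential taken that week
    return final_differential

def weekly_churn_rate(churn, previous_full):
    # churn_rate_weekN = churn_weekN / full_week(N-1)
    return churn / previous_full

def growth_adjusted(average_weekly_churn, average_weekly_growth):
    # Churn includes growth, so strip it out to avoid double counting.
    return average_weekly_churn - average_weekly_growth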

Expanding on our previous example, suppose our weekly churn looked like this…

week1 8
week2 6
week3 5
week4 2
week5 2
week6 15
week7 6
week8 1

Weekly Churn Rate

week1 (8 / 100) = .08 = 8%
week2 (6 / 105) = .057 = 5.7%
week3 (5 / 120) = .042 = 4.2%
week4 (2 / 122) = .016 = 1.6%
week5 (2 / 122) = .016 = 1.6%
week6 (15 / 118) = .127 = 12.7%
week7 (6 / 130) = .046 = 4.6%
week8 (1 / 135) = .007 = .7%

Average Weekly Churn

(8 + 6 + 5 + 2 + 2 + 15 + 6 + 1) / 8 = 5.6

Now this is only partly correct; we still need to account for the growth we have already included by subtracting the average weekly growth from our average weekly churn.

(5.6 - 4.5) = 1.1

Giving us a growth-adjusted weekly average churn of 1.1.

Average Weekly Churn Rate

(8 + 5.7 + 4.2 + 1.6 + 1.6 + 12.7 + 4.6 + .7) / 8 = 4.9%

Again we must remove our growth, this time using the average weekly growth rate (4.1%) rather than the absolute figure.

(4.9 - 4.1) = 0.8

Giving us a growth-adjusted average weekly churn rate of 0.8%.

Annualized Churn

(1.1 * 53) = 58.3
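
Using the helpers sketched earlier, the churn side of the example works out the same way (rounding the average to 5.6 first, as in the text):

churn = [8, 6, 5, 2, 2, 15, 6, 1]   # week1 through week8

avg_churn = average(churn)             # 5.625, which rounds to 5.6
adjusted = growth_adjusted(5.6, 4.5)   # 1.1 growth-adjusted average weekly churn
print(round(annualized(adjusted), 1))  # 58.3 annualized churn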

Now that we have calculated our storage needs we can start to work out our final configuration based on the goals of our project.  In our example case we have a current dataset of 136 and an annualized churn of 58.3, which works out to roughly 43% of the current dataset per year.  If we had a goal of engineering a system which would not need any upgrades (based on current use cases) in the first two years, then we would need a minimum size of 252.6 (136 + 2 x 58.3).

Now one final consideration when sizing ZFS is pool capacity.  ZFS uses copy-on-write, which turns random writes into mostly sequential writes; this is very good for performance.  However, once a pool passes roughly 80% utilization ZFS can no longer do this as well, and your writes become semi-random (parts of files have to be written to whatever non-contiguous free blocks happen to be available).  So we should also size the pool so that our projected data stays at or below 80% of capacity, which means dividing our projection by 0.8 to ensure the same level of performance throughout the system's life.  With that, we have a final number of roughly 315.8.
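
Pulling it together, here is a small Python sketch of that final sizing step.  The function name is my own; the inputs are simply the figures from this article's example, and the 0.8 reflects the 80% utilization ceiling discussed above.

def required_pool_size(current, annual_increase, years, max_utilization=0.8):
    # Project the dataset forward, then keep it at or below 80% of the pool.
    projected = current + annual_increase * years
    return projected / max_utilization

print(round(required_pool_size(136, 58.3, 2), 1))  # ~315.8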

Now please keep in mind, these formulas will allow you to work through your own sizing exercise, but I am not suggesting that you will see an average weekly growth of 4.5 and a churn of 1.1; these were merely example numbers, and you will need to use your own figures to come up with projections applicable to your scenario.  You will also notice that in my example I did not use a size unit (MB, GB, TB, PB); I did this intentionally so as not to confuse you.  These formulas work regardless of the unit, just make sure you use the same unit in all of your calculations.

Happy Sizing!

 

One thought on “Adventures in ZFS: Storage Sizing”

  1. Miro

    Pretty nice article. I would only add that you also need to factor in a percentage for things that could affect storage growth.

    One needs to keep an eye on what drives the storage usage. Is it tons of PDF/DOC/Excel files from Accounting, images from Marketing, or other large data sets from the Engineering department? Where is that data coming from?

    For example: in a company that does digital radio wave imaging, increasing the number of receiving elements from 1 to 64 could mean a 64x increase in storage needed by your Engineering department. This probably doesn’t happen every year so it won’t be on your plan. How do you factor that in?

    You have to be close to the business to understand its future needs. If there will be some massive marketing campaign that could bring in a lot of new customers and knock your calculations off, you need to be aware of it and factor it in as best you can.