How to provision compressed arrays

Hello friends! I hope all is well. This week I want to take a brief hiatus from performance and software to bring up a topic that is becoming increasingly important as people move to the Storwize Gen3 and FlashSystem 9100 hardware. These platforms premiered support for IBM FlashCore Modules (FCMs). Just like the FS900 AE3, these flash modules perform hardware compression at the drive level. That makes them economical and exciting, because you get a lot of extra flash capacity for your money. However, it is vitally important that you think before you provision. Let's start by taking a look at how capacity appears in the GUI dashboard:
Here we see the system has 21.42TB of physical capacity. However, if we go to create a volume, we can create one of up to 99TB:

This is where things get dangerous. If we check the lsarray output for the RAID set we see:

IBM_FlashSystem:fab3:superuser>lsarray 0 | grep capacity
capacity 99.1TB
physical_capacity 21.42TB
physical_free_capacity 21.42TB
allocated_capacity 2.04TB
effective_used_capacity 365.00MB

This 99.1TB assumes a compression ratio of better than 4:1 (99.1TB usable on 21.42TB physical is roughly 4.6:1), which is not realistic for most systems with real data on them. It means that if you present that 99TB to hosts and only achieve a 2:1 compression ratio on the data, you will run out of physical space after roughly 43TB of host writes. When that happens, the array goes into write-protected mode and your hosts lose access to the storage. This is why it is vitally important to plan before you provision, using the Comprestimator, and to over-provision only within the expected compression ratio. It is also very important to monitor the system to make sure that what was configured and expected is actually being achieved.
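The capacity math above can be sketched in a few lines. This is just illustrative arithmetic using the `lsarray` numbers from this example; the function name is mine, not an IBM tool or API.

```python
# Numbers taken from the lsarray output above.
PHYSICAL_TB = 21.42     # physical_capacity
PROVISIONED_TB = 99.1   # capacity (assumes ~4.6:1 compression)

def host_writes_until_full(physical_tb: float, compression_ratio: float) -> float:
    """Host data (TB) that can land on the array before physical space is exhausted."""
    return physical_tb * compression_ratio

# At the ~4.6:1 ratio the array advertises, all 99.1TB of host writes would fit:
print(round(host_writes_until_full(PHYSICAL_TB, PROVISIONED_TB / PHYSICAL_TB), 1))  # -> 99.1
# At a more typical 2:1 ratio, the array fills after ~42.8TB of host writes:
print(round(host_writes_until_full(PHYSICAL_TB, 2.0), 2))  # -> 42.84
```

In other words, the safe amount to provision is the physical capacity multiplied by the compression ratio you actually expect, not the ratio the array optimistically advertises.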

If you plan to provision one of these FCM arrays behind a SVC or some other storage virtualization engine, it is important to consider whether the virtualization engine will be doing compression of its own. If you compress at the SVC layer, the data arriving at the FCM layer is already compressed, so the FCMs will only achieve about 1:1 compression. In that case, you should provision only about 80% of the physical capacity of the FCM array to the SVC in order to maintain the performance of the flash.
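As a quick sketch of that guideline, using the example array's numbers (the 80% headroom figure comes from the paragraph above; the function name is illustrative):

```python
def capacity_behind_svc(physical_tb: float, headroom: float = 0.80) -> float:
    """With SVC-side compression, FCMs see ~1:1 data, so present only ~80% of physical."""
    return physical_tb * headroom

# For the 21.42TB example array, present roughly 17.14TB to the SVC:
print(round(capacity_behind_svc(21.42), 2))  # -> 17.14
```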

EDIT: After some conversation with friends, I realized I neglected to discuss one configuration scenario. If you are NOT going to compress/dedupe at the SVC (or virtualization) layer, it is a general best practice to create a LUN of about 5-10% of the physical capacity of the array and fill it with data that compresses and deduplicates at a 1:1 ratio. That way, in the event you do run out of physical space on the array, you have a quick recovery plan: delete this LUN (which has 1:1 savings) and immediately regain 5-10% of physical free capacity. After creating this sacrificial LUN, provision the remainder of the storage with a compression ratio in line with your solution sizing (Comprestimator, Capacity Magic, etc.)
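Sizing that sacrificial LUN is simple arithmetic; here is a hedged sketch for the example array (the 5-10% range is the guideline from the paragraph above, and the function name is mine):

```python
def sacrificial_lun_tb(physical_tb: float, fraction: float = 0.05) -> float:
    """Size of a fully-allocated, 1:1-compressible LUN reserved for out-of-space recovery."""
    if not 0.05 <= fraction <= 0.10:
        raise ValueError("guideline is 5-10% of the array's physical capacity")
    return physical_tb * fraction

# On the 21.42TB example array, reserve roughly 1.07TB to 2.14TB:
print(round(sacrificial_lun_tb(21.42, 0.05), 2))  # -> 1.07
print(round(sacrificial_lun_tb(21.42, 0.10), 2))  # -> 2.14
```

The key detail is filling it with incompressible, non-deduplicating data: a volume the FCMs can shrink would not actually pin down physical capacity, so deleting it would free nothing.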

Before you run out of space on an FCM array (on Spectrum Virtualize powered systems), the system will generate alerts notifying you that you are running low on space. Event ID 20009 (error code 1246) warns when an array has only 10% of its physical capacity left free. Event ID 20010 (error code 1246) warns when only 4% is left free. Event ID 20011 (error code 1242) is raised when only 1% of the physical space is left free. In addition to the risk of running completely out of space, I would expect a sharp decline in array performance for all the reasons I laid out in my first blog post.
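To make those thresholds concrete, here is a small sketch mapping them to absolute free capacity on the 21.42TB example array. The event IDs and error codes come from the paragraph above; the code itself is just illustrative arithmetic, not a monitoring API:

```python
# (free-fraction threshold, event ID, error code) from the alerts described above.
THRESHOLDS = [
    (0.10, 20009, 1246),  # 10% physical free: first warning
    (0.04, 20010, 1246),  #  4% physical free: second warning
    (0.01, 20011, 1242),  #  1% physical free: error
]

def free_tb_at_alerts(physical_tb: float) -> dict[int, float]:
    """Absolute physical free capacity (TB) remaining when each event fires."""
    return {event: round(physical_tb * frac, 2) for frac, event, _code in THRESHOLDS}

print(free_tb_at_alerts(21.42))  # -> {20009: 2.14, 20010: 0.86, 20011: 0.21}
```

On an array this small, the final 1% warning leaves only about 0.21TB of runway, so treating the 10% event as the action point is the safer habit.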

As always, if you have any questions or concerns please feel free to comment, follow me on Twitter @fincherjc, or on LinkedIn.
