Array Performance and Data Protection
SQL Performance Policy

Occasional Contributor

SQL Performance Policy

I've been reading all I can find regarding the performance policies for SQL server.

I've had a hard time finding information specific to the data, log, and tempdb performance profiles, although I believe previous versions of Nimble shipped with those.

Assuming in-guest iSCSI...

SQL Server 2014

Windows 2012 R2

vSphere 5.5

Block Size

It seems that for VMware and SQL Server, the NTFS allocation unit size and the Nimble block size should both be 8K on the data and log LUNs, but I'm fairly certain I can go with 64K for tempdb on both. Is that right?

Then comes caching.

Generally speaking, for SQL you want to disable caching all the way through the stack (controllers and so on) because it is such a nutty, unpredictable beast, but of course the whole point of Nimble is to cache things ON the array...

Does the "cache" check box on the performance policy refer to the SAN or some intermediate point in between? 


Then there's compression. When is the data compressed? Does it affect reads?

I don't see much practical point of compressing SQL t-log files.

Perhaps tempdb shouldn't be compressed either, since we aren't storing its data long-term anyway and there would be no real benefit to compressing it?

HPE Blogger

Re: SQL Performance Policy

Hello Amanda!

Nimble Storage arrays do come with pre-configured performance policies, and you also have the flexibility to create more to suit your deployment. For MSSQL 2012 and later, the default policies use an 8K block size for database files and 4K for log files.

For the NTFS side, Microsoft recommends formatting both database and log volumes with a 64K cluster size (use the /L option during format), and tempdb is no different in this regard. From a Nimble performance policy perspective, setting the block size to 8K for databases (including tempdb) and 4K for logs is a pretty safe setup for OLTP databases. Depending on the type of database you are running and its performance characteristics, it may make sense to create a custom performance policy; for a DSS system, for example, a custom policy with a 16K or even 32K block size is worth considering. Either way, the NTFS cluster size remains 64K.
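For reference, a minimal sketch of that format step from an elevated command prompt. The drive letter and volume label here are examples only; substitute your own:

```shell
:: Format a SQL data/log/tempdb volume with a 64K NTFS allocation
:: unit (/A:64K) and large file record segments (/L), per the
:: recommendation above. E: and the SQLDATA label are examples.
format E: /FS:NTFS /A:64K /L /V:SQLDATA /Q
```

After formatting, `fsutil fsinfo ntfsinfo E:` will show "Bytes Per Cluster: 65536" if the allocation unit took effect.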

When you are looking at a performance policy, the caching checkbox determines whether SSD caching is used for the volume; this is array-side caching for read workloads. While there are workloads that read back transaction log data (e.g., log backups, database recovery, or some forms of replication), for the most part the transaction log is not read back. That is why the default performance policy for transaction logs has caching disabled: it avoids cache pollution.

Compression is inline and automatically enabled on the default performance policies, and I recommend keeping it enabled. The cost of compression is paid in CPU, and our CASL file system was designed to work well with compression. With SQL in particular, we average 2.x or better compression savings across databases (and that is a real number, based on our InfoSight analytics). However, you can test your own custom performance policies to see what works best in your environment, and we offer direct insight into how much data is being compressed per volume.
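To put a number like that in perspective, here is a rough back-of-the-envelope sketch of what a roughly 2.1x inline compression ratio would mean for physical space. The sizes and the exact ratio are hypothetical examples, not figures from the thread or a guarantee for any database:

```python
# Hypothetical example: how a ~2.1x compression ratio translates
# into physical space consumed and percentage savings.
logical_gb = 1000          # example logical size of the database volume
compression_ratio = 2.1    # example ratio as the array might report it

physical_gb = logical_gb / compression_ratio
savings_pct = (1 - physical_gb / logical_gb) * 100

print(f"physical: {physical_gb:.0f} GB, savings: {savings_pct:.0f}%")
```

So a 1 TB logical database at that ratio would occupy roughly 476 GB on disk, a savings of about 52%.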

I hope this helps!


Occasional Contributor

Re: SQL Performance Policy

Why the 4k for performance policy for log files, out of curiosity?

I found the "Block alignment for best performance on Nimble Storage (20130326-A)" and I'm wondering if it is up to date, or if there is a newer version?