
SQL 2008 on VM Server 2008 setup for Data Disk/Log Disk/Temp Disk

 
SOLVED
Sean Patterson_1
Occasional Advisor

SQL 2008 on VM Server 2008 setup for Data Disk/Log Disk/Temp Disk

Currently have it like this:

VM Server 2008 running SQL 2008 (non R2, that's another discussion)

Nimble CS220

Volumes:

SQLDB01 - Data drive

SQLLOG01 - Log Drive

SQLTEMPDB01 - Temp Drive

Using the MS iSCSI initiator, I am connected to all 3 volumes and have assigned the drives as D: (Data), L: (Logs), and T: (Temp DB). All are formatted as NTFS.

My question is: should I drop this config and switch to VMFS drives, each in separate volumes on the CS220? What would be best practice? The reason for the current setup was ease of migration from physical to VM, and utilizing the Nimble unit for snapshots as an additional means of backup for each volume. Anyone have input on this or a better way?
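For reference, here is a rough PowerShell sketch of the guest-side connect-and-format steps described above (assuming the Server 2012+ iSCSI/Storage cmdlets; on Server 2008 the same steps go through the iSCSI Initiator control panel and Disk Management; the discovery IP and target IQN are placeholders):

# Register the Nimble discovery IP (placeholder) and list the targets
New-IscsiTargetPortal -TargetPortalAddress "10.0.50.10"
Get-IscsiTarget

# Connect persistently with multipath enabled (placeholder IQN)
Connect-IscsiTarget -NodeAddress "iqn.2007-11.com.nimblestorage:sqldb01" `
    -IsPersistent $true -IsMultipathEnabled $true

# Bring the new disk online and format it; D: is the data volume here,
# with a 64K cluster size per the block alignment advice later in the thread
Get-Disk | Where-Object PartitionStyle -eq 'RAW' |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -DriveLetter D -UseMaximumSize |
    Format-Volume -FileSystem NTFS -AllocationUnitSize 65536 -NewFileSystemLabel "SQLDB01"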

13 REPLIES
alex_goltz
Advisor
Solution

Re: SQL 2008 on VM Server 2008 setup for Data Disk/Log Disk/Temp Disk

Hi Sean,

From personal experience, I became a huge advocate for iSCSI Initiator volumes from the guest when we started hitting speeds over 10Gbit that we hadn't thought realistic. (Nimble support does a great job of helping, if need be.)
If this SQL server is going to get any sort of serious I/O in its lifetime, OR the data volume is large (e.g., 3 TB), I would highly suggest iSCSI Initiator connections. You'll appreciate the compression levels as well.

One document that you will want to reference during disk drive configuration is this one: Proper Block Alignment.pdf

Things that I have found to really screw up a VM config: wrong NTFS cluster sizes on VM disks, the wrong MPIO settings for iSCSI, or not having jumbo frames configured correctly on all the hops between the VM guest and the SAN.
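Here's a quick, rough sketch of how you might check all three from the guest (assuming Server 2012+ cmdlets; on older hosts use fsutil fsinfo ntfsinfo, mpclaim.exe, and netsh instead; the ping target is a placeholder for your array's data IP):

# 1. NTFS cluster (allocation unit) size per volume; 65536 is the usual target
Get-Volume | Select-Object DriveLetter, FileSystemLabel, AllocationUnitSize

# 2. MPIO load-balance policy currently applied to each disk
mpclaim.exe -s -d

# 3. Jumbo frames: NIC setting, then an end-to-end test. 8972 bytes of ICMP
#    payload + 28 bytes of headers = one 9000-byte frame; -f forbids
#    fragmentation, so a reply proves every hop passes jumbo frames.
Get-NetAdapterAdvancedProperty -RegistryKeyword "*JumboPacket"
ping.exe 10.0.50.10 -f -l 8972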

VMFS will work fine if the data drive is small (e.g., 100 GB) and doesn't do a lot of heavy I/O.

However, if I/O is high on a small data volume, VMware may not be able to quiesce properly, in which case you may have to switch to iSCSI anyway, thereby gaining VSS snapshots.

Also, if you choose to do iSCSI Initiator volumes, VSS snapshot restores are easy: http://www.youtube.com/watch?v=kPs4VqygMMI


I can provide more insight on this topic if need be.


Also, a big "Thank You" to Nimble Support for their patience and information on this.

Sean Patterson_1
Occasional Advisor

Re: SQL 2008 on VM Server 2008 setup for Data Disk/Log Disk/Temp Disk

Alex,

As you are showing, our existing setup is correct then. I have 3 separate volumes, all 1 TB in size, each connected by iSCSI initiator from the SQL server. All three are formatted as NTFS. I will be sure to change the block size according to the document you linked.

My concern may be our pathing from ESX to the Nimble. The paths are set with 2 separate isolated switches (1 hop), and I haven't touched the jumbo frames settings; I will look into that. I think a problem I may have is related to the connections, each being on a separate subnet and configured for redundancy.

You have answered my first question, which was whether my suggested/initial iSCSI setup was better than what I have been hearing about VMFS. I will continue with the iSCSI-initiated connections, as I really like the snapshots Nimble is doing over other forms of backup, and at the SAN level I can connect a volume to another machine (physical or virtual) with no problems. I would love to hear how I should configure MPIO for this network setup if you have time. Thanks for all the help so far.

benwatson87
Valued Contributor

Re: SQL 2008 on VM Server 2008 setup for Data Disk/Log Disk/Temp Disk

Hello Sean,

I'd set up a SQL box similarly to how you have it ... despite the non-R2-ness. At least you have the right storage in place ;-)

So, three distinct volumes for the SQL elements. You could quite happily have the equivalent C:\ as a VMDK, with the data drives as RDMs or iSCSI-connected via the software initiator. My preference would be the latter, because we can have specific SQL performance policies for the SQL volumes whilst having the C:\ VMDK sitting on a volume with a VMFS 4/5 performance policy. I think it's most suitable to have it this way around.

Furthermore (and Alex Goltz mentioned both points), if your SQL server is likely to be hit hard in terms of I/O, that'd be another reason to opt for SQL-specific volumes connected via a software initiator so the vmkernel doesn't have to direct and parse the traffic through the VMFS layer. The vCenter-integrated snapshots will struggle and may get stuck if there's any notable I/O to the server (VMware snapshots aren't a patch on Nimble snaps), so SQL VSS is a better option.

In my experience, jumbo frames aren't a critical consideration (hardly worth enabling on a 1Gb array at all), so no problem there. Just make sure you have them either enabled or disabled on all components in the data path. I think your networking setup sounds perfectly fine, as long as you have redundantly connected interfaces on both initiator and target ports. Whether they sit in a single logical switch using one subnet, or two separate switches using two subnets, is immaterial: both work absolutely fine with Nimble and we have customers doing both (and in fact, the array's ability to connect initiator ports to target ports when the management/discovery subnet is different from both still amazes me!). Your server will need to be able to see the Discovery IP address, though this can be accessible over a LAN connection.

Regarding MPIO settings, there is a PowerShell script (by Adam Herbert) that the guys at Nimble have kindly put together, which identifies all paths from a Nimble array and creates the connections. Very straightforward, and it works like a dream. The script can be found here: https://connect.nimblestorage.com/docs/DOC-1081
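To give a feel for what the script automates, here is a minimal sketch of per-path session creation (the Server 2012+ iSCSI cmdlets are assumed, and the IPs/IQN are placeholders for a two-subnet layout like Sean's):

# One session per initiator-portal/target-portal pair; MPIO then
# aggregates the sessions into a single multipathed disk.
$target = "iqn.2007-11.com.nimblestorage:sqldb01"   # placeholder IQN
$paths = @(
    @{ Initiator = "172.16.10.21"; Portal = "172.16.10.10" },  # subnet A
    @{ Initiator = "172.16.20.21"; Portal = "172.16.20.10" }   # subnet B
)
foreach ($p in $paths) {
    Connect-IscsiTarget -NodeAddress $target `
        -InitiatorPortalAddress $p.Initiator `
        -TargetPortalAddress $p.Portal `
        -IsPersistent $true -IsMultipathEnabled $true
}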

Sean Patterson_1
Occasional Advisor

Re: SQL 2008 on VM Server 2008 setup for Data Disk/Log Disk/Temp Disk

Ben, I started another discussion in regard to MPIO. We are iSCSI only; would it be best just to use MC/S at the iSCSI level over MPIO? Opinions?

alex_goltz
Advisor

Re: SQL 2008 on VM Server 2008 setup for Data Disk/Log Disk/Temp Disk

Hey Ben Watson,

Would you agree that Sean should go into the properties of his volume connection in "iSCSI Initiator Properties" and create 4 sessions (if he wants to do 2 VM interfaces connected to the two 1Gbps links)? And then, in each of those sessions (each of which contains a unique source/destination combination), go into "MCS" and choose "Least Queue Depth" for his MCS policy? And then enable MPIO, which consolidates the connections to show one volume in Disk Management.

One thing I've noticed is that when I enable MPIO, two of the sessions revert back to "Round Robin" on the MCS policy, so I have to change those two back to "Least Queue Depth".

benwatson87
Valued Contributor

Re: SQL 2008 on VM Server 2008 setup for Data Disk/Log Disk/Temp Disk

Definitely - Least Queue Depth is the recommended MPIO policy for Nimble Storage.
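For reference, a sketch of setting that as the default (the cmdlet assumes the Server 2012+ MPIO module; the mpclaim.exe line is the rough equivalent on 2008 R2, where policy 4 is Least Queue Depth):

# Make Least Queue Depth the default policy for newly claimed MPIO disks
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy LQD

# Rough equivalent on older hosts: apply LQD (policy 4) to all MPIO disks
mpclaim.exe -l -m 4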

If we're working with a VM but using physical disks, we should pretend we are working in a physical environment (i.e. the same number of [virtual] network adapters as a physical machine, connected to the appropriate [virtual] networks). The iSCSI networking should mirror this setup.

I can't recommend enough using the script the guys at Nimble have written; it automates all of the connection creation and simplifies the whole process to simply running a PowerShell script!

john_delnostro
Occasional Advisor

Re: SQL 2008 on VM Server 2008 setup for Data Disk/Log Disk/Temp Disk

Sean -

A little disclaimer...

I'm a DBA, and this is just a suggestion based on my experiences: my two cents on best practices for SQL configuration on the storage side.

I also replied to you in the other discussion you have going regarding MPIO/MCS, so I won't re-hash that here. This post is specific to your earlier comment on your SQL disk layout. Using VMDKs for SQL wouldn't be ideal for a number of reasons... mainly I/O segregation and the overhead of maintaining separate VMDKs on different datastore LUNs... kinda messy.

Here is a generalized drive layout, each volume a separate LUN on the Nimble array. TEMPDB data and logs should always be separate, again for I/O segregation purposes and transaction consistency.

SQL_DATA01

SQL_LOGS01

SQL_TEMPDB_DATA01

SQL_TEMPDB_LOGS01

I've gone even further and used 5 or more LUNs per SQL instance in a clustered environment, and using mount points makes the logical naming and organization even better... as follows...

SQL_BASE (binaries etc.), let's call it E:. With mount points it would look like this...

The folder structure on E: (SQL_BASE) would be SQLDATA ==> DEFAULT (the name of your SQL instance) ==> TEMPDB (your TEMPDB LUNs go in this folder as mount points). Your DATA and LOG LUNs go under the DEFAULT folder as mount points.

E:\

E:\SQLDATA\DEFAULT\SQL_DATA01 (SQL_DATA01 LUN)

E:\SQLDATA\DEFAULT\SQL_LOGS01 (SQL_LOGS01 LUN)

E:\SQLDATA\DEFAULT\TEMPDB\SQL_TEMPDB_DATA01 (SQL_TEMPDB_DATA01 LUN) - Multiple data files based on the number of cores you have, ideally 4, or 8 at the most. Once you pick how many you need, each of the files should be pre-grown to its maximum possible size. For example, if you have 4 CPU cores and 99 GB available on the TEMPDB data LUN, you would split the 1 MDF file into 4, each file pre-grown to 25,344 MB apiece, totaling 99 GB. Autogrowth should be turned off for log and data (just for TEMPDB). Understanding your requirements is key here; otherwise you can go the other way and cause problems. So far I have yet to see that happen... thankfully.

E:\SQLDATA\DEFAULT\TEMPDB\SQL_TEMPDB_LOG01 (SQL_TEMPDB_LOGS01 LUN) - only 1 log file, pre-grown to 99% of the disk size, is necessary.

This particular setup (using mount points) is definitely not necessary for a single-instance, non-clustered SQL server, but separate LUNs formatted at a 64K block size are definitely required, and I wouldn't have anything less than the 4 indicated above.

This mount point setup works absolutely perfectly with clustering, because the mount points (your separate LUNs) are all tied to the BASE drive. When you add this storage setup to available storage on the cluster, everything logically makes sense and it is easy to create dependencies.
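As a rough sketch of wiring one of those LUNs into the folder structure as a mount point (assuming the Server 2012+ Storage cmdlets; the disk number and paths are placeholders, and on older hosts you'd do the same with diskpart or mountvol):

# Create the folder the LUN will mount into
New-Item -ItemType Directory -Path "E:\SQLDATA\DEFAULT\SQL_DATA01" -Force

# Initialize and format the LUN (disk number 3 is a placeholder), 64K clusters
Initialize-Disk -Number 3 -PartitionStyle GPT -PassThru |
    New-Partition -UseMaximumSize |
    Format-Volume -FileSystem NTFS -AllocationUnitSize 65536 -NewFileSystemLabel "SQL_DATA01"

# Attach the partition at the folder path instead of a drive letter
Get-Partition -DiskNumber 3 | Where-Object Type -eq 'Basic' |
    Add-PartitionAccessPath -AccessPath "E:\SQLDATA\DEFAULT\SQL_DATA01"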

Let me know if you have questions. Again, always keep TEMPDB data and logs on separate drives.

Cheers,

JD

alex_goltz
Advisor

Re: SQL 2008 on VM Server 2008 setup for Data Disk/Log Disk/Temp Disk

This is good information from John DelNostro.

Scenarios like the one you are showing here should be shared on NimbleConnect so there is less of a disconnect between the DBAs and the storage admins. A lot of environments and workloads are different.

One thing we did on our Nimbles is we created a Performance Policy just for a TempDB volume:

8KB block size, Compress On, Cache Off; and then a 64KB NTFS cluster size during VM drive formatting.

In your opinion, would this be helpful for storage admins that need to "section off" TempDB for larger deployments?


Our TempDB volumes are not very big, so putting our TempDB logs on a 4KB block sized volume didn't seem necessary.

If I'm reading this correctly, you have taken it one step further to maximize block size and I/O. If customers have large TempDB databases, I think your mount point setup would be very necessary. Is this right?

john_delnostro
Occasional Advisor

Re: SQL 2008 on VM Server 2008 setup for Data Disk/Log Disk/Temp Disk

Alex -

Your performance policy for TEMPDB is good. An 8K storage block size is recommended for the Nimble volume, and a 64KB block size is a Microsoft best practice for optimal SQL reads/writes at the NTFS file system level.

My suggestion is to remove compression and enable caching instead. The way I look at it, there is no sense in having any overhead (on the Nimble processing) compressing the TEMPDB files... the data inside is temporary anyway. You should be using pre-allocated files, as they are contiguous and are only fragmented temporarily internal to the file... not on the file system. TEMPDB is nothing more than work tables and temporary storage. Caching will allow the Nimble to shift "hot spots" of the contiguous TEMPDB files on and off the internal SSDs, which is why pre-allocated TEMPDB files work even better with Nimble... beyond being a best practice anyway.

So here is an example... let's say you have two 25 GB LUNs, for TEMPDB data and logs respectively, and your SQL server has 4 cores. The optimal layout is as follows...

Disclaimer :

This is a general rule of thumb and may need to be tweaked depending on your system (i.e. OLTP, reporting, data warehousing, etc.) and TEMPDB usage. I've never seen following this rule of thumb have a negative impact in my experience, but I do know everyone has different requirements.

TEMPDB_DATA01 LUN: you want the files 99% full for contiguous allocation, so 0.99 * 25 GB = 24.75 GB, and 24.75 * 1024 = 25,344 MB; then divide this by 4 because you have 4 CPU cores. (Look online for TEMPDB sizing and file allocation; there are a lot of resources explaining why this is best practice.) Each TEMPDB data file will be 6,336 MB, giving you 4 contiguous files on the drive. You will have 1 contiguous LOG file for TEMPDB, because you only need 1 log file. Autogrowth settings should be off, but in doing so you will have to be careful that your applications/databases do not need more space.

tempdb.mdf 6,336 MB

tempdb2.ndf 6,336 MB

tempdb3.ndf 6,336 MB

tempdb4.ndf 6,336 MB

TEMPDB_LOGS01 LUN

tempdb.ldf 25,344 MB
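As a sketch of putting that layout in place (run from PowerShell with Invoke-Sqlcmd from the SqlServer module; the logical names tempdev/templog are SQL Server's defaults, and the instance name and T:\ paths are placeholders for your TEMPDB mount points):

# Pre-grow tempdb into 4 equally sized, non-autogrowing data files plus
# one log file, matching the sizes above. Paths are placeholders.
$tsql = @"
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 6336MB, FILEGROWTH = 0);
ALTER DATABASE tempdb ADD FILE (NAME = tempdev2, FILENAME = 'T:\tempdb2.ndf', SIZE = 6336MB, FILEGROWTH = 0);
ALTER DATABASE tempdb ADD FILE (NAME = tempdev3, FILENAME = 'T:\tempdb3.ndf', SIZE = 6336MB, FILEGROWTH = 0);
ALTER DATABASE tempdb ADD FILE (NAME = tempdev4, FILENAME = 'T:\tempdb4.ndf', SIZE = 6336MB, FILEGROWTH = 0);
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, SIZE = 25344MB, FILEGROWTH = 0);
"@
Invoke-Sqlcmd -ServerInstance "localhost" -Query $tsql   # placeholder instance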

Hope this helps and makes sense... good question, by the way.

JD