P420/i Array Configuration Recommendations for SQL Server

 
tekanive
Occasional Contributor

Have a new DL380 G8 server that I'll be installing MS SQL 2012 on, under Windows 2008 R2. The DB/application in question is write heavy. I was wondering if anyone from HP, or others with experience, could offer recommendations on the configuration of the logical disks. I'm looking for the best performance for the write-heavy SQL DBs/logs. I've read so many mixed opinions on how to configure these disks for best performance, and everything starts with the hardware vendor's recommendations. I'd like confirmation of, or opinions on, how best to configure these disks.

 

 

Add-On P420 Controller – 90% write, 10% read cache ratio

C: - OS, SQL Application, and Page File – 2x146GB – RAID1 – 64K Strip, 256K Full Stripe Size

E: - .LDF Log Files – 2x146GB – RAID1 – 64K Strip, 64K Full Stripe Size

G: - Backups – 2x450GB – RAID0 – 64K Strip, 128K Full Stripe Size

 

Built-In P420i Controller – 90% write, 10% read cache ratio

D: - .MDF Data Files – 4x450GB – RAID10 – 64K Strip, 128K Full Stripe Size

F: - TempDB Files – 2x300GB – RAID1 – 64K Strip, 64K Full Stripe Size

 

For all disks except C:

Partition offset configured for 64k. NTFS allocation unit size also set to 64k.

C: configured for 1024k offset and default allocation unit size.
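
A quick way to double-check the allocation unit after formatting is fsutil, which reports it as "Bytes Per Cluster" (65536 for 64k). As a sketch, it can even be run from inside SQL Server if xp_cmdshell happens to be enabled (it is off by default); the drive letter below is just an example:

-- Optional sanity check: "Bytes Per Cluster" should read 65536 for a 64k
-- allocation unit. Requires xp_cmdshell (disabled by default); the same
-- fsutil command can be run from an elevated command prompt instead.
EXEC master..xp_cmdshell 'fsutil fsinfo ntfsinfo E:';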

 

Do these values make sense for what I am looking to do?

 

Thanks,

Tim

1 REPLY
Skotte
Frequent Advisor

Re: P420/i Array Configuration Recommendations for SQL Server

Hi Tim,

 

I am not an HP employee, but I do have experience with SQL Server and some experience with the P420 controllers.

 

First, it's important to understand what happens in a write-heavy scenario.

 

SQL Server's lazy writer, worker threads, and other processes issue 8KB random I/Os, so the controller cache will be under heavy stress; consider more disks for the SQL data array.
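
If you want to confirm that pattern on your own system, the standard sys.dm_io_virtual_file_stats DMV (available since SQL Server 2005) reports cumulative I/O per database file; a rough sketch:

-- Rough sketch: average read/write size per database file since the last
-- SQL Server restart, via the standard virtual file stats DMV.
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.num_of_writes,
       vfs.num_of_bytes_written / NULLIF(vfs.num_of_writes, 0) AS avg_write_bytes,
       vfs.num_of_reads,
       vfs.num_of_bytes_read / NULLIF(vfs.num_of_reads, 0) AS avg_read_bytes
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id = vfs.file_id
ORDER BY vfs.num_of_bytes_written DESC;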

 

You must be able to guarantee that transaction-log writes will be completed under all circumstances.

When SQL Server sends a write command to the controller, the controller will save the data in cache and immediately send an I/O Complete message back to SQL Server.

The controller's purpose is to batch multiple small I/Os into a single larger sequential I/O instead of issuing many small random I/Os.

 

SQL Server continues to work, even though the I/O has not actually been committed to disk.

As long as the I/O eventually makes it to disk, life is good; but if the I/O gets lost for any reason, SQL recovery will not work, because part of the log is missing.
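
One way to gauge whether log flushes are actually holding transactions up (in other words, whether the write cache is doing its job) is the WRITELOG wait type; a minimal sketch:

-- Minimal sketch: cumulative WRITELOG waits since the last restart.
-- A high average wait per occurrence suggests commits are being throttled
-- by the log device (for example, a disabled controller write cache).
SELECT wait_type,
       waiting_tasks_count,
       wait_time_ms,
       wait_time_ms / NULLIF(waiting_tasks_count, 0) AS avg_wait_ms
FROM sys.dm_os_wait_stats
WHERE wait_type = 'WRITELOG';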

 

In an OLAP database, the workload not only returns more data per request than OLTP, but the queries may be more complex and involve more table joins, therefore requiring more CPU work to deliver the final dataset to the users.

As a result, user response time depends on that CPU work in addition to the controller's throughput.

 

Separate the data, log, and tempdb I/O onto separate controllers:

 

I/O to SQL data files is very different in nature from I/O to the SQL log file, which is different again from tempdb.

 

SQL data traffic is random in nature with relatively larger block sizes, occasionally becoming large sequential I/O for large table scans.

SQL log traffic is sequential in nature and is predominantly write until a checkpoint occurs, when some read activity will occur.

I/O in tempdb is always transitory.

 

We use the following setup as a guideline for write-heavy scenarios:

Controller 1:

Array 1: (OS / SQL Server files / System DBs)

RAID 1, 2 disks

OS / SQL Server / system databases

OS and SQL binary files don’t have high I/O requirements; they are typically read once when the application loads and then not touched until new code paths are needed, at which point a few more 4KB random reads are issued. Therefore, these files don’t require high-performance storage.

 

Array 2: (Log files)

RAID 10, 4 disks

The transaction logs are the most important files for SQL Server; as explained above, their I/O requirements are different from the data I/O, so it's best to use a separate controller where possible.
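
As an illustration only (the database name, paths, and sizes below are made up), keeping the log on its own array is simply a matter of where the files are pointed at creation time:

-- Illustration with hypothetical names, paths, and sizes: data files on
-- one array, the transaction log on a dedicated array.
CREATE DATABASE SalesDB
ON PRIMARY
    (NAME = SalesDB_data, FILENAME = 'D:\SQLData\SalesDB.mdf',
     SIZE = 50GB, FILEGROWTH = 5GB)
LOG ON
    (NAME = SalesDB_log, FILENAME = 'E:\SQLLogs\SalesDB.ldf',
     SIZE = 10GB, FILEGROWTH = 1GB);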

 

Controller 2:

Array 1: (MDF data files)

RAID 10, 4-8 disks

The database MDF files in a write heavy scenario require RAID 10 to increase I/O throughput.

This is why it has been given its own controller; where 500+ users are heads-down doing data entry, the data writes can be quite intensive.

Using a fast, dedicated I/O subsystem for the database files lets them perform most efficiently.

Data can be stored according to usage across data files and filegroups that span many physical disks.
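
For example (the filegroup, file name, and path here are hypothetical), a filegroup backed by a file on a separate array lets you place hot tables away from the PRIMARY files:

-- Hypothetical example: a second filegroup on another physical array.
ALTER DATABASE SalesDB ADD FILEGROUP HotData;

ALTER DATABASE SalesDB
ADD FILE (NAME = SalesDB_hot1, FILENAME = 'H:\SQLData\SalesDB_hot1.ndf',
          SIZE = 20GB, FILEGROWTH = 2GB)
TO FILEGROUP HotData;

-- New objects can then be created on that filegroup to spread the I/O:
CREATE TABLE dbo.OrderDetail
    (OrderID int NOT NULL, LineNumber int NOT NULL)
ON HotData;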

 

TempDB:

Place tempdb on a RAM disk.

Tempdb is the only database that returns to its original configured size on every restart, and it then grows from there based on usage requirements.

During an autogrow operation, threads can lock database resources, affecting server concurrency.
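
To keep autogrow out of the hot path, pre-size the tempdb files once at setup; the sizes below are placeholders, and tempdb will be rebuilt at these configured sizes on every restart:

-- Placeholder sizes: pre-size tempdb so it rarely needs to autogrow.
-- tempdev and templog are the default logical file names for tempdb.
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 8GB, FILEGROWTH = 1GB);

ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, SIZE = 2GB, FILEGROWTH = 512MB);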

 

Do not consider placing TempDB and the SQL data files together on the same controller in a write-heavy scenario; TempDB data is transitory and will only tie up the cache and the controller with data that must be written twice.

 

I hope this has explained some of the requirements of a write-heavy SQL Server database setup.

 

Cheers

 

Leon