StoreVirtual Storage

Witness and Cluster disk cluster size best practice

 
Fred Blum
Valued Contributor

Witness and Cluster disk cluster size best practice


What are the best-practice allocation unit sizes, performance-wise, for Witness and Cluster Storage disks?
3 REPLIES
Fred Blum
Valued Contributor

Re: Witness and Cluster disk cluster size best practice

Windows Server 2008 R2 Datacenter, Hyper-V with Failover Clustering. Currently the witness disk is a 1 GB basic disk with the NTFS default cluster size (4K). The CSV storage disk is 1 TB NTFS with a 64K cluster size.
Still looking for the MS doc I took that from.
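For anyone reproducing this layout, something along these lines from an elevated prompt should give the same result (drive letters and labels here are just examples):

:: witness disk, NTFS default 4K clusters
format Q: /FS:NTFS /A:4096 /Q /V:Witness
:: CSV storage disk, 64K clusters
format S: /FS:NTFS /A:64K /Q /V:CSV1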

The cluster node VMs will host a terminal server, SQL Server, SharePoint and possibly Progress OpenEdge 10.

Fred Blum
Valued Contributor

Re: Witness and Cluster disk cluster size best practice

Setup: two iSCSI NICs on the server, a two-node P4300 SAN, SAN NICs in ALB, SAN VIP load balancing, DSM MPIO 9 on the server, device MPIO policy set to round robin.
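To double-check that round robin is actually in effect per LUN, the built-in mpclaim tool can show it; note the output may look different when the HP DSM owns the devices:

mpclaim -s -d
:: lists all MPIO disks with their load-balance policy (RR = round robin)
mpclaim -s -d 0
:: shows the individual paths for MPIO disk 0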

Created two volumes on the SAN: LUN1 = thin, LUN2 = full. http://technet.microsoft.com/en-us/library/ff182320(WS.10).aspx
In Windows 2008 R2 I formatted LUN1 with a 64K allocation unit size and LUN2 with the NTFS default of 4K. Added them to the cluster as Available Storage and then as CSVs.
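You can verify what allocation unit size each LUN actually got with fsutil:

fsutil fsinfo ntfsinfo E:
:: check the "Bytes Per Cluster" line: 65536 = 64K, 4096 = NTFS default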

Created a VM and installed the OS VHD on the 64K-formatted CSV (LUN1). Added a SCSI disk to the VM on the 4K-formatted CSV and made the VHD a fixed disk.
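As an aside, a fixed VHD like that can also be pre-created from diskpart instead of Hyper-V Manager (path and size here are just examples):

diskpart
:: at the DISKPART> prompt:
create vdisk file="C:\ClusterStorage\Volume1\data.vhd" maximum=40960 type=fixed
:: maximum is in MB; type=fixed allocates the full size up front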

Ran SQLIO from within the VM against disk E: (4K) and disk C: (64K).

The performance difference is dramatically in favor of the 64K cluster size. But the 64K CSV numbers look unrealistically high compared to the original SQLIO results from the physical server, taken before the disk was added to cluster storage/CSV.

Could this be unwanted Hyper-V VM memory caching behavior? I checked the driver settings: C: (64K) is a Msft Virtual HD ATA disk and caching is not enabled on that driver. The E: (4K) disk is a Msft Virtual SCSI disk, which has caching for better performance enabled by default!

Cannot test from the physical server itself, as using the CSV directly is not supported.
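For reference, the runs below are consistent with an invocation along these lines (my reconstruction from the logged parameters; -s, -b and -frandom/-fsequential are varied per run, and the param file supplies file, threads, mask and size):

:: param.txt contents: E:\testfile.dat 24 0x0 4096
sqlio -kW -s10 -frandom -o8 -b8 -LS -Fparam.txt
:: -kW = writes, -s10 = 10 secs, -o8 = 8 outstanding IOs, -b8 = 8KB blocks, -LS = latency stats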

file E:\testfile.dat with 24 threads (0-23) using mask 0x0 (0)
enabling multiple I/Os per thread with 8 outstanding
using specified size: 4096 MB for file: E:\testfile.dat

24 threads writing for 10 secs to file E:\testfile.dat

using 8KB random IOs
4K: IOs/sec: 4533.77 MBs/sec: 35.42 latency metrics: Min_Latency(ms): 7 Avg_Latency(ms): 39 Max_Latency(ms): 1302
64K: IOs/sec: 71611.72 MBs/sec: 559.46 latency metrics: Min_Latency(ms): 0 Avg_Latency(ms): 2 Max_Latency(ms): 17

24 threads writing for 360 secs to file E:\testfile.dat

using 32KB random IOs
4K: IOs/sec: 3012.58 MBs/sec: 94.14 latency metrics: Min_Latency(ms): 2 Avg_Latency(ms): 63 Max_Latency(ms): 1344
64K: IOs/sec: 53370.55 MBs/sec: 1667.82 latency metrics: Min_Latency(ms): 0 Avg_Latency(ms): 3 Max_Latency(ms): 58

using 64KB random IOs
4K: IOs/sec: 2130.31 MBs/sec: 133.14 latency metrics: Min_Latency(ms): 2 Avg_Latency(ms): 89 Max_Latency(ms): 1337
64K: IOs/sec: 38877.31 MBs/sec: 2429.83 latency metrics: Min_Latency(ms): 0 Avg_Latency(ms): 4 Max_Latency(ms): 211

using 128KB random IOs
4K: IOs/sec: 892.48 MBs/sec: 111.56 latency metrics: Min_Latency(ms): 11 Avg_Latency(ms): 214 Max_Latency(ms): 54252
64K: IOs/sec: 21444.83 MBs/sec: 2680.60 latency metrics: Min_Latency(ms): 0 Avg_Latency(ms): 8 Max_Latency(ms): 255

using 256KB random IOs
4K: IOs/sec: 442.05 MBs/sec: 110.51 latency metrics: Min_Latency(ms): 15 Avg_Latency(ms): 433 Max_Latency(ms): 53428
64K: IOs/sec: 11757.47 MBs/sec: 2939.36 latency metrics: Min_Latency(ms): 2 Avg_Latency(ms): 15 Max_Latency(ms): 125

using 8KB sequential IOs
4K: IOs/sec: 26075.37 MBs/sec: 203.71
latency metrics: Min_Latency(ms): 0 Avg_Latency(ms): 6 Max_Latency(ms): 828
64K: IOs/sec: 87513.04 MBs/sec: 683.69
latency metrics: Min_Latency(ms): 0 Avg_Latency(ms): 1 Max_Latency(ms): 44

using 32KB sequential IOs
4K: IOs/sec: 7538.19 MBs/sec: 235.56 latency metrics: Min_Latency(ms): 1 Avg_Latency(ms): 24 Max_Latency(ms): 115
64K: IOs/sec: 52401.87 MBs/sec: 1637.55
latency metrics: Min_Latency(ms): 0 Avg_Latency(ms): 3 Max_Latency(ms): 35

using 64KB sequential IOs
4K: IOs/sec: 1998.69 MBs/sec: 124.91 latency metrics: Min_Latency(ms): 3 Avg_Latency(ms): 94 Max_Latency(ms): 57312
64K: IOs/sec: 41250.44 MBs/sec: 2578.15
latency metrics: Min_Latency(ms): 0 Avg_Latency(ms): 4 Max_Latency(ms): 27

using 128KB sequential IOs
4K: IOs/sec: 803.07 MBs/sec: 100.38 latency metrics: Min_Latency(ms): 10 Avg_Latency(ms): 238 Max_Latency(ms): 54232
64K: IOs/sec: 24469.03 MBs/sec: 3058.62
latency metrics: Min_Latency(ms): 0 Avg_Latency(ms): 7 Max_Latency(ms): 44

using 256KB sequential IOs
4K: IOs/sec: 474.31 MBs/sec: 118.57 latency metrics: Min_Latency(ms): 25 Avg_Latency(ms): 404 Max_Latency(ms): 30538
64K: IOs/sec: 11711.90 MBs/sec: 2927.97 latency metrics: Min_Latency(ms): 2 Avg_Latency(ms): 15 Max_Latency(ms): 197

Read tests show the same difference.

Odd.
Fred Blum
Valued Contributor

Re: Witness and Cluster disk cluster size best practice

I did some more tests and have to conclude that Windows artificially boosts only the C: disk's performance through memory caching, even though caching is not enabled on the IDE disk driver. Otherwise I cannot explain the differences.

I have added another SCSI disk to the VM, as a fixed VHD, on the 64K CSV.

I am not seeing the exceptional throughput there, so it isn't the allocation unit size:

SQLIO 8K random writes for 10 sec

Disk formatted 4K in the VM on a 4K CSV - IOs/sec: 4533.77 MBs/sec: 35.42 latency metrics: Min_Latency(ms): 7 Avg_Latency(ms): 39 Max_Latency(ms): 1302

Disk formatted 4K in the VM on a 64K CSV - IOs/sec: 5170.62 MBs/sec: 40.39 latency metrics: Min_Latency(ms): 2 Avg_Latency(ms): 36 Max_Latency(ms): 1266

Disk formatted 64K in the VM on a 64K CSV - IOs/sec: 5659.21 MBs/sec: 44.21 latency metrics: Min_Latency(ms): 1 Avg_Latency(ms): 33 Max_Latency(ms): 1293

I have also added another IDE disk to the VM, fixed VHD, on the 64K CSV, so it isn't IDE versus SCSI either:

Disk formatted 4K in the VM on a 64K CSV - IOs/sec: 5496.12 MBs/sec: 42.93 latency metrics: Min_Latency(ms): 1 Avg_Latency(ms): 34 Max_Latency(ms): 1303

Disk formatted 64K in the VM on a 64K CSV - IOs/sec: 5795.11 MBs/sec: 45.27 latency metrics: Min_Latency(ms): 2 Avg_Latency(ms): 32 Max_Latency(ms): 996

Starting to think that I was imagining the SQLIO results, so I reran the test:

VM C: IOs/sec: 63954.80 MBs/sec: 499.64 latency metrics: Min_Latency(ms): 0 Avg_Latency(ms): 2 Max_Latency(ms): 35

499 MB/sec to an HP P4300 Starter SAN consisting of two SAN nodes, 2x8 spindles and two teamed 1Gb ALB NICs: that is unrealistically high and must be a memory caching issue.
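One way to test the caching hypothesis would be to rerun SQLIO with buffering explicitly disabled and see whether the C: numbers drop back to SAN-level throughput. SQLIO's -B switch controls buffering; N = none:

sqlio -kW -s10 -frandom -o8 -b8 -LS -BN -Fparam.txt
:: -BN requests unbuffered IO, so cache effects should disappear from the results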

What is the risk of data loss due to memory caching? According to the Q&A below, MS does not do any extra caching in Hyper-V:

Performance
Q. How does Hyper-V's disk input/output (IO) compare with a non-virtualised solution?
A. In order to ensure that IO will never be reported complete until it has been written to the physical disk, Hyper-V does not employ any additional disk caching other than that provided by the guest operating system. In certain circumstances, a Hyper-V VM can appear to provide faster disk access than a physical computer because Hyper-V batches up multiple requests and coalesces interrupts for greater efficiency and performance. In Microsoft's internal testing they also found that:

• Pass-through disks can sustain physical device throughput.
• Fixed VHDs can also sustain physical device throughput at the cost of slightly higher CPU usage.
• Dynamically expanding and differencing VHDs do not usually hit physical throughput numbers due to the overhead of expansion and greater likelihood of disk fragmentation.
http://www.markwilson.co.uk/blog/2009/02/hyper-v-qa.htm