StoreVirtual Storage

RonsDavis
Frequent Advisor

P4000 VSA Performance Testing: Flexible vs. VMXNET3, 9.0 vs. 9.5, HW version 4 vs. 7 vs. 8

Test conditions: three Sun Fire X4270 servers, each with two Xeon 5520 CPUs (8 cores total) and 96 GB RAM, running ESXi 5.0 unpatched.

Two of these were set up with ten 146 GB 10k 2.5" hard drives on the built-in Adaptec RAID controller. Two drives housed the VSA boot disks, and the other eight were set up in a RAID 5 array for the SAN data disks. The only VM running on these boxes was the VSA, configured with 2 vCPUs and 4 GB RAM.

The other server had no local storage. It ran two VMs: a Virtual Center instance with 2 vCPUs, and a testing VM with 4 vCPUs and 2 GB RAM. The test machine ran Windows Server 2008 R2 with IOMeter installed. These two VMs ran off my production P4000 SAN.

Each machine had two dedicated 1 Gb NICs for the iSCSI network.

For each test I made sure to give the VSA eager-zeroed disks. The test VM was assigned a fully provisioned disk on the CMC side and an RDM on the VMware side.
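If you'd rather script the disk setup than click through it, something like this PowerCLI sketch should do it (the VM name and size here are just placeholders, not from my actual setup):

  # Add an eager-zeroed thick data disk to an existing VSA VM
  $vsa = Get-VM "VSA1"
  New-HardDisk -VM $vsa -CapacityGB 500 -StorageFormat EagerZeroedThick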

IOMeter was run against a formatted drive; for some reason IOMeter wouldn't use the blank disk. Each test ran for 4 hours with the All-In-One access pattern and 32 outstanding I/Os.
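For anyone repeating the runs, IOMeter can also be launched unattended from PowerShell once the test is saved as a config file (the paths and file names below are placeholders):

  # /c loads a saved IOMeter config, /r writes the results CSV
  & "C:\Program Files\Iometer\IOmeter.exe" /c "C:\tests\allinone-4h.icf" /r "C:\tests\results.csv"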

I was unable to test RDM vs. VMDK for the VSA data disks; RDMs just don't work with the Adaptec controller. If I can test it with another controller I'll post again at a later date.

I'll attach the spreadsheet with the results, but here is a summary.

HW version 4 to HW version 8: 67% faster throughput, 40% lower latency

HW version 4 to 7: 12% faster throughput, 10% lower latency

HW version 7 to 8: 42% faster throughput, 30% lower latency

Flexible NIC to VMXNET3: 0.6% lower throughput, 0.6% lower latency

SAN/iQ version 9.0 to 9.5: 5% higher throughput, 5% lower latency

 

Altogether, moving from SAN/iQ 9.0 to 9.5 and from HW version 4 to HW version 8 gives us 77% higher throughput and 43% lower latency. (The individual gains compound multiplicatively; for latency, 0.60 × 0.95 ≈ 0.57, i.e. about 43% lower.)


6 REPLIES
cheazell
Advisor

Re: P4000 VSA Performance Testing: Flexible vs. VMXNET3, 9.0 vs. 9.5, HW version 4 vs. 7 vs. 8

Thank you for this.

 

Regarding the upgrades you mention: in a production situation (a 2-node software VSA cluster plus FOM), I'm curious about taking the VSA's hardware version from 4 to 8. Would you go from 9.0 to 9.5 first and then do the VM hardware upgrade? Can you go straight to 8, and would you do the NIC changeover too? Will this affect the MAC address of the new NIC?


RonsDavis
Frequent Advisor

Re: P4000 VSA Performance Testing: Flexible vs. VMXNET3, 9.0 vs. 9.5, HW version 4 vs. 7 vs. 8

You can change to HW 8 without upgrading to 9.5 if you want to; the two changes are independent of each other.

You tell the node to shut down in the CMC, then right-click the VM in your vSphere Client and choose Upgrade Virtual Hardware. I then changed the guest OS type to CentOS 4/5/6 (64-bit) to match what the shipping 9.5 OVF is set to.
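If you'd rather script those two steps, here is a rough PowerCLI sketch (the server and VM names are placeholders; shut the node down through the CMC first):

  # Placeholder names; the node should already be shut down via the CMC
  Connect-VIServer vcenter.example.com
  $vm = Get-VM "VSA1"
  Set-VM -VM $vm -Version v8 -Confirm:$false             # upgrade virtual hardware to 8
  Set-VM -VM $vm -GuestId centos64Guest -Confirm:$false  # guest OS type to match the 9.5 OVF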

That change is required in order to switch the NIC to VMXNET3. Based on the results I got, I would not change the NIC unless I was upgrading to a 10 Gb network; the change from Flexible to VMXNET3 was a wash.

If you do decide to change the NIC, copy the MAC address of the current NIC, remove it, and add the new NIC. Then set the MAC manually to the old address. This is VERY important: if you don't, the cluster sees the node as a brand-new node and has to rebuild, and you have to get the license replaced.
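For what it's worth, here is roughly how that swap looks in PowerCLI (an untested sketch; "VSA1" is a placeholder and the VM should be powered off). If the API refuses the old MAC as a manual address, set it by editing the .vmx file instead:

  $vm  = Get-VM "VSA1"
  $old = Get-NetworkAdapter -VM $vm | Select-Object -First 1
  $mac = $old.MacAddress     # save the old MAC before removing the adapter
  $net = $old.NetworkName
  Remove-NetworkAdapter -NetworkAdapter $old -Confirm:$false
  # Re-add as VMXNET3 with the original MAC so the cluster still recognizes the node
  New-NetworkAdapter -VM $vm -Type Vmxnet3 -NetworkName $net -MacAddress $mac -StartConnected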

The 9.5 upgrade is easy enough using the CMC. You could do it before or after changing the HW; I would probably do it before changing the NIC, though.

cheazell
Advisor

Re: P4000 VSA Performance Testing: Flexible vs. VMXNET3, 9.0 vs. 9.5, HW version 4 vs. 7 vs. 8

I've done the HW upgrade to HW 7 in a test situation, but not to 8.

 

I'm essentially on a stretch cluster and I'm seeing reasonable read latency, but the write latency is high. I attribute at least part of this to the hit from Network RAID mirroring, so I'm interested in anything that might help reduce latency.

 

I've also given each host (4 hosts) four dedicated NICs for the iSCSI SAN network, and I'm wondering if that is too many iSCSI sessions for my volumes on the two nodes.

RonsDavis
Frequent Advisor

Re: P4000 VSA Performance Testing: Flexible vs. VMXNET3, 9.0 vs. 9.5, HW version 4 vs. 7 vs. 8

It probably isn't too many sessions, but you do have to ask whether you get any benefit from it. If you aren't maxing out your connections, it probably isn't helping; it probably also isn't hurting. Any more won't help, though, since you can only have 8 paths per LUN on VMware (if you are using VMware).
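If you want to see where you stand, a quick PowerCLI sketch like this lists the path count per LUN (the host name is a placeholder):

  # Show each disk LUN on the host with its number of paths
  Get-VMHost esx01.example.com |
    Get-ScsiLun -LunType disk |
    Select-Object CanonicalName, @{Name='Paths'; Expression={($_ | Get-ScsiLunPath).Count}}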

With the Flexible NIC you are limited to 1 Gb/s of throughput regardless. You may be able to set the VMXNET3 to 10 Gb and get better throughput, but I didn't test that. :)

 

RonsDavis
Frequent Advisor

Re: P4000 VSA Performance Testing: Flexible vs. VMXNET3, 9.0 vs. 9.5, HW version 4 vs. 7 vs. 8

Never mind; I just looked, and the VMXNET3 is set to 10 Gb by default.

In my tests I didn't exceed 1 Gb/s of throughput, so that wasn't the limiting factor. My disks couldn't have handled more than that in a perfect world anyway.
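If you want to confirm what the guest itself sees, a quick WMI query from PowerShell inside the test VM works on Server 2008 R2 (just a sketch):

  # Report the link speed of enabled NICs in Gb/s, from inside the guest
  Get-WmiObject Win32_NetworkAdapter -Filter "NetEnabled = TRUE" |
    Select-Object Name, @{Name='SpeedGbps'; Expression={[math]::Round($_.Speed / 1e9, 1)}}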

 

cheazell
Advisor

Re: P4000 VSA Performance Testing: Flexible vs. VMXNET3, 9.0 vs. 9.5, HW version 4 vs. 7 vs. 8

I am on ESXi 5. On the hosts I get latency warnings, although for the most part things seem fine. I have two NICs attached to a 2910 switch at one location and the other two NICs attached to another 2910 at a second location.

 

I'm considering dropping to one connection to each switch just to reduce the number of sessions.