StoreVirtual Storage

Re: P4000 VSA in Production?

 
Paul Hutchings
Super Advisor

P4000 VSA in Production?

I'm starting to plan what we do in a year or so's time when the lease on our P4000 kit is up.

 

Right now we have a 14.4TB P4500 Virtualisation SAN and 4x 8TB MDL SAS P4300 nodes.

 

We keep our VMs on the P4500 and our file/general storage on the P4300.

 

Of course, the Virtualisation SAN means we have 10x 10TB VSA licenses, so one of my thoughts is to keep the P4500 bundle, buy a pair of servers full of DAS, and drop a couple of VSAs on each.

 

My question is whether anyone has done this in production and if you had any issues?

 

Our file server is basically bulk data rather than anything that requires fast access. Obviously one thought would be to have some fast and some slow disk, but of course there's no automatic tiering with the P4000, which makes it difficult to split file data out by access requirement.

31 REPLIES
ccavanna
Advisor

Re: P4000 VSA in Production?

We don't run it in either of our data centers, but we do run the VSA in a good majority of our manufacturing facilities. I can't say that I have had any issues with ours. We are running Dell T610s with 12GB RAM and 8x 146GB drives. On those boxes we are also running a Win2k8 domain controller and a Win2k8 box running SQL Server 2005. I have never really had any issues with them. I also have a corp site that is running about 10 VMs off a pair of boxes also running the VSA and haven't had an issue. We have 9 sites running that setup now and are doing 15 more this year.

Tedh256
Frequent Advisor

Re: P4000 VSA in Production?

I have several VSAs in production - none with that much disk space, but frankly I think due to the nature of the P4000 Multi-pathing (the VIP re-directs clients to multiple back-end providers) I cannot see any reason to avoid it.

 

 

Paul Hutchings
Super Advisor

Re: P4000 VSA in Production?


@Tedh256 wrote:

I have several VSAs in production - none with that much disk space, but frankly I think due to the nature of the P4000 Multi-pathing (the VIP re-directs clients to multiple back-end providers) I cannot see any reason to avoid it.

 

 


Yeah, given that the server(s) using the "bulk" storage are mostly Windows, I'm less wary as I can share the load across multiple VSAs - I'd be a little more wary with stuff like VMware where a given VMFS volume is always offered up by a single node.
I guess I've always assumed there must be a massive performance catch with a VSA, because the price of a hardware P4300/P4500 node is nowhere near the price of a DL180 full of disks plus a VSA license or two.
Tedh256
Frequent Advisor

Re: P4000 VSA in Production?

" I'd be a little more wary with stuff like VMware where a given VMFS volume is always offered up by a single node."

 

actually this is inaccurate - the default vSphere 4.1/5.0 MPP fully supports active/active paths using round robin. If you set the IOPS value down to a low number (or 1) then ESX performance will greatly benefit from having multiple nodes.

 

I think you are holding onto old/outdated information ....

 

"I guess I always assume there must be a massive performance catch with a VSA because the price of a hardware P4300/P4500 node does not equal the price of a DL180 full of disks + a VSA license or two."

 

and this was precisely my point - the design of the P4000 redirector is such that the limitations of both 1Gb Ethernet and "less expensive" disk subsystems are overcome. Performance can be very attractive even with a VSA approach.

 

I would not use a DL180 though - unless I had redundant power and a write-cache-enabled RAID controller with plenty of cache (I'm not sure what the specs of the 180 are, but I think these tend to be less robust and redundant ....)

 

 

Paul Hutchings
Super Advisor

Re: P4000 VSA in Production?

Hmm... we're running vSphere 4.1 U2 with SAN/iQ 9.0, and for any given VMFS volume all the connections are always to the same node.

 

I didn't think there was a way to get vSphere to spread traffic for the same volume across multiple nodes?

 

Agreed on the DL180 too, though if they're good enough for HP... :)

Tedh256
Frequent Advisor

Re: P4000 VSA in Production?

http://www.vmware.com/files/pdf/techpaper/vmw-vsphere-p4000-lefthand-san-solutions.pdf

 

http://h20195.www2.hp.com/V2/GetPDF.aspx/4AA3-6918ENW.pdf

 

the key is having 2 vmk ports, each bound exclusively to a single physical NIC. You can have both in one vSwitch, in which case you need to do a NIC teaming override at the vmk port level so that only one NIC is utilized for each vmk.
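For reference, here's roughly what that part looks like from the ESXi 5.x shell (all the vSwitch, port group, vmnic and vmk names below are just placeholders - substitute your own; on 4.1 you'd use the esxcfg-vswitch/esxcfg-vmknic equivalents and do the teaming override in the vSphere Client):

# create one port group per iSCSI vmk
esxcli network vswitch standard portgroup add -v vSwitch1 -p iSCSI-1
esxcli network vswitch standard portgroup add -v vSwitch1 -p iSCSI-2
# override teaming so each port group has exactly one active uplink
esxcli network vswitch standard portgroup policy failover set -p iSCSI-1 -a vmnic2
esxcli network vswitch standard portgroup policy failover set -p iSCSI-2 -a vmnic3
# one VMkernel port per port group
esxcli network ip interface add -i vmk1 -p iSCSI-1
esxcli network ip interface add -i vmk2 -p iSCSI-2
# then give each vmk an address on the iSCSI subnet:
# esxcli network ip interface ipv4 set -i vmk1 -t static -I <ip> -N <netmask>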

 

Then you need to bind each vmk to the software iSCSI adapter - in vSphere 4.1 that is accomplished at the command prompt of each host. In vSphere 5 it is built in to the GUI configuration of the iSCSI adapter.

 


iSCSI (bind the iSCSI initiator to VMkernel ports):
From the command line, bind both VMkernel ports to the software iSCSI adapter. The vmk# and vmhba## must match the correct numbers for the ESX or ESXi server and virtual switch you are configuring, for example:
> vmkiscsi-tool -V -a vmk0 vmhba36
> vmkiscsi-tool -V -a vmk1 vmhba36

Once configured correctly, perform a rescan of the iSCSI adapter. An iSCSI session should be connected for each VMkernel bound to the software iSCSI adapter. This gives each iSCSI LUN two iSCSI paths using two separate physical network adapters.
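Incidentally, on vSphere 5 you can also do the binding from the shell rather than the GUI - something along these lines (the vmhba/vmk numbers are just examples; check yours with esxcli iscsi adapter list and esxcli network ip interface list):

esxcli iscsi networkportal add -A vmhba36 -n vmk1
esxcli iscsi networkportal add -A vmhba36 -n vmk2
esxcli storage core adapter rescan -A vmhba36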

 

Finally, you need to set the IOPS value to 1, so that after each I/O the system switches to the other path:

 

To change all existing datastores to 1 IOPS:

for i in `esxcli storage nmp device list | grep ^naa.600` ; do esxcli storage nmp psp roundrobin deviceconfig set -t iops -I 1 -d $i; done

To make RR the default PSP for the ALUA SATP (so that new datastores default to RR):

esxcli storage nmp satp set -s VMW_SATP_ALUA -P VMW_PSP_RR
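Afterwards you can sanity-check that a datastore has picked up round robin and the IOPS setting (replace <device> with the volume's naa.600 identifier from esxcli storage nmp device list):

esxcli storage nmp device list -d <device>
esxcli storage nmp psp roundrobin deviceconfig get -d <device>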

 

Paul Hutchings
Super Advisor

Re: P4000 VSA in Production?

Thanks, and to confirm, we already have it set up like that.

 

What I'm saying is that if I have, say, two nodes, if vmware-volume1 gets node1 assigned as the gateway node for a connection from my vSphere host, sure I'll have two sessions, one for each VMK, but they'll each go to node1.

 

So, if I have several volumes, sure, it balances across the nodes, but any individual volume will only ever be served by a single node, so the bottleneck is always the NICs (in reality it's not much of a bottleneck, but YKWIM).

 

Do you know, with the VSA, if you have 10Gbps pNICs in the boxes hosting the VSAs, whether the vNIC limits the throughput for each VSA to one gig?

Tedh256
Frequent Advisor

Re: P4000 VSA in Production?

"What I'm saying is that if I have, say, two nodes, if vmware-volume1 gets node1 assigned as the gateway node for a connection from my vSphere host, sure I'll have two sessions, one for each VMK, but they'll each go to node1."

 

That is not my understanding - the VIP will redirect subsequent requests to other nodes in "round robin" fashion - it is not a function of the requestor OS at that point, but simply how the P4000 redirector works.

 

I could be wrong, and have not looked at it in depth beyond the docs and what HP SEs have told me, but that has been my understanding. I'd love to know if I am wrong - please shoot me any info you have!

 

"Do you know with the VSA if you have 10gbps pNICs in the boxes hosting the VSAs if the vNIC limits the throughput for each VSA at one gig?"

 

I do not know for sure - but if you are using VMXNET3 NICs in the VSA, they should not be limited to 1Gb.

 

 

Paul Hutchings
Super Advisor

Re: P4000 VSA in Production?

I think it's just a plain limit/restriction that is present on any OS that doesn't use the DSM MPIO module - so basically anything other than Windows :)

 

I'd love to be proved wrong, but I'm pretty sure I'm not: a given volume/LUN can only ever be served up from a single node, and regardless of the number of MPIO connections to that volume, they'll all be served by the same node.