HPE StoreVirtual Storage / LeftHand

P4000 VSA in Production?

Paul Hutchings
Super Advisor

P4000 VSA in Production?

I'm starting to plan what we do in a year or so's time when the lease on our P4000 kit is up.

 

Right now we have a 14.4TB P4500 Virtualisation SAN and 4x 8TB MDL SAS P4300 nodes.

 

We keep our VMs on the P4500 and our file/general storage on the P4300.

 

Of course, the Virtualisation SAN means we have 10x 10TB VSA licenses, so one of my thoughts is to keep the P4500 bundle, buy a pair of servers full of DAS, and drop a couple of VSAs on each.

 

My question is whether anyone has done this in production, and if so, whether you had any issues?

 

Our file server is basically bulk data rather than anything that requires fast access. Obviously one thought would be to have some fast and some slow disk, but of course there's no automatic tiering with the P4000, which makes it difficult to split file data out by access requirement.

31 REPLIES
ccavanna
Advisor

Re: P4000 VSA in Production?

We don't run it in either of our data centers, but we do run the VSA in a good majority of our manufacturing facilities. I can't say that I have had any issues with ours. We are running Dell T610s with 12GB RAM and 8x 146GB drives. On those boxes we are also running a Win2k8 domain controller and a Win2k8 box running SQL 2005. I have never really had any issues with them. I also have a corporate site running about 10 VMs off of a pair of boxes also running the VSA, and haven't had an issue. We have 9 sites running that setup now and are doing 15 more this year.

Tedh256
Frequent Advisor

Re: P4000 VSA in Production?

I have several VSAs in production - none with that much disk space, but frankly I think due to the nature of the P4000 Multi-pathing (the VIP re-directs clients to multiple back-end providers) I cannot see any reason to avoid it.

 

 

Paul Hutchings
Super Advisor

Re: P4000 VSA in Production?


Tedh256 wrote:

I have several VSAs in production - none with that much disk space, but frankly I think due to the nature of the P4000 Multi-pathing (the VIP re-directs clients to multiple back-end providers) I cannot see any reason to avoid it.

 

 


Yeah, I think because the server(s) using the "bulk" storage are mostly Windows I'm less wary, as I can share the load across multiple VSAs - I'd be a little more wary with stuff like VMware where a given VMFS volume is always offered up by a single node.
I guess I always assume there must be a massive performance catch with a VSA because the price of a hardware P4300/P4500 node does not equal the price of a DL180 full of disks + a VSA license or two.
Tedh256
Frequent Advisor

Re: P4000 VSA in Production?

" I'd be a little more wary with stuff like VMware where a given VMFS volume is always offered up by a single node."

 

Actually this is inaccurate - the default vSphere 4.1/5.0 MPP fully supports active/active paths using round robin. If you set the IOPS down to a low value (or 1) then ESX performance will greatly benefit from having multiple nodes.

 

I think you are holding onto old/outdated information ....

 

"I guess I always assume there must be a massive performance catch with a VSA because the price of a hardware P4300/P4500 node does not equal the price of a DL180 full of disks + a VSA license or two."

 

And this was precisely my point - the design of the P4000 redirector is such that the limitations of both 1Gb Ethernet and "less expensive" disk subsystems are overcome. Performance can be very, very attractive even with a VSA approach.

 

I would not use a DL180 though - unless I had redundant power and a write-cache-enabled RAID controller with plenty of cache (not sure what the specs of the 180 are, but I think these tend to be less robust and redundant).

 

 

Paul Hutchings
Super Advisor

Re: P4000 VSA in Production?

Hmm... we're running vSphere 4.1 U2 with SAN/iQ 9.0, and for any given VMFS volume all the connections are always to the same node.

 

I didn't think there was a way to get vSphere to spread traffic for the same volume across multiple nodes?

 

Agreed on the DL180 too, though if they're good enough for HP... :)

Tedh256
Frequent Advisor

Re: P4000 VSA in Production?

http://www.vmware.com/files/pdf/techpaper/vmw-vsphere-p4000-lefthand-san-solutions.pdf

 

http://h20195.www2.hp.com/V2/GetPDF.aspx/4AA3-6918ENW.pdf

 

The key is having 2 VMkernel ports, each bound exclusively to a single physical NIC. You can have both in one vSwitch, in which case you need to do a NIC teaming override at the vmk port level so that only one NIC is used for each vmk.

 

Then you need to bind each vmk to the software iSCSI adapter - in vSphere 4.1 that is accomplished at the command prompt of each host. In vSphere 5 it is built into the GUI configuration of the iSCSI adapter.

 


iSCSI (bind the iSCSI initiator to the VMkernel ports):
From the command line, bind both VMkernel ports to the software iSCSI adapter. The vmk# and vmhba## must match the correct numbers for the ESX or ESXi server and virtual switch you are configuring, for example:
> vmkiscsi-tool -V -a vmk0 vmhba36
> vmkiscsi-tool -V -a vmk1 vmhba36

Once configured correctly, perform a rescan of the iSCSI adapter. An iSCSI session should be connected for each VMkernel bound to the software iSCSI adapter. This gives each iSCSI LUN two iSCSI paths using two separate physical network adapters.
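For what it's worth, in vSphere 5 the same binding can be done with esxcli instead of vmkiscsi-tool - a sketch, assuming the same example vmk0/vmk1 and vmhba36 numbers as above (adjust for your host):

```shell
# vSphere 5 equivalent of the vmkiscsi-tool binding above (vmk/vmhba numbers are examples)
esxcli iscsi networkportal add --adapter vmhba36 --nic vmk0
esxcli iscsi networkportal add --adapter vmhba36 --nic vmk1
# Verify both VMkernel ports are bound before rescanning
esxcli iscsi networkportal list --adapter vmhba36
```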

 

Finally, you need to set the IOPS to 1, so that after each I/O the system switches to the other path:

 

To change all existing datastores to 1 IOPS:

for i in `esxcli storage nmp device list | grep '^naa.600'` ; do esxcli storage nmp psp roundrobin deviceconfig set -t iops -I 1 -d $i; done

To make RR the default PSP for the alua satp (so that new datastores default to RR):

esxcli storage nmp satp set -s VMW_SATP_ALUA -P VMW_PSP_RR

 

Paul Hutchings
Super Advisor

Re: P4000 VSA in Production?

Thanks, and to confirm we already have it setup like that.

 

What I'm saying is that if I have, say, two nodes, if vmware-volume1 gets node1 assigned as the gateway node for a connection from my vSphere host, sure I'll have two sessions, one for each VMK, but they'll each go to node1.

 

So, if I have several volumes, sure, it balances across the nodes, but any individual volume will only ever be served by a single node, so the bottleneck is always the NICs (in reality it's not much of a bottleneck, but YKWIM).

 

Do you know with the VSA if you have 10gbps pNICs in the boxes hosting the VSAs if the vNIC limits the throughput for each VSA at one gig?

Tedh256
Frequent Advisor

Re: P4000 VSA in Production?

"What I'm saying is that if I have, say, two nodes, if vmware-volume1 gets node1 assigned as the gateway node for a connection from my vSphere host, sure I'll have two sessions, one for each VMK, but they'll each go to node1."

 

That is not my understanding - the VIP will redirect subsequent requests to other nodes in round-robin fashion - it is not a function of the requestor OS at that point, but simply how the P4000 redirector works.

 

I could be wrong - I have not looked at it in depth beyond the docs and what HP SEs have told me - but that has been my understanding. I'd love to know if I am wrong - please shoot me any info you have!

 

"Do you know with the VSA if you have 10gbps pNICs in the boxes hosting the VSAs if the vNIC limits the throughput for each VSA at one gig?"

 

I do not know for sure, but if you are using the VMXNET3 NICs in the VSA, they should not be limited to 1Gb.

 

 

Paul Hutchings
Super Advisor

Re: P4000 VSA in Production?

I think it's just a plain limit/restriction present on any OS that doesn't use the DSM MPIO module - so basically anything other than Windows :)

 

I'd love to be proved wrong, but I'm pretty sure I'm not: a given volume/LUN can only ever be served up from a single node, and regardless of the number of MPIO connections to that volume, they'll all be served by the same node.

5y53ng
Regular Advisor

Re: P4000 VSA in Production?

I just wanted to add to Ted's post about setting the number of IOPS per path. This setting is not persistent in ESX(i). You can add a script to rc.local to reapply the IOPS setting after rebooting your hosts. The process is detailed in the comments section of this article...

 

Look for a post in the comments by "Matt"

 

http://virtualgeek.typepad.com/virtual_geek/2009/09/a-multivendor-post-on-using-iscsi-with-vmware-vsphere.html

 

I would also like to add that I have 40 VSAs in seven clusters at four sites, hosting approximately 300TB of storage, and I have never had any issue. The most common nuisance I encounter is when an admin unknowingly powers off adjacent nodes in a cluster.
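The rc.local approach mentioned above could be sketched roughly like this - an assumption-laden fragment, reusing the naa.600 device pattern and IOPS=1 setting from earlier in the thread (adjust the pattern for your volumes):

```shell
# Hypothetical rc.local addition to reapply IOPS=1 to each P4000 device after a reboot
for i in $(esxcli storage nmp device list | grep '^naa.600'); do
  esxcli storage nmp psp roundrobin deviceconfig set --type=iops --iops=1 --device=$i
done
```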

Paul Hutchings
Super Advisor

Re: P4000 VSA in Production?

Thanks, that's very useful to know - are yours a mix of usage or all low IO?

 

In principle I'd have no objection to using them for everything, but again I'm assuming there's some artificial bottleneck beyond which it doesn't matter what hardware you throw at a VSA (else why pay $150k for an SSD P4x00 when you could DIY with consumer SSDs and use RAID/Network RAID to provide redundancy?).

5y53ng
Regular Advisor

Re: P4000 VSA in Production?

The environment is a mix in terms of disk usage, but there are some greedy SQL servers in our systems. My reason for going with the VSA was size constraints: there simply was not enough room for multiple 2U (or bigger) storage systems. The VSA was a savior with that in mind, as it allowed me to use all of my space to max out CPUs, memory, and connectivity.

 

If it makes you feel any better, I have managed to push 20k+ IOPS with a six-node cluster on 36 spindles. Don't ask about queue depths, but it can do it...

Paul Hutchings
Super Advisor

Re: P4000 VSA in Production?

Blimey, that's pretty encouraging! We don't push anything like that tbh. We "only" have a 2-node P4500 for our VMs, which has been sufficient.

 

I have this dream where I go out and buy a couple of Dell T620s or something similar, stuffed with 2.5" spindles, and just stick a bunch of VSAs on them. I'm just not sure if I'd be setting myself up for a fall, though I don't see how, as there'd be so many layers of redundancy at the hardware and node level.

5y53ng
Regular Advisor

Re: P4000 VSA in Production?

The VSA works pretty well IMO - definitely not the fastest setup, but it gets the job done. Ours have been in production for three years now. They are very reliable; just make sure everyone who works with them understands managers and how replication works, and you're good to go.

ccavanna
Advisor

Re: P4000 VSA in Production?

I have a pair of Dell R710s running a VSA cluster. They have LFF 600GB 15k SAS drives and have pushed 10k+ IOPS while doing Storage vMotions. We have 9 clusters over 9 sites, and this year we are implementing 15 more locations with VSA clusters. The only thing I need to figure out to move everything to ESXi is how to get the storage to automatically rescan after a reboot or power outage, so everything boots on its own and I don't have to manually intervene.

Paul Hutchings
Super Advisor

Re: P4000 VSA in Production?

Thanks both. Can I ask whether your VMs run on the same physical hosts that the VSAs run on?

 

I can see a pair of servers with lots of spindles and a lot of spare CPU and RAM, but it sounds simpler/neater splitting out the roles, so that the boxes that do storage only do storage (with the VSAs set to automatic start), and the boxes that run guests only run guests.

 

It seems too messy otherwise trying to get the "all-in-one" solution to boot up cleanly etc.

5y53ng
Regular Advisor

Re: P4000 VSA in Production?

I agree getting the hosts to see the storage is difficult after a power outage. Since all of the storage nodes are powered off, none of the volumes are available.

 

If only HP and VMware would team up to come up with a way to perform a delayed rescan... :)

 

Scripting the rescan wouldn't be difficult, but the timing is. I wonder if pinging the VIP would be a valid test for a script to continue and start a rescan, and maybe then power up all the virtual machines? If the VIP were available I suppose you could retrieve a list of targets, but whether or not the targets are available is another story...
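As a rough sketch of that ping-the-VIP idea - very much an assumption, with a made-up VIP address and adapter number - a startup script might look something like:

```shell
# Hypothetical startup script: wait for the cluster VIP before rescanning.
# The VIP (10.0.0.50) and adapter (vmhba36) are placeholders for illustration.
VIP=10.0.0.50
until ping -c 1 "$VIP" >/dev/null 2>&1; do
  sleep 10                 # keep waiting until a VSA answers on the VIP
done
sleep 60                   # extra settle time: the VIP answering doesn't mean targets are up
esxcli storage core adapter rescan --adapter vmhba36
# Power on every registered VM once storage is visible (skip the header line)
for vmid in $(vim-cmd vmsvc/getallvms | awk 'NR>1 {print $1}'); do
  vim-cmd vmsvc/power.on "$vmid"
done
```

As noted above, the VIP answering still doesn't guarantee the targets themselves are available, hence the arbitrary extra sleep.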

 

 

ccavanna
Advisor

Re: P4000 VSA in Production?

Well... We are running ESX 4.1 U2 on all of our VSA hosts, and yes, we are running VMs off the same hosts. We run a SQL server and a domain controller split over the 2 hosts. One thing to watch out for is network loops. We have had that happen in the facilities, caused by someone in the plant, and had to reboot the hosts and everything. That is the only real issue I've had over the past year with them, and it's only happened twice. I found a script on the old forums and have been using that with some tweaks of my own. It works great for power-outage-type scenarios, and truthfully we have had several power outages in our manufacturing facilities and no one has had to intervene yet; everything has come back up cleanly and run like a top.

 

The reason why we haven't moved to ESXi is because HP and VMware haven't made it easy to do. But I've heard it's possible with the vMA and some Perl scripting. I just have not had any time to investigate that, since I am currently working on replacing our current infrastructure: blade chassis, blades, and about 90TB worth of P4500 SAS and MDL SAS.

Paul Hutchings
Super Advisor

Re: P4000 VSA in Production?

Thanks both. All very encouraging. It also leans me towards dedicated VSA boxes, which isn't that big of a deal given how cheaply you can stuff a 2U full of disks.

Paul Hutchings
Super Advisor

Re: P4000 VSA in Production?

Back on this. Does anyone know how maintenance/support on the VSAs works?

 

We got a 10 pack with our virtualisation SAN so I'm assuming that if we renew the care pack on the SAN SKU it also covers support on the VSAs?

 

I can't see a single SKU for VSA support anywhere though.

ccavanna
Advisor

Re: P4000 VSA in Production?

This is the SKU we use, as we have purchased additional VSA licenses over the 10. The purchase of an additional license includes 1 yr of support. To renew the support on them, the SKU I have is UW577E.

 

I am also under the impression that when you use the licenses off the 10 pack, the support goes against the physical node, but I am not sure of that.

David_Tocker
Regular Advisor

Re: P4000 VSA in Production?

A quick idea - since most lease companies have already collected the price of the equipment, they often will sell the gear to you at a reasonable cost. Perhaps find out how much it will cost to buy the 3yr-old gear - it may be less than you expect.

Regards.

David Tocker
Nate Stuyvesant
Occasional Visitor

Re: P4000 VSA in Production?

Are the HP P4000/4500 (fka LeftHand SAN/iQ) modules ALUA-compliant? If not, would this be correct:

esxcli storage nmp satp set --default-psp VMW_PSP_RR --satp VMW_SATP_DEFAULT_AA

instead of

esxcli storage nmp satp set --default-psp VMW_PSP_RR --satp VMW_SATP_ALUA

5y53ng
Regular Advisor

Re: P4000 VSA in Production?

Not sure, but I am using VMW_SATP_DEFAULT_AA without any problems.
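One way to check which SATP and PSP have actually claimed your P4000 volumes on a given host (reusing the naa.600 device pattern from earlier in the thread; adjust for your volumes):

```shell
# Show the claiming SATP and active PSP for each P4000 device on this host
esxcli storage nmp device list | grep -E '^naa.600|Storage Array Type:|Path Selection Policy:'
```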