StoreVirtual Storage

Re: vmWare Volumes and Gateway

 
5y53ng
Regular Advisor

Re: vmWare Volumes and Gateway

Hi dcc,

Are your volumes configured as Network RAID-10?
dcolpitts
Frequent Advisor

Re: vmWare Volumes and Gateway

Yep - Network RAID-10.

 

dcc

Dirk Trilsbeek
Valued Contributor

Re: vmWare Volumes and Gateway

Then they shouldn't be offline. In rare cases the time the remaining nodes need to detect the missing node can cause applications to crash (especially MS SQL Server), but usually there is no disconnect. Do you have a failover manager in your cluster? How many nodes?

dcolpitts
Frequent Advisor

Re: vmWare Volumes and Gateway

I do have a failover manager.  I can replicate the problem at will, both in my lab (using VSA on ESXi) and in a customer's P4300G2 production environment.  In both cases there are two nodes plus a failover manager.

 

When a node reboots, the datastore(s) that node is presenting go offline.

 

dcc

Dirk Trilsbeek
Valued Contributor

Re: vmWare Volumes and Gateway

That definitely shouldn't happen. Are the volumes offline in the CMC?

dcolpitts
Frequent Advisor

Re: vmWare Volumes and Gateway

No. And that is why I find it weird. (BTW - I didn't mean to hijack your thread, Peter.)

dcc
Dirk Trilsbeek
Valued Contributor

Re: vmWare Volumes and Gateway

But you did connect to these volumes using the cluster VIP, not the node IP address? What kind of client (VMware ESX, Windows Server, etc.) connects to these volumes?

dcolpitts
Frequent Advisor

Re: vmWare Volumes and Gateway

Yes - they are connected to the VIP.  All servers are ESXi 5.1.0, build 914609 (fresh install from VMware-ESXi-5.1.0-799733-HP-5.32.5.iso, then upgraded with VCUM to the current build). 

 

In my lab, I have DL380G6s using 1GbE software iSCSI via two of the NC382i NICs (Broadcom hardware iSCSI appears to be flaky at best, so I ended up on software iSCSI), configured with dynamic target discovery pointing at the VIP's IP (there is also a 4-port NC364T in each host, for a total of eight 1GbE ports for VM Network, vMotion, etc.). The hosts are connected to a ProCurve 2848, and flow control is enabled on the ports that are part of the iSCSI VLAN.  From the ESXi hosts, I can ping all IPs associated with the storage system (in its VLAN).
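
For reference, this is roughly how I sanity-check the iSCSI path from a host: vmkping the cluster VIP over the iSCSI vmkernel port, then confirm the software iSCSI adapter is present. vmk1 below is just a placeholder for whichever vmkernel port carries iSCSI in your setup.

vmkping -I vmk1 192.168.222.101

esxcli iscsi adapter list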

 

In the production environment, I have DL380G7s using NC550 10GbE to connect to the P4300G2 nodes (also at 10GbE - a second NC550 is used for the VM Network). Bonding is enabled.  Storage and servers are connected to a ProCurve 5400zl.

 

I pretty much tried to follow "HP LeftHand Storage with VMware vSphere: Design considerations and best practices" (4AA3-6918ENW.pdf) as much as I could.

 

In my test lab, I've left Jumbo frames turned off (I do believe the VSA does not support enabling Jumbo frames), although in the customer's production environment, Jumbo frames are turned on.
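
If anyone wants to double-check the MTU on the host side (1500 in my lab, 9000 at the customer site), the vmkernel ports and vSwitches can be read back with:

esxcli network ip interface list

esxcli network vswitch standard list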

 

To ensure I didn't miss something in my configuration, I used the HP LeftHand CLI Shell to create my volumes, servers, and assignments, and used esxcli to configure my ESXi hosts.

 

To create my volumes in CLI, I used this command:

 

cliq createVolume volumeName=p4000-datastore01 description=p4000-datastore01 clusterName=p4000-cluster01 size=300GB

To create my servers in CLI, I used this command (192.168.11.6 = vcenter, 192.168.222.131 = VSA node 1):

 

cliq createServer serverName=esxi01-vmhba37 description=esxi01-vmhba37 initiator=iqn.1998-01.com.vmware:esxi01-6db9xxxx:37 useCHAP=0 allowiSCSIAccess=1 vipLoadBalance=1 controllingserver=192.168.11.6 login=192.168.222.131 userName=xxxx passWord=xxxx


To assign my servers in CLI, I used this command:

 

cliq assignVolumeToServer volumeName=p4000-datastore01 serverName=esxi01-vmhba37 login=192.168.222.131 userName=xxxx passWord=xxxx
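
To confirm the assignment actually took, I believe the volume can be read back (its server/ACL section should now list the server) with something along these lines, using the same credentials as above:

cliq getVolumeInfo volumeName=p4000-datastore01 login=192.168.222.131 userName=xxxx passWord=xxxx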

 

To add the dynamic targets to ESXi, I used this command (192.168.222.101 = VIP):

esxcli iscsi adapter discovery sendtarget add -A vmhba37 -a 192.168.222.101:3260
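
After adding the target, I rescan the adapter and check that sessions actually come up against the VIP, roughly like this:

esxcli storage core adapter rescan -A vmhba37

esxcli iscsi session list -A vmhba37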

 

I obviously edited and repeated these commands as required...

 

dcc

5y53ng
Regular Advisor

Re: vmWare Volumes and Gateway

When you shut off a node, do you see a loss of quorum in the CMC? That's the only thing I can think of based on what I have read in this thread. Make sure you have a manager running on each node.
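
If you want to check from the CLI rather than the CMC, I believe running getNsmInfo against each node (and the FOM) will show whether its manager is running - something like this, with the IP and credentials as placeholders:

cliq getNsmInfo login=192.168.222.131 userName=xxxx passWord=xxxx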

 

dcolpitts
Frequent Advisor

Re: vmWare Volumes and Gateway

I haven't noticed loss of quorum...  The manager appears to be running.  See the attached text file that has the getNsmInfo output from both VSA nodes plus the FOM.

 

dcc