HPE Nimble Storage Solution Specialists

ali_mir
Occasional Contributor

vVol based VM's disks are connected simultaneously from 2 different hosts

Hello Everyone,

This is my very first post about Nimble storage arrays on this forum, so apologies if I posted it in the wrong location.

We have a 4 node cluster:

  • ESXi 7U2
  • iSCSI-based connectivity to vVols
  • All best practices are followed per VMware documentation for iSCSI connections and port binding

Storage:

  • HPE HF20 on 5.2.1.700
  • It hosts a mixture of VMFS and vVol
  • The array runs Active/Standby

Issue we are currently facing:

Some VMs' vmdks are getting connected (not accessed) from 2, or sometimes 3, hosts simultaneously (this is visible only from the SAN web GUI). Following HPE Nimble Support's and VMware's advice, we turned off "iSCSIunsupportedblockandpages", which fixed some issues to a certain extent, but not the problem mentioned in the subject.

(What we originally faced was that no storage APIs, such as vMotion or snapshot, could be executed against some VMs; those VMs showed 0 B of space. That issue is gone after the change above, but some VMs are still being accessed by 2 or more hosts.)

We have already opened a couple of tickets for 2 locations showing similar symptoms, but to date the issue is not fixed.

support_s
System Recommended

Query: vVol based VM's disks are accessed simultaneously from 2 different hosts

System recommended content:

1. HPE Serviceguard for Linux with VMware virtual machines

 


mamatadesaiNim
HPE Blogger

Re: vVol based VM's disks are connected simultaneously from 2 different hosts

You mentioned port binding.  Do you have any active port binding?  How many iSCSI vmknics do you have, and are they on the same L3?

HPE Nimble Storage
mamatadesaiNim
HPE Blogger

Re: vVol based VM's disks are connected simultaneously from 2 different hosts

Re: subject of this post.  When a VM is live-migrating from one host to another, you will see the disks have connections from both hosts for a brief period of time. 

Is the vvol datastore still reporting size as zero bytes in vCenter UI?

Do you have an active case open with Nimble Support?

HPE Nimble Storage
ali_mir
Occasional Contributor

Re: vVol based VM's disks are connected simultaneously from 2 different hosts

  • vCenter UI size reporting is fixed by implementing the unofficial ESXi 7 change (disabling "iSCSIUnsupportedBlockandPages")
  • The connection to 2 or more hosts is not brief but continuous
  • Binding is done following best practices, and it is on L2; no routing happens in between
mamatadesaiNim
HPE Blogger

Re: vVol based VM's disks are connected simultaneously from 2 different hosts

So there are multiple (more than 1) vmknics, they are on different L3 networks, and iSCSI port binding is enabled?

HPE Nimble Storage
ali_mir
Occasional Contributor

Re: vVol based VM's disks are connected simultaneously from 2 different hosts

iSCSI network:

  • 2 vmnics (physical ports)
  • 2 vmknics, each under a separate port group
  • Each port group uses static failover, Active/Unused, to vmnicN and vmnicN+1

Port binding:

  • Port binding is set to use both iSCSI vmknics
  • Both the ESXi iSCSI initiators and the iSCSI traffic on the SAN are on Layer 2 (no Layer 3)
mamatadesaiNim
HPE Blogger

Re: vVol based VM's disks are connected simultaneously from 2 different hosts

The IP subnets of the bound vmknics: are they the same or different?

I'm sorry for asking this question again, but I still cannot tell from your answers.

HPE Nimble Storage
ali_mir
Occasional Contributor

Re: vVol based VM's disks are connected simultaneously from 2 different hosts

No problem.
The IP subnet of all vmknics associated with iSCSI, and of the SAN iSCSI traffic itself, is the same (which means everything works at Layer 2).

No Layer 3 routing is involved; when I said Layer 2, I meant all subnets are the same.

mamatadesaiNim
HPE Blogger

Re: vVol based VM's disks are connected simultaneously from 2 different hosts

What we need to know is the IPs of the vmknics and their netmasks => are they different L3s?

You do not have to share your exact IPs but let me know please.  e.g.:
* vmk1 (iSCSI1): A.B.1.2/24
* vmk2 (iSCSI2): A.B.3.4/24
Here, vmk1 and vmk2 are on DIFFERENT L3 networks.
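To make the check concrete, here is a small sketch using only Python's standard `ipaddress` module. The addresses below are placeholders in the spirit of the example above, not anyone's real configuration; the point is that the netmask decides whether two vmknic addresses fall in the same L3 network:

```python
import ipaddress

def same_l3(addr_a: str, addr_b: str) -> bool:
    """True if both interface addresses (in CIDR form) fall in the same L3 network."""
    net_a = ipaddress.ip_interface(addr_a).network
    net_b = ipaddress.ip_interface(addr_b).network
    return net_a == net_b

# With a /24 mask the third octet matters, so these are DIFFERENT L3 networks.
print(same_l3("10.20.1.2/24", "10.20.3.4/24"))   # False

# With a /16 mask only the first two octets are compared, so these land
# in the SAME L3 network (10.20.0.0/16).
print(same_l3("10.20.1.2/16", "10.20.3.4/16"))   # True
```

Running the bound vmknic addresses and the array's discovery/data IPs through a check like this is a quick way to confirm everything really sits in one subnet.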

HPE Nimble Storage
ali_mir
Occasional Contributor

Re: vVol based VM's disks are connected simultaneously from 2 different hosts

That is exactly what I said, I think.
All vmknics share the same IP subnet:
VMK1 (iSCSI-1 PG) = A.B.C.15/24
VMK2 (iSCSI-2 PG) = A.B.C.16/24

SAN IP range (Nimble managed) = A.B.C.20/24 - A.B.C.28/24

 

 

mamatadesaiNim
HPE Blogger

Re: vVol based VM's disks are connected simultaneously from 2 different hosts

Thanks for confirming.  Let's do a little more debugging.  Please let me know which of these fail.

* Run "esxcli storage vvol storagecontainer list" on all ESXi hosts and make sure Size(MB) shows the expected number and "Accessible: true" appears for all vVol datastores.
* Run "esxcli storage vvol protocolendpoint list" and make sure a device is listed as "eui.something" and shows "Accessible: true" and "Configured: true".
* Note the eui ID of the device and run "esxcfg-mpath -b -d eui.something" => note how many paths you see.  If you have NCM installed, you should see 2 paths per vmknic.
* If you are able to SSH to your Nimble (user: admin), we can try a few more:

* vm --list: this shows all your vVol VMs.  Pick one that has this problem with multiple connections.
* vm --info <vvol-vm-name>: this shows all volumes for that VM.  Pick the volume name that ends with VMDK.
* vol --info <vvol-vmdk>: Check "Access Control List:"; only one "Initiator Group:" should be listed in this section.  Check "Connected Initiators:" and make sure only IQNs of the same host show up here.
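For that last check, one way to spot a vmdk vVol with logins from more than one host is to group the IQNs under "Connected Initiators:" by host name. The sketch below is illustrative only: the sample text, the section labels, and the assumption that each IQN embeds the ESXi host name followed by a suffix (common for VMware software initiators) are placeholders, not an exact copy of the Nimble CLI output:

```python
import re

# Hypothetical excerpt of "vol --info <vvol-vmdk>" output (labels assumed).
sample = """
Access Control List:
  Initiator Group: esx-cluster-ig
Connected Initiators:
  iqn.1998-01.com.vmware:esx-host-01-12ab34cd
  iqn.1998-01.com.vmware:esx-host-02-56ef78ab
"""

def connected_hosts(vol_info: str) -> set:
    """Extract the host part of each VMware software-initiator IQN.
    Assumes the 'hostname-suffix' convention and strips the trailing suffix."""
    iqns = re.findall(r"iqn\.1998-01\.com\.vmware:(\S+)", vol_info)
    return {name.rsplit("-", 1)[0] for name in iqns}

hosts = connected_hosts(sample)
if len(hosts) > 1:
    # More than one distinct host is logged into this vmdk vVol.
    print("WARNING: volume connected from multiple hosts:", sorted(hosts))
```

For a healthy (non-migrating) VM you would expect exactly one host here; two or more matches the symptom described in this thread.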

HPE Nimble Storage