11-18-2016 02:16 PM
MSA 1040/2040 iSCSI Direct Connect with VMware vSphere
Hello,
I'm hoping someone can shed some light on this little annoyance.
SPOCK has a note for MSA 1040 & 2040 iSCSI configurations with vSphere that says "Direct Connect is not supported." Why is this? It looks to be supported for all the other OS options, so why not vSphere?
Here are a couple of examples:
MSA 1040 10Gb iSCSI VMware vSphere 6.0 (ESXi 6.0) x64
iSCSI Initiator Notes
1) All standard ProLiant NICs are supported in conjunction with the OS iSCSI Initiator. Direct Connect is not supported.
MSA 1040 10Gb iSCSI Microsoft Windows Server 2012 x64 Hyper-V
iSCSI Initiator Notes
1) All standard ProLiant 10G NICs are supported in either Direct Connected or Switch Connected in conjunction with the OS iSCSI Initiator.
01-16-2018 02:25 AM - edited 02-13-2018 11:42 AM
Re: MSA 1040/2040 iSCSI Direct Connect with VMware vSphere
I realise this is an old post, but I have been looking at this myself recently and found this thread, https://communities.vmware.com/thread/536168, which explains the restriction: apparently VMware have not provided the documented evidence for it.
01-16-2018 06:04 AM
Re: MSA 1040/2040 iSCSI Direct Connect with VMware vSphere
01-18-2018 04:20 AM
Re: MSA 1040/2040 iSCSI Direct Connect with VMware vSphere
I found an important response from an HPE product specialist in the VMware forum below:
https://communities.vmware.com/thread/536168
Apart from the above, you can refer to the link below as well:
https://www.vmware.com/pdf/vsphere4/r41/vsp_41_iscsi_san_cfg.pdf
I work for HPE
01-18-2018 08:16 AM
Re: MSA 1040/2040 iSCSI Direct Connect with VMware vSphere
Thanks @SUBHAJIT KHANBARMAN_1. I don't know why @Kerry3's VMTN link doesn't work. It appears to be the same URL.
I went ahead and replied there, and I'll repost part of my reply here:
I'd like to point out that other storage vendors do indeed support a direct-attached iSCSI configuration, e.g., Dell EMC Unity arrays. See their document Configuring Hosts to Access VMware Datastores. In Chapter 3, entitled Setting up a host to use Unity VMware VMFS iSCSI datastores, there's a note that says, "Directly attaching an ESX host to a Unity system is supported."
So, wouldn't it make sense for HPE to test this configuration and change their policy to support direct-attached iSCSI with MSA? That is, unless there actually is a technical reason not to do so.
Consider these points:
- People report that they do successfully use this configuration with MSA
- VMware merely "omit the documented evidence"; they don't specifically prohibit it
- The storage vendor has the last word on supportability
- By not supporting it, HPE puts their storage solutions at a competitive disadvantage
01-18-2018 09:38 AM
Re: MSA 1040/2040 iSCSI Direct Connect with VMware vSphere
Hello again
I do actually use this setup - direct connection to an HP MSA - and it does work. However, I have found that I do not get failover to the other controller if I simulate a NIC failure at the server (i.e., unplug the cable): it looks like the connections are made only to the owning controller and do not fail over to the other one. I had thought that ALUA might mean this would work (VMware shows that it is aware the SAN supports ALUA), but in my case it does not fail over. I have only 1 vdisk but several LUNs. Of course, it may be that I misunderstand the concept. At the moment, though, I only get failover if the whole controller goes (well, I assume that would work - I have not tested it!).

I am waiting for HP to tell me whether, in principle, it should work, though I expect to be told that as it is not supported, blah blah. I can see how having 2 switches and 2 VLANs, as suggested by Benjamin, is the 'best practice', but I was also told by an HP support engineer that the reason the MSA has so many ports is to allow its use without the need for a switch. A single switch seems like an even bigger point of failure than a NIC failure in a server (which is where my current failover 'fails'). The other post also suggests that failover to the other controller should work. All I can say is: it doesn't for me.
01-18-2018 09:41 AM
Re: MSA 1040/2040 iSCSI Direct Connect with VMware vSphere
Oh, and BTW - apologies for the link - I just checked and there was a space at the end!
01-18-2018 10:51 AM
Re: MSA 1040/2040 iSCSI Direct Connect with VMware vSphere
So when you unplug that cable, do you lose access to the storage entirely? How many paths does the host see for each LUN?
I would expect failover to work, because HPE does support direct-attached iSCSI with other OSes, just not VMware.
Often, ALUA arrays like this will fail over controller ownership of the volume, but only after a certain threshold of I/O is received through the non-optimized path.
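For what it's worth, that threshold behaviour can be sketched in a few lines of Python. To be clear, this is a conceptual illustration only, not actual MSA firmware logic, and the class name and threshold value are invented placeholders:

```python
# Conceptual sketch of implicit ALUA ownership transfer. NOT actual MSA
# firmware logic; the threshold value here is an invented placeholder.

class AluaVolume:
    def __init__(self, owner, transfer_threshold=1000):
        self.owner = owner                   # controller currently owning the vdisk
        self.non_optimized_io = 0            # consecutive I/Os via the non-owning controller
        self.transfer_threshold = transfer_threshold

    def submit_io(self, via_controller):
        """Count I/O per controller; hand ownership over once enough I/O
        arrives through the non-optimized (non-owning) path."""
        if via_controller == self.owner:
            self.non_optimized_io = 0        # optimized path still in use, reset counter
        else:
            self.non_optimized_io += 1
            if self.non_optimized_io >= self.transfer_threshold:
                self.owner = via_controller  # implicit ALUA transition
                self.non_optimized_io = 0
        return self.owner

# A NIC failure to controller A forces all I/O through controller B's path:
vol = AluaVolume(owner="A", transfer_threshold=3)
for _ in range(3):
    current = vol.submit_io(via_controller="B")
print(current)  # ownership moves to "B" once the threshold is crossed
```

In Kerry's direct-connect scenario, if the host truly has a live path to the non-owning controller, sustained I/O on that path should eventually trigger this kind of transfer; the fact that it doesn't suggests the paths to the other controller aren't usable at all after the NIC failure.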
01-20-2018 04:19 PM - edited 01-20-2018 04:19 PM
Re: MSA 1040/2040 iSCSI Direct Connect with VMware vSphere
@Kerry3: I just checked and there was a space at the end!
You can edit it using Post Options > Edit Reply and click on the hyperlink menu (chain).
And you can delete your followup post.
01-24-2018 07:05 AM - edited 01-24-2018 07:09 AM
Re: MSA 1040/2040 iSCSI Direct Connect with VMware vSphere
Hi
Yes - that ESXi server loses access to the storage. Just to be clear, I only have one vdisk (so in the MSA there is only one 'ownership'), and I have 5 LUNs on that vdisk. So, can one LUN be changed over independently? Also, bear in mind that the other 2 ESXi servers are accessing the same LUNs on the other controller (though they 'should' be able to fail over, as they also have access via 2 direct connections, one to each controller).
I can see both the Active and Active (I/O) paths for all the LUNs in VMware, so as far as I can tell there are 2 paths to all LUNs.
I hope that makes sense
Kerry
PS - Have edited link in earlier post as suggested