Virtual FC-HBAs (NPIV) for LINUX Virtual Machines under vSphere/VMware
06-18-2010 08:39 AM
But here's what I want to do. I heard these C7000 enclosures and HP's BladeSystems offer Virtual Connect. It supposedly has the ability to do NPIV, where the main FC adapters of the C7000 can be allocated virtually to each blade server. Like:
BladeA: will have Virtual Connect NPIV 11 12 13 14 (from VC1) and 21 22 23 24 (from VC2).
On BladeA I will have 4 Linux virtual machines:
VLinuxA: will get NPIV 11 and 21
VLinuxB: will get NPIV 12 and 22
...
VLinuxD: will get NPIV 14 and 24
... and so on.
So on my storage arrays, allocations will still be the same as for our physical hosts, since each virtual Linux machine will have its own host group based on its own NPIV virtual FC-HBA adapter.
I think this is better and faster than having the Linux Virtual Machines simply presented with VMDK disks.
But is this at all possible and supported? Are there drivers needed, and are they provided by VMware Tools? Will this impact the vMotion-ability of the Linux machines?
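For illustration, here is a minimal Python sketch of the numbering scheme above - one virtual WWPN per fabric side for each Linux guest. The base WWN and the formatting are made up for illustration; real virtual WWNs would come from Virtual Connect's managed ranges.

```python
# Hypothetical sketch of the per-guest NPIV numbering above: guest N gets
# "1N" from the VC1 side and "2N" from the VC2 side. The base WWN below is
# fictitious, not a real allocation.

GUESTS = ["VLinuxA", "VLinuxB", "VLinuxC", "VLinuxD"]
BASE = 0x50060B0000C70000  # made-up base WWN

def fmt(wwn: int) -> str:
    """Format a 64-bit WWN as colon-separated hex bytes."""
    return ":".join(f"{(wwn >> s) & 0xFF:02x}" for s in range(56, -8, -8))

for i, guest in enumerate(GUESTS, start=1):
    vc1 = BASE | (0x10 + i)  # the "11, 12, 13, 14" side
    vc2 = BASE | (0x20 + i)  # the "21, 22, 23, 24" side
    print(f"{guest}: {fmt(vc1)} (via VC1) / {fmt(vc2)} (via VC2)")
```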
06-21-2010 08:19 AM
Re: Virtual FC-HBAs (NPIV) for LINUX Virtual Machines under vSphere/VMware
Should I post in the Storage/SAN Fora?
06-21-2010 11:02 AM
Re: Virtual FC-HBAs (NPIV) for LINUX Virtual Machines under vSphere/VMware
You have to set up RDM connections for the VM first. Then you can assign virtual WWNs to the VM. Then you must add these WWNs to a WWN-based FC switch zoning and to the storage array's LUN masking (in addition to the already existing RDM setup). Now you can power on the VM and check the log files to see whether the NPIV feature starts - if there is an error, the hardware FC driver shuts down the NPIV feature (well, it did when I tested it on ESX 3.5...), but the VM still has access to the data through the RDM!!
Again, NO Fibre Channel adapter within the VM and no access to Fabric services (SNS, ...) from the VM.
In most cases, RDMs are not (much) 'faster' than access via VMDK files. It seems to be a myth that is passed on and on. One of HP's groups, for example, tested it some time ago and confirmed that they are pretty equal in performance.
Now, Virtual Connect...
Think of it as a 'port aggregator'. It logs in as an end-node (using N_ports) to a Fibre Channel switch, so the VC-FC module itself uses NPIV when talking to an FC switch.
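For what it's worth, the "assign virtual WWNs to the VM" step can be scripted against the vSphere API. Below is a rough sketch using pyVmomi (which postdates this thread); the vCenter address, credentials, and the VM name "VLinuxA" are placeholders, and as described above you still have to zone and mask the generated WWNs before powering the VM on.

```python
# Rough pyVmomi sketch: ask vCenter to generate NPIV WWNs for a VM that
# already has its RDM(s) configured. Host, credentials, and VM name are
# placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="secret", sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "VLinuxA")

    spec = vim.vm.ConfigSpec()
    spec.npivWorldWideNameOp = "generate"   # have vCenter assign the WWNs
    spec.npivDesiredNodeWwns = 1
    spec.npivDesiredPortWwns = 4            # e.g. one vport WWPN per path
    vm.ReconfigVM_Task(spec=spec)
    # Zone and LUN-mask the generated WWNs *before* first power-on, as above.
finally:
    Disconnect(si)
```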
01-10-2011 11:04 AM
Re: Virtual FC-HBAs (NPIV) for LINUX Virtual Machines under vSphere/VMware
"Now, virtual connect...
Think of it as a 'port aggregator'. It logs in as an end-node (using N_ports) to a Fibre Channel switch, so the VC-FC module itself uses NPIV when talking to an FC switch."
Yes, I now understand. Each physical blade server is assigned a WWN, and what the switch sees on the port will be the physical Virtual Connect WWN plus the allocated server's NPIV'd WWN - right?
Well... we're now in the process of deploying Linux guests to use RDM with NPIV, and it does not seem to work. Following http://www.brocade.com/forms/getFile?p=documents/white_papers/white_papers_partners/NPIV_ESX4_0_GA-TB-145-01.pdf -- we've assigned the VM its own WWID, but the underlying ESX OS does not even see the vports -- hence the switches do not see the generated WWIDs...
Does this mean we cannot NPIV Linux guests on blade servers' HBAs that are already NPIV'd? Or does the Brocade recipe above not apply to C7000/Blades?
We're planning to open a case with VMware and HP Support on this one.
This issue is similar to:
http://communities.vmware.com/message/1670575?tstart=0
http://communities.vmware.com/thread/263054
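One way to sanity-check the "does not even see the vports" symptom from the host side is to walk the FC host entries in sysfs. This is the standard Linux fc_host interface; whether the ESX 4 service console exposes exactly the same attributes is an assumption on my part.

```python
# Sketch: list FC hosts and their WWPNs via sysfs. On a Linux host with a
# working NPIV setup, the virtual ports show up as extra fc_host entries
# with port_type "NPIV VPORT".
import glob
import os

def read_attr(host_dir: str, attr: str) -> str:
    try:
        with open(os.path.join(host_dir, attr)) as f:
            return f.read().strip()
    except OSError:
        return "n/a"

for host_dir in sorted(glob.glob("/sys/class/fc_host/host*")):
    print(os.path.basename(host_dir),
          "wwpn:", read_attr(host_dir, "port_name"),
          "type:", read_attr(host_dir, "port_type"),
          "state:", read_attr(host_dir, "port_state"))
```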
01-11-2011 09:59 AM
Re: Virtual FC-HBAs (NPIV) for LINUX Virtual Machines under vSphere/VMware
Pre-present a LUN or LUNs. The NPIV WWID will thence be persistently visible on the SAN whenever the VM is up.
And it does work on blade servers' HBAs which are themselves already NPIV'd out of the Flex-10/Virtual Connect modules. Kind of an NPIV-over-NPIV thing.
01-12-2011 12:02 AM
Solution
When the FC driver activates NPIV and does not find a LUN for the WWN, it tears down NPIV again. You must do the switch zoning and LUN masking in advance.
I haven't played with it in recent releases, but in the early days you only had the ESX logfiles for troubleshooting.
Even when NPIV did not come up, the VM had disk access through the traditional VMkernel RDM connection anyway. (How is that for redundancy ;-)
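In other words, the ordering matters: zoning and masking first, power-on last. Here is a pseudo-orchestration sketch of that sequence; all three helpers are hypothetical stand-ins for whatever switch, array, and vSphere tooling you actually use.

```python
# Pseudo-orchestration of the ordering described above. The three helper
# functions are hypothetical placeholders, not a real switch/array API.

def zone_wwpn(wwpn: str) -> None:
    print(f"[switch] add {wwpn} to the WWN zone")      # placeholder

def mask_lun(wwpn: str, lun_id: int) -> None:
    print(f"[array] present LUN {lun_id} to {wwpn}")   # placeholder

def power_on_vm(vm_name: str) -> None:
    print(f"[vSphere] power on {vm_name}")             # placeholder

def provision_npiv_vm(vm_name: str, vwwpns: list[str], lun_id: int) -> None:
    # 1+2. Zone the virtual WWPNs and add them to the LUN masking *before*
    # first power-on (in addition to the existing RDM presentation).
    for wwpn in vwwpns:
        zone_wwpn(wwpn)
        mask_lun(wwpn, lun_id)
    # 3. Only now power on: if the driver's NPIV login finds no LUN behind
    # the new WWPN, it tears the vport down again.
    power_on_vm(vm_name)

provision_npiv_vm("VLinuxA", ["50:06:0b:00:00:c7:00:11"], 7)
```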
01-12-2011 07:55 AM
Re: Virtual FC-HBAs (NPIV) for LINUX Virtual Machines under vSphere/VMware
"In most cases, RDMs are not (much) 'faster' then an access via VMDK files. It seems to be a myth that is passed on and on. One of HP's groups, for example, tested it some time ago and confirmed that they are pretty equal in performance."
For namby-pamby scenarios - yes.
But for I/O-hungry databases it's a different story, good sir. It is still all about isolation, striping, etc.
01-12-2011 07:59 AM
Re: Virtual FC-HBAs (NPIV) for LINUX Virtual Machines under vSphere/VMware
NPIV management efficiency - I am not so sure...
I haven't tested the latest implementations, but how good is that additional work of zoning and presentation when the VM falls back to the 'traditional RDM' because a minor error caused NPIV to fail?
The only reason I can see is if you want to identify traffic from the VM through the SAN with the help of the NPIV WWPNs. But then, maybe I'm partially blind ;-)
01-12-2011 09:40 AM
Re: Virtual FC-HBAs (NPIV) for LINUX Virtual Machines under vSphere/VMware
Without NPIV and RDM, provisioning to zillions of guests would be a nightmare.
With NPIV, you TREAT a virtualized host no differently than a physical one. It has its own "zone", its own host group in whatever array one uses. I manage storage and have been shepherding storage management in my shop -- and I am very intimate with the subject of storage management...
It also allows for isolation of "traffic" to a particular virtual server by way of special HBAs on the physical host's end.
With Linux KVM virtualisation, I do see the performance possibilities of using NPIV'd storage. With ESX - since RDMs are pointers to actual raw disks - I figure the performance advantages would be the same, sir.
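On the KVM side, the standard Linux way to bring up such a vport is the fc_host sysfs interface - roughly like the sketch below. The host number and both WWNs are made-up placeholders; this needs root, an NPIV-capable HBA/driver, and an NPIV-enabled fabric.

```python
# Sketch: create an NPIV vport on a Linux/KVM host through the fc_host
# sysfs interface. Host number and WWNs are fictitious placeholders.

VPORT_CREATE = "/sys/class/fc_host/host5/vport_create"  # pick your physical HBA
WWPN = "2101001b32a9d001"  # made-up virtual WWPN
WWNN = "2001001b32a9d001"  # made-up virtual WWNN

# The kernel expects "<WWPN>:<WWNN>" written to vport_create; the new vport
# then appears as an additional fc_host (port_type "NPIV VPORT").
with open(VPORT_CREATE, "w") as f:
    f.write(f"{WWPN}:{WWNN}")
```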