
Alzhy
Honored Contributor

Virtual FC-HBAs (NPIV) for Linux Virtual Machines under vSphere/VMware

The underlying infrastructure for our fledgling Linux VM ecosystem under vSphere virtualization is a C7000 enclosure using BL-series blades (G6). A different group manages VMware, so there is sometimes a disconnect.

But here's what I want to do. I hear these C7000 enclosures and HP's blade systems offer Virtual Connect, which supposedly has the ability to do NPIV, where the main FC adapters of the C7000 can be allocated virtually to each blade server. Like:

BladeA: will have Virtual Connect NPIV WWNs 11, 12, 13, 14 (from VC1) and 21, 22, 23, 24 (from VC2).

On BladeA I will have 4 Linux virtual machines:

VLinuxA: will get NPIV 11 and 21
VLinuxB: will get NPIV 12 and 22
...
VLinuxD: will get NPIV 14 and 24
...and so on.

So on my storage arrays, allocations will still be the same as for our physical servers, since each virtual Linux machine will have its own host group based on its own NPIV virtual FC-HBA.
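
To make the intended layout concrete, here is a rough sketch of the mapping (Python, just for illustration; the WWPN values are placeholders, not real fabric addresses):

```python
# Sketch only: each Linux VM gets one virtual WWPN from each Virtual Connect
# FC module, and the array-side host group for that VM is exactly that pair,
# just as it would be for a physical server. All WWPNs below are placeholders.
vm_npiv_map = {
    "VLinuxA": ("50:06:0b:00:00:c2:62:11", "50:06:0b:00:00:c2:62:21"),  # from VC1, VC2
    "VLinuxB": ("50:06:0b:00:00:c2:62:12", "50:06:0b:00:00:c2:62:22"),
    "VLinuxC": ("50:06:0b:00:00:c2:62:13", "50:06:0b:00:00:c2:62:23"),
    "VLinuxD": ("50:06:0b:00:00:c2:62:14", "50:06:0b:00:00:c2:62:24"),
}

for vm, (vc1_wwpn, vc2_wwpn) in vm_npiv_map.items():
    print(f"Host group {vm}: {vc1_wwpn}, {vc2_wwpn}")
```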

I think this is better and faster than simply presenting the Linux virtual machines with VMDK disks.

But is this at all possible and supported? Are there drivers needed, and are they provided by VMware Tools? Will this impact the vMotion-ability of the Linux machines?

Hakuna Matata.
10 REPLIES
Alzhy
Honored Contributor

Re: Virtual FC-HBAs (NPIV) for Linux Virtual Machines under vSphere/VMware

Anyone?
Should I post in the Storage/SAN Fora?
Hakuna Matata.
Uwe Zessin
Honored Contributor

Re: Virtual FC-HBAs (NPIV) for Linux Virtual Machines under vSphere/VMware

First, NPIV on ESX 3.5/4 *does not* give a VM an emulated (virtual) Fibre Channel adapter!
You have to set up RDM connections for the VM first. Then you can assign virtual WWNs to the VM. Then you must add these WWNs to the WWN-based FC switch zoning and the storage array's LUN masking (in addition to the already existing RDM setup). Now you can power on the VM and check the log files to see whether the NPIV feature starts - if there is an error, the hardware FC driver shuts down the NPIV feature (well, it did when I tested it on ESX 3.5...), but the VM still has access to the data through the RDM!
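
(For reference, that WWN-assignment step can also be scripted against the vSphere API. A minimal pyVmomi sketch, assuming the VM already has its RDMs attached; the vCenter host, credentials and the VM name "VLinuxA" are placeholders:)

```python
# Minimal pyVmomi sketch: ask vCenter to generate NPIV WWNs for an existing VM.
# Assumes the VM already has its RDM disks; host, credentials and VM name are
# placeholders, and certificate checking is disabled for a lab only.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "VLinuxA")

    spec = vim.vm.ConfigSpec()
    spec.npivWwnOp = "generate"     # have vCenter generate the virtual WWNs
    spec.npivDesiredNodeWwns = 1    # one virtual node WWN
    spec.npivDesiredPortWwns = 2    # e.g. one virtual port WWN per fabric
    vm.ReconfigVM_Task(spec=spec)   # wait for the task in real code

    # The generated values then appear in vm.config.npivNodeWorldWideName and
    # vm.config.npivPortWorldWideName and still need zoning and LUN masking.
finally:
    Disconnect(si)
```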

Again, NO Fibre Channel adapter within the VM and no access to Fabric services (SNS, ...) from the VM.

In most cases, RDMs are not (much) 'faster' than access via VMDK files. It seems to be a myth that is passed on and on. One of HP's groups, for example, tested it some time ago and confirmed that they are pretty equal in performance.


Now, virtual connect...
Think of it as a 'port aggregator'. It logs in as an end-node (using N_ports) to a Fibre Channel switch, so the VC-FC module itself uses NPIV when talking to an FC switch.
.
Alzhy
Honored Contributor

Re: Virtual FC-HBAs (NPIV) for Linux Virtual Machines under vSphere/VMware

Uwe my man,

"Now, virtual connect...
Think of it as a 'port aggregator'. It logs in as an end-node (using N_ports) to a Fibre Channel switch, so the VC-FC module itself uses NPIV when talking to an FC switch."

Yes, I now understand. Each physical blade server is assigned a WWN, and what the switch sees on the port will be the physical Virtual Connect and the allocated server's NPIV'd WWN - right?

Well... we're now in the process of deploying Linux guests that use RDM with NPIV, and it does not seem to work. Following http://www.brocade.com/forms/getFile?p=documents/white_papers/white_papers_partners/NPIV_ESX4_0_GA-TB-145-01.pdf -- we've assigned the VM its own WWID, but the underlying ESX OS does not even see the vports -- hence the switches do not see the generated WWIDs...

Does this mean we cannot NPIV Linux guests on blade server HBAs that are already NPIV'd? Or does the Brocade recipe above not apply to the C7000/blades?

We're planning to open a case with VMware and HP Support on this one.


This issue is similar to:

http://communities.vmware.com/message/1670575?tstart=0
http://communities.vmware.com/thread/263054


Hakuna Matata.
Alzhy
Honored Contributor

Re: Virtual FC-HBAs (NPIV) for Linux Virtual Machines under vSphere/VMware

Pre-zone.
Pre-present a LUN or LUNs.
The NPIV WWID will thence be persistently visible on the SAN whenever the VM is up.

And it does work on blade server HBAs which are already themselves NPIV'd out of the Flex-10/Virtual Connect modules. Kind of an NPIV-over-NPIV thing.
Hakuna Matata.
Uwe Zessin
Honored Contributor
Solution

Re: Virtual FC-HBAs (NPIV) for Linux Virtual Machines under vSphere/VMware

Yes, a day-one limitation.
When the FC driver activates NPIV and does not find a LUN for the WWN, it tears down NPIV again. You must do the switch zoning and LUN masking in advance.
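
(If it helps: the generated WWNs can be read back from the VM's configuration so they can be zoned and masked before the first power-on. A small sketch, assuming a pyVmomi vim.VirtualMachine object looked up as in the earlier sketch; the API stores the WWNs as 64-bit integers:)

```python
# Sketch: print a VM's generated NPIV WWNs in the usual colon-separated hex
# notation so they can be zoned on the switches and masked on the array in
# advance. "vm" is assumed to be a pyVmomi vim.VirtualMachine object.
def wwn_to_str(wwn: int) -> str:
    """Format a 64-bit WWN integer as xx:xx:xx:xx:xx:xx:xx:xx."""
    raw = f"{wwn:016x}"
    return ":".join(raw[i:i + 2] for i in range(0, 16, 2))

def print_npiv_wwns(vm) -> None:
    node_wwns = vm.config.npivNodeWorldWideName or []
    port_wwns = vm.config.npivPortWorldWideName or []
    print("Node WWNs:", ", ".join(wwn_to_str(w) for w in node_wwns))
    print("Port WWNs (zone/mask these):", ", ".join(wwn_to_str(w) for w in port_wwns))
```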

I haven't played with it in recent releases, but in the early days you only had the ESX logfiles for troubleshooting.

Even when NPIV did not come up, the VM had disk access through the traditional VMkernel RDM connection anyway. (How is that for redundancy ;-)
.
Alzhy
Honored Contributor

Re: Virtual FC-HBAs (NPIV) for Linux Virtual Machines under vSphere/VMware

I hear ya, Great Uwe, but NPIV brings SANITY to storage management. Dunno about performance -- likely, perhaps -- but what I am after IS storage management efficiency.

Hakuna Matata.
Alzhy
Honored Contributor

Re: Virtual FC-HBAs (NPIV) for Linux Virtual Machines under vSphere/VMware

@Uwe:

"In most cases, RDMs are not (much) 'faster' then an access via VMDK files. It seems to be a myth that is passed on and on. One of HP's groups, for example, tested it some time ago and confirmed that they are pretty equal in performance."

For namby-pamby scenarios - yes.

But for I/O-hungry databases - it's a different story, good sir. It is still all about isolation, striping, etc.
Hakuna Matata.
Uwe Zessin
Honored Contributor

Re: Virtual FC-HBAs (NPIV) for Linux Virtual Machines under vSphere/VMware

Some people claim RDMs are much faster than VMDKs, but I've seen reports from HP and VMware that there is very very little difference.

NPIV management efficiency, I am not so sure...
I haven't tested the latest implementations, but what good is the additional zoning and presentation work when the VM falls back to the 'traditional' RDM because a minor error caused NPIV to fail?

The only reason I can see is if you want to identify traffic from the VM through the SAN with the help of the NPIV WWPNs. But then, maybe I'm partially blind ;-)
.
Uwe Zessin
Honored Contributor

Re: Virtual FC-HBAs (NPIV) for Linux Virtual Machines under vSphere/VMware

Just for info: I hadn't seen your response from 15:55:30 when I wrote my previous text.
.