HPE 3PAR StoreServ Storage

LightSpeedHost
Advisor

Multipathing (MPIO) with NFS File Personas

I have a 3PAR 8450 unit and I've had no issues setting up iSCSI targets with MPIO, but after talking with one of our vendors' support teams, they strongly suggested we use NFS instead of iSCSI.

I have no issues with iSCSI or NFS; frankly, both are fast enough for our needs on an all-flash array. But I haven't found any documentation that explains how to use multipathing with NFS File Personas. On top of that, for some reason I can't use the 10G links in either controller as nodes in the file server, so I've avoided doing anything with NFS on this unit. I've now reached a point where I have no choice.

I can't use LACP, which is what we've used across the board for a long time, so I have to imagine that 3PAR engineers have some method of allowing multipathing on NFS Personas.

 

Thanks!

Sheldon Smith
HPE Pro

Re: Multipathing (MPIO) with NFS File Personas


@LightSpeedHost wrote:

... I can't use the 10G links in either controller as nodes in the Fileserver either ...


There are several types of Host Adapters listed in the HPE 3PAR StoreServ 8000 QuickSpecs: FC, iSCSI, Ethernet and Combo adapters.
iSCSI ports will not work for File Persona.
You would need either a pair of the 10Gb Ethernet adapters, or a pair of Combo adapters with 10Gb NICs.

See the 3PAR 8000 QuickSpecs, "Step 2 - Choose Host Adapter" for the table of permitted Adapter Configurations.


Note: While I am an HPE Employee, all of my comments (whether noted or not), are my own and are not any official representation of the company


LightSpeedHost
Advisor

Re: Multipathing (MPIO) with NFS File Personas

Sheldon, it's like we're of one mind, sir. I noticed when I was in there that they were iSCSI/FCoE links. I just assumed they were Ethernet because I was able to get them linked up and could ping out from them. I ordered 4 new cards about an hour ago; they should arrive Sunday/Monday, and I'll give those a shot.

The second part of that question still applies though I guess.  If I want to do MPIO but use NFS file personas, is this technically possible on the 8450?

Re: Multipathing (MPIO) with NFS File Personas

>The second part of that question still applies though I guess.  If I want to do MPIO but use NFS file personas, is this technically possible on the 8450?

Can you share your planned setup, please? I am not sure you can compare iSCSI with NFS directly. Is your question mainly about load balancing, combo-card usage, or high availability? What host type or NFS client do you plan to use?

iSCSI is a block-device protocol. If you attach a 3PAR iSCSI device to a host, you will need to create a filesystem on it yourself.

3PAR File Persona is a NAS solution that can be accessed via NFS and SMB. The filesystem is created for you inside File Persona.

 

Some facts about NFS on 3PAR File Persona:

File Persona on an 8450 runs on the 10 Gb interfaces and supports Linux bond modes 1 and 6.

Bond mode 1 is "active-backup": one NIC is active while the other stands by. If the active NIC goes down, the backup NIC takes over. This mode is only supported in x86 environments.

Bond mode 6 is "adaptive load balancing": load balancing is done via ARP negotiation, which means there is no load balancing if only a single NFS client is connected, as that client has only one MAC address. Traffic from multiple NFS clients is distributed automatically.

3PAR File Persona does NOT support pNFS (parallel NFS).

3PAR File Persona is not a cluster solution with concurrent access to multiple 3PAR nodes. You can have multiple so-called FPGs (File Provisioning Groups, each corresponding to a filesystem and its fileshares), which can be distributed among the nodes. A single FPG, however, cannot be spread across nodes, which means that if you have only one FPG on a two-node system, one node will always be idle.
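For comparison, those two supported modes correspond to standard Linux bonding modes. A minimal sketch of how they would be configured on a plain Linux box (the `bond0`, `eth0`, and `eth1` names are placeholders; on the array itself the bond is managed by File Persona, not set up by hand):

```shell
# Bond mode 1 (active-backup): one NIC carries traffic, the other waits.
ip link add bond0 type bond mode active-backup miimon 100
ip link set eth0 down && ip link set eth0 master bond0
ip link set eth1 down && ip link set eth1 master bond0
ip link set bond0 up

# Bond mode 6 (balance-alb, adaptive load balancing): receive traffic is
# steered via ARP replies, so balancing only appears once multiple client
# MAC addresses are talking to the bond.
# ip link add bond0 type bond mode balance-alb miimon 100
```

Neither mode requires any configuration on the switch side, unlike mode 4 (LACP).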

 

 

I am an HPE Employee


LightSpeedHost
Advisor

Re: Multipathing (MPIO) with NFS File Personas

Thank you for that thorough response.

The plan is to have two independent 24 bay (dual controllers) 8450s.

I realize that with two arrays there are things we could do to run them together, but that's not the plan; they won't be in a cluster.

We'll have 16 Xen hosts pushing VM storage drives to these.

The nature of this stack easily allows for downtime if needed. So if an
actual massive failure were to happen in array 1, we would import backups
into array 2 and turn those VMs back on. The time to import and restore is
acceptable and simplifies the setup.

Each Xen host has dual 10G NICs bonded using active/active LACP, which is what's unique to Cumulus vs. active/backup LACP.

It was my understanding that iSCSI on these 3PAR units does not offer bonding of 10G interfaces, and that multipathing was the only option for having links to multiple leaf switches and getting the network redundancy we require.

For NFS, do the bonding modes put the ports in LACP (802.3ad), or is it something else entirely?

One other question: with some arrays (think Synology) there's not a significant difference between NFS and iSCSI performance, though there is a slight advantage for iSCSI. Can the same be said about NFS on these 3PAR units, or should we expect a large degradation in latency/IOPS etc. vs. iSCSI?

Thanks!
--
Joshua Holmes
LightSpeed Hosting
P: (888) 929-9639
E: joshua@lightspeedhosting.com

Re: Multipathing (MPIO) with NFS File Personas

>For NFS, does the bonding modes put the ports in LACP (802.3ad) or is it
>some other thing entirely?

LACP in Linux would be bond mode 4.

3PAR File Persona supports bond modes 1 and 6. Bond mode 4 is NOT supported.
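On any Linux client you can check which mode a bond is actually running, since the kernel reports it under /proc (the `bond0` name is a placeholder):

```shell
# Mode 4 reports "IEEE 802.3ad Dynamic link aggregation",
# mode 1 reports "fault-tolerance (active-backup)",
# mode 6 reports "adaptive load balancing".
grep "Bonding Mode" /proc/net/bonding/bond0
```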


>One other question, with some arrays (think Synology) there’s not a
>significant difference between NFS or iSCSI performance but there is a
>slight advantage for iSCSI. Can the same be said about NFS on these 3PAR
>units or should we expect a large degrade in latency/IOPs etc vs iSCSI?

I don't know which one would be faster. Both are slow in comparison to the setup the majority of customers use.

Fibre Channel, periodic Remote Copy, and RMC should be used for I/O, redundancy, and backup.

Also, storing VM files (e.g. vmdk) on a File Persona NFS share is not the recommended use case, although it would probably work.

 

Hope that helps.

I am an HPE Employee


LightSpeedHost
Advisor

Re: Multipathing (MPIO) with NFS File Personas

Thank you Bert.

Our switch supports bond modes 4 and 2, neither of which is native to NFS on 3PAR. However, after reading up on bond mode 6, it appears the switch doesn't need to support anything, and the magic is done by the 3PAR array using ARP negotiation. I have to admit that's new to me, so I'll have to figure out how it changes the way we connect and utilize it.

I am a bit disappointed that the 3PAR NFS or iSCSI setup isn't as performant as Fibre Channel. We don't support FCoE here, and none of our hosts do either, so it's really not an option. I wouldn't even be opposed to DAC using SAS connectors, but we don't have that capability either.

I'll run some benchmarks against NFS and see if it meets our requirements, but with Citrix/Xen the two realistic options for VDI shared storage are NFS and iSCSI.
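As a starting point for that benchmarking, a hypothetical fio invocation against a mounted share (the /mnt/nfs mount point and all job parameters are placeholders to be tuned to the actual VM workload):

```shell
# Random-read test against an NFS mount; direct=1 bypasses the client
# page cache so the numbers reflect array + network latency, not RAM.
fio --name=nfs-randread --directory=/mnt/nfs \
    --rw=randread --bs=4k --size=1G \
    --iodepth=32 --ioengine=libaio --direct=1 \
    --runtime=60 --time_based
```

Repeating the run from several clients at once would also show whether bond mode 6's ARP-based balancing is actually spreading the traffic.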

Thanks!