
iscsi vs FC performance

 
SOLVED
cac3a
Occasional Advisor

iscsi vs FC performance

I have recently been playing around with an iSCSI library to get some comparative performance numbers, and I'm a little surprised by how much slower it is than FC. I have a dedicated lab setup where no other workloads come into play, and in my tests FC ends up being 50% faster than iSCSI. In both cases I have NVMe storage behind it, which under other conditions (general Linux) has given me 550 MB/s on read/write under stress, but on HP-UX I'm topping out at 50 MB/s over iSCSI. I use poor man's test commands such as dd or prealloc to compare results between different configs. My goal isn't really to get metrics on the storage appliance, but rather to compare FC and iSCSI under the same/similar conditions.
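For reference, a rough sketch of the kind of poor man's test I mean (the mount points are placeholders, not my actual paths):

    # Sequential write: push 1 GB through the filesystem under test
    time dd if=/dev/zero of=/mnt_fc/testfile bs=1024k count=1024

    # Sequential read of the same file back
    time dd if=/mnt_fc/testfile of=/dev/null bs=1024k

    # prealloc variant: allocate a ~1 GB file in one call (size in bytes)
    time prealloc /mnt_fc/testfile2 1073741824

    # Then the identical commands again on the iSCSI-backed mount
    time dd if=/dev/zero of=/mnt_iscsi/testfile bs=1024k count=1024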

 

What I'm seeing right now is that either dd or prealloc will sit in a sleep state for 70-80% of the execution (I guess) waiting on I/O when running through iSCSI, yet there is plenty of room to push through, which is puzzling to me. I've toyed around with kctune params and vxtunefs params, but the needle has only moved a little bit.
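For illustration, the kind of knobs I mean (these are standard kctune/VxFS tunables on 11i v3; the values shown are just examples, not recommendations):

    # Check the kernel's file cache ceiling/floor
    kctune filecache_max
    kctune filecache_min

    # Dump the current VxFS I/O tunables for the mount under test
    vxtunefs /mnt_iscsi

    # Example tweaks: larger preferred read size, more read-ahead streams
    vxtunefs -o read_pref_io=262144 /mnt_iscsi
    vxtunefs -o read_nstream=4 /mnt_iscsi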

 

Has anyone else run into this problem where iscsi performance is subpar?

13 REPLIES
support_s
System Recommended

Query: iscsi vs FC performance

System recommended content:

1. HPE 3PAR StoreServ iSCSI Best Practices Configuring iSCSI with DCB

2. iSCSI Cookbook for Virtual Connect

 


cac3a
Occasional Advisor

Re: Query: iscsi vs FC performance

Is there any documentation that you know of on how to set up initiators on the HP-UX OS for the Emulex cards? The docs only go through ESXi and Windows.

I was able to add the targets during boot and discover them, but the LUNs aren't showing once the system boots. I can't find any references on what to do beyond the setup prior to boot.

Bill Hassell
Honored Contributor

Re: iscsi vs FC performance

Fibre will always be faster than iSCSI. Rather than being a shared resource with protocol overhead, fibre runs with very low overhead in the server, whereas iSCSI runs through a network stack along with all other network activity. Even with a dedicated network, the TCP encapsulation around iSCSI is significant. And CPU loading is much lower for fibre than for TCP.

One advantage for iSCSI: distance. Fibre is limited to typical SAN connection distances, whereas TCP is not distance-limited, given routers and WANs.

Also, iSCSI shares already existing networking, albeit upgraded to at least gigabit, possibly 10 Gbit. Interconnects are easier since switches and routers are likely in place. But iSCSI still shares the party line nature of TCP.

The wait times for data seem to indicate a driver or handshake issue. Use Glance to look at where the time delay is occurring. You will definitely need the latest patches for iSCSI, and possibly for overall networking too.
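For example, something along these lines while the test is running (sar is standard on HP-UX; the intervals are arbitrary):

    # Per-device view: avwait vs. avserv shows whether time is spent
    # queued in the host or waiting on the target
    sar -d 5 12

    # CPU view: high %wio with an otherwise idle CPU matches the
    # sleep-state symptom described above
    sar -u 5 12

    # Interactive drill-down into the waiting process
    glance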



Bill Hassell, sysadmin

Re: iscsi vs FC performance

To add to what has already been stated...

The iSCSI stack on Linux/Windows/ESXi has continued to be developed over the last 15 years, including code to take advantage of offload into hardware functions in NICs. iSCSI is quite prevalent in the x86 world, which was always more price-sensitive than the commercial UNIX world. It's also often used as the mechanism for delivering IO in HCI stacks.

The HP-UX iSCSI software initiator is almost completely untouched, apart from bug fixes, since it was originally released (some time around 2005, I think). To my knowledge it was never recommended by HP/HPE as a serious solution for delivering block IO performance on HP-UX. For example, I would be very surprised if anyone in HP ever reviewed the code/performance of the HP-UX iSCSI software initiator when 10GbE became more ubiquitous. I have only ever come across one customer who used it, and they moved off it in about 2012.

TL;DR: if you want block IO performance on HP-UX, use an FC stack - that's what 99% of other HP-UX customers do. I bet it's what 99% of AIX and Solaris customers do too.


I am an HPE Employee
cac3a
Occasional Advisor

Re: iscsi vs FC performance

Thank you for the recommendation on going FC. Based on the test results in one of the docs shared in the first response, it seems they were able to achieve comparable (in some cases even faster) performance. I'd like to try to get it to work with hardware acceleration, as I think that is what was used in those tests. I'm having trouble getting that to work.

What I have done so far is:

  1. Create a profile in VCM with an iSCSI uplink.
  2. During boot there is a menu to configure and discover the iSCSI targets, which is done as well (targets are discovered).
  3. Not sure here if I need a driver or what, but the LUNs from iSCSI aren't showing in the system after the OS is loaded (see the check below).

Point #3 is where I'm stuck at this point. Have you been able to get that to work ?
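In case it helps, this is the kind of check I'm doing for the LUNs after boot (standard HP-UX 11i v3 scan commands; no iSCSI-backed disks ever show up):

    # Rescan and list all disks the OS has claimed
    ioscan -fnC disk

    # 11i v3 LUN-to-lunpath mapping view
    ioscan -m lun

    # Create device special files for anything newly found
    insf -e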

Re: iscsi vs FC performance

>> During boot there is a menu to configure and discover iscsi target, which is done as well (targets are discovered)

Wait, are you doing this in EFI on an Integrity BL8x0c blade? I'm sure you can do it on an x86 blade, but I never heard of anyone doing it on an HP-UX blade... this sort of suggests you are trying to boot off iSCSI? That certainly isn't supported on HP-UX - see this guide:

https://support.hpe.com/hpesc/public/docDisplay?docId=c02037108

And specifically the bottom of p38 where it is stated:

The resolution of the ordering problems described above has placed limitations on the iSCSI SWD. Because network initialization is performed using the /var directory, the /var directory cannot be on an iSCSI target. Also, the boot, root, primary swap, and dump file systems are not supported on iSCSI volumes.

If you've really done this in EFI on an Integrity Blade, I can only assume it was there to support iSCSI boot for Windows/Linux when there were IA64 versions of those Operating Systems.

 


I am an HPE Employee
cac3a
Occasional Advisor

Re: iscsi vs FC performance

I wasn't trying to boot from iSCSI, just trying to get the LUNs to show up in the OS. The boot is happening from local drives.
Yes, this is on 8x0c blades.

What I'm trying to get working is iSCSI hardware acceleration through Virtual Connect Manager.
My understanding was that by offloading the iSCSI connectivity to VCM, one would see the targets show up in the OS. Are you saying that's not possible while booting from local disk?

Re: iscsi vs FC performance

Where did you get the idea you could do that?

Are you thinking that, just as you can take a CNA in your blade and through configuration turn it into an FC HBA, you could also turn a CNA into an iSCSI-accelerated connection? Well, you can't - CNAs can be configured as either FC ports or Ethernet ports. If you want to use iSCSI, you define your CNA as an Ethernet NIC and it will show up in HP-UX as an Ethernet NIC. Then you need to put an IP stack on it in the usual way, and only then can you start using the HP-UX iSCSI software initiator. Instructions are in the manual I posted in my previous response.
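A rough sketch of that flow, to make it concrete (the interface name and addresses are placeholders, and the iscsiutil flags are from memory of the SWD guide - check the manual for the full procedure):

    # The CNA port shows up as a plain Ethernet NIC - give it an IP
    # (persist it via /etc/rc.config.d/netconf for reboots)
    ifconfig lan1 192.0.2.10 netmask 255.255.255.0 up

    # Point the software initiator at the array's discovery portal
    iscsiutil -a -I 192.0.2.50

    # Verify the discovery target record was added
    iscsiutil -p -D

    # Rescan and create device files for the LUNs the target presents
    ioscan -fnC disk
    insf -e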

As I said previously, the iSCSI stack in HP-UX is software only. There is no hardware acceleration beyond that offered by a standard Ethernet NIC.


I am an HPE Employee
cac3a
Occasional Advisor

Re: iscsi vs FC performance

I've read that in the iSCSI Cookbook for Virtual Connect, but maybe I misunderstand something. It talks about accelerated iSCSI and accelerated iSCSI boot - I'm interested in the first one.

 

The hardware acceleration I was talking about is just on the NIC side. Take a look at my VCM config below; I have boot disabled. My question is: how would one actually leverage this config in the OS?

The iSCSI Cookbook for Virtual Connect also only shows the Windows and ESXi configuration.

 

[Screenshot: HPE Virtual Connect Manager configuration]