
gange
Occasional Advisor

10Gb throughput

Hi all,

I can't find any reference values from the field on the web.

How fast is your 10Gb iSCSI pipe? (IOMeter)

 

Thanks!

12 REPLIES
oikjn
Honored Contributor

Re: 10Gb throughput

loaded question.

 

You haven't found anything because such numbers are of little to no use.  It would be the same as asking, ten years ago, what people were getting with 1Gb when 100Mb was the typical standard.

 

The answer is: you can get up to line-speed throughput, assuming you have the hardware to back it up.  There is no software limitation holding you back; it's simply the interfaces and the rest of the hardware that limit whether you actually reach 10Gb sustained throughput.

 

For most people, putting in a 10Gb LAN simply means that the LAN is no longer the speed bottleneck and the bottleneck moves down the line to something else.  It could be the disk bandwidth, the CPU, the PCI bandwidth, you name it.

 

Also, "fast" is a vauge term... some people think of throughput for speed, others think in IOPS, still others think of their own combenation of the two.

gange
Occasional Advisor

Re: 10Gb throughput

 

 

I have:

OS: Windows Server 2012

2x DL380p Gen8 with a dual-port 10Gb Broadcom 530FLR NIC - latest driver (09/2013)

two ProCurve 5406 switches with 10Gb modules

and a 2-node 4730 cluster with 10Gb

 

With CrystalDiskMark I get 370MB/s sequential read and 306MB/s sequential write on the SAN.

 

I need some troubleshooting suggestions, please.

Dirk Trilsbeek
Valued Contributor

Re: 10Gb throughput

Where exactly do you need troubleshooting? The speed seems to be okay...

gange
Occasional Advisor

Re: 10Gb throughput

Is it? This is my first contact with 10Gb...

I don't understand these performance values.

 

With CrystalDiskMark running at ~300 MB/s, my 10Gb iSCSI NIC on the server shows only 4.2Gb/s of load.

We have 10Gb all the way, we have lots of spindles, and there is no other workload.

4Gb/s is about right for 3xx MB/s, but that is the performance of a 4x 1Gb NIC team.

We have 10Gb, so more than 50% of the pipe is still unused.. :-)

 

I was hoping that 700 - 800 MB/s would be possible with this setup.

 

On the Windows Server DAS (2x 300GB SAS 15k RAID 1) I see 3000MB/s sequential read.

 

A 20GB file copy via SMB from DAS -> SAN writes at 130 MB/s.

The same file copy on the DAS itself writes at 2.1GB/s.

 

gange

oikjn
Honored Contributor

Re: 10Gb throughput

Well, SOMETHING is your bottleneck, but it's up to you to figure out what.

 

At 10Gb, the PCI slot can be a bottleneck!  A PCIe x4 link can only handle roughly 800MB/s (depending on PCIe generation), and that is what many slots are electrically these days even if they physically look like x16 slots.

 

There is no way two HDDs ran at 2.1GB/s... it is just not possible, considering that in THEORY (not practice) the max bandwidth of a 6Gb SAS/SATA link is 600MB/s, so the best you could hope for would be 1200MB/s, and even that isn't going to happen.  So your 2.1GB/s number is the OS doing some caching in the background.
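
Rough back-of-the-envelope ceilings for those two limits; the numbers are approximate and assume an older PCIe Gen1 x4 slot (Gen2 roughly doubles the per-lane figure):

# Rough theoretical ceilings for the PCIe slot and the SAS links mentioned above.
PCIE_GEN1_LANE_MB_S = 250      # ~250 MB/s usable per PCIe Gen1 lane (Gen2 is ~500)
SAS_6G_LINK_MB_S = 600         # ~600 MB/s max per 6Gb SAS/SATA link

pcie_x4_gen1 = 4 * PCIE_GEN1_LANE_MB_S    # ~1000 MB/s for an x4 Gen1 slot
two_sas_links = 2 * SAS_6G_LINK_MB_S      # ~1200 MB/s for two 6Gb links, in theory

print(pcie_x4_gen1, "MB/s ceiling for a PCIe x4 Gen1 slot")
print(two_sas_links, "MB/s ceiling for two 6Gb SAS links")
# A sustained 2.1 GB/s (~2100 MB/s) from a two-disk RAID 1 is above both,
# which is why that number has to be OS caching rather than the disks.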

 

4.2Gb/s is certainly good, and if you really want more you are just going to have to monitor your system and figure out what is holding up the performance: is it the disks, memory, CPU, I/O, controller, cache, PCI slot... the list goes on.
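
For reference, a rough way to translate benchmark MB/s into expected load on the NIC, assuming roughly 10% iSCSI/TCP/Ethernet overhead (the real overhead depends on block size and settings):

def wire_gbps(payload_mb_s, overhead=0.10):
    # Convert a benchmark payload rate (MB/s) to an approximate on-wire rate (Gb/s).
    payload_gbps = payload_mb_s * 8 / 1000.0
    return payload_gbps * (1 + overhead)

print(round(wire_gbps(370), 2))   # ~3.26 Gb/s on the wire for 370 MB/s reads
print(round(wire_gbps(306), 2))   # ~2.69 Gb/s on the wire for 306 MB/s writes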

gange
Occasional Advisor

Re: 10Gb throughput

All components are brand new and designed by HP for 10Gb, so I don't believe we have a hardware bottleneck.

 

I found this doc from M$

 

10gig iscsi ms  and this one ISCSI MPIO W2k12

 

 

I think MPIO is the point of interest.

 

I see only one active iSCSI connection during my test run.

MPIO is currently disabled.

 

With one connection I get 300MB/s, so with MPIO enabled will it be 4x?

 

 

oikjn
Honored Contributor

Re: 10Gb throughput

I don't see anything in your listed setup that shows it was designed for 10Gb throughput, only that you have been sold 10Gb network ports.  If you say it was designed for 10Gb throughput, go back to the designer and get them to deliver the performance that was designed.

 

I have a feeling there was no "design", and if you want to push your maximum throughput you will just have to monitor your components and figure out where your bottleneck is.  It could be that your hardware has maxed out the packet rate for a single connection at your frame/cluster size and you have to implement MPIO to drive the bandwidth higher.

 

Keep in mind that with the SAN/iQ design you also have traffic moving between the two devices on the same network, so if you write 10Gb to the SAN it has to replicate at 10Gb, which means you really need 20Gb for each SAN node.  If you only have one 10Gb link on each node (an A/P NIC configuration), then your SAN could only deliver 5Gb to the servers, since it would need the other 5Gb to replicate with the other node.
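
Rough math behind that, assuming 2-way replication (Network RAID-10) and that replica traffic shares the same links as host traffic:

def usable_write_gbps(nic_gbps_per_node, copies=2):
    # With 2-way replication, every host write also has to be sent
    # once to the partner node over the same network.
    return nic_gbps_per_node / copies

print(usable_write_gbps(10))   # one active 10Gb link per node -> ~5 Gb/s for hosts
print(usable_write_gbps(20))   # 2 x 10Gb per node             -> ~10 Gb/s for hosts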

gange
Occasional Advisor

Re: 10Gb throughput

Oikjn, thank you for your input.

 

Keep in mind that with the SAN/iQ design you also have traffic moving between the two devices on the same network, so if you write 10Gb to the SAN it has to replicate at 10Gb, which means you really need 20Gb for each SAN node

 

I didn't mention it before... there is a 20Gb 802.3ad trunk between the two storage nodes.

 

 

It could be that your hardware has maxed out the packet rate for a single connection at your frame/cluster size and you have to implement MPIO to drive the bandwidth higher.

 

What is the frame/cluster size, and where can I check this?

oikjn
Honored Contributor

Re: 10Gb throughput

I'm not going to give you ALL the things to check because it's too long a list, and that is why there are storage professionals...

 

That said, frame size would be the maximum packet size your network can handle; also make sure everything is set up to run at the same MTU.  In Windows you can check by sending ping packets with increasingly large sizes to see what your maximum is... for 10Gb you want jumbo frames enabled end to end.  Type "ping X.X.X.X -f -l ###" (-f sets the don't-fragment bit, -l sets the payload size) and keep increasing ### until the pings stop getting replies.
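
A rough sketch that automates that sweep (Windows ping only; the 10.0.0.10 address is just a placeholder for your iSCSI target, and the reply check assumes English-language IPv4 output):

import subprocess

def max_unfragmented_payload(host, start=1472, stop=8972, step=100):
    # Largest ICMP payload that still gets an echo reply with the
    # don't-fragment bit set (Windows: ping -f -l <size> -n 1).
    best = None
    for size in range(start, stop + 1, step):
        out = subprocess.run(
            ["ping", "-f", "-l", str(size), "-n", "1", host],
            capture_output=True, text=True).stdout
        if "TTL=" in out:   # a real echo reply came back
            best = size
        else:               # "needs to be fragmented" or no reply: stop here
            break
    return best

# 1472 bytes of payload + 28 bytes of IP/ICMP header = a standard 1500 MTU;
# getting up to ~8972 means jumbo frames (9000 MTU) work end to end.
print(max_unfragmented_payload("10.0.0.10"))   # placeholder iSCSI target address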

 

Cluster size would be the allocation unit size your disk was formatted with... a volume formatted with 512-byte clusters will behave differently from one formatted with 64KB clusters, depending on your data and access profiles.
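
One way to check the current allocation unit size on a volume, a sketch that shells out to fsutil (needs an elevated prompt; the parsing assumes English-language output, and D: is just an example drive):

import subprocess

def ntfs_cluster_size(drive):
    # Reads "Bytes Per Cluster" from "fsutil fsinfo ntfsinfo <drive>".
    out = subprocess.run(["fsutil", "fsinfo", "ntfsinfo", drive],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        if "Bytes Per Cluster" in line:
            return int(line.split(":")[1].split()[0])
    return None

print(ntfs_cluster_size("D:"))   # e.g. 4096 (default) or 65536 for a 64K format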

 

All that said, if you read the NetApp document you linked, you would see that the numbers they report for performance are pretty much in line with (if not slightly worse than) what you showed.