FC60 attached to L3000

Wayne Green
Frequent Advisor

FC60 attached to L3000

Has anyone experienced any performance problems with an FC60?

We are moving from a K370 with 2 FC10 disk modules to 2 L3000s connected to an FC60 with 6 SC10s attached. 50 36GB disks total.
Five 6-disk RAID5 LUNs are configured on the FC60.

We have to do a 5-6GB copy and a 19-20GB copy from LUN to LUN for backup and data replication. Running the copy through the FC60, the copy speed is OK but other access is poor: bdf response takes from 1 to 20 mins. User logins and quota checks get the same response, and Oracle startup takes up to 10 mins.

The K370 uses mirrored LVs on the FC10s; its copy is slightly slower, but response is 3-5 secs.
I'll have a beer, thanks
6 REPLIES
Alexander M. Ermes
Honored Contributor

Re: FC60 attached to L3000

Hi there.
How did you attach the disk enclosures, in detail? What SCSI type, and how many interfaces to the different storage devices?
An FC60 can be attached with two FC interfaces in the computer (100 MB/sec peak performance). The SC10 interfaces are Ultra2 SCSI (40 MB/sec) internal to the enclosure.
If you attached the FC60 with only one interface (given the limited number of I/O slots in an L3000), that is not enough. Please let us have more details about your config.
Rgds
Alexander M. Ermes
.. and all these memories are going to vanish like tears in the rain! final words from Rutger Hauer in "Blade Runner"
Wayne Green
Frequent Advisor

Re: FC60 attached to L3000

OK, that was quick.

The 2 L3000s are set up as a two-node ServiceGuard cluster, so each machine has 2 PCI FC cards, each connected to a separate FC hub. The FC60 has 2 controllers with 512 MB cache, connected to separate hubs. The FC60 has 6 SCSI LVD interfaces, each connected to a separate SC10 module.
Each 6-disk RAID5 LUN has one disk in each SC10 module. The LUNs are configured with a 16MB segment size.

Throughput isn't the problem (although the faster the better); it's when there's a large amount of I/O. In this case, while the multi-GB copies run, getting any other response from the FC60 is poor. The FC10 modules are attached to the hubs; using a couple of those disks for the copy instead, the bdf response is down to 3-5 secs. So it seems the JBODs do better than an all-singing, all-RAID-level disk controller.

The L3000s have 4GB memory, running HPUX11 64bit.
I'll have a beer, thanks
Alexander M. Ermes
Honored Contributor

Re: FC60 attached to L3000

Hi Wayne.
It might be that the hubs are the weak point.
What we have done here is a cross-connection between the FC60 controllers and the computers, so one channel of each controller is connected directly to a computer:
FC-A line 1 to computer A
FC-A line 2 to computer B
FC-B line 1 to computer A
FC-B line 2 to computer B
Perhaps that is an idea for you.
Rgds
Alexander M. Ermes
.. and all these memories are going to vanish like tears in the rain! final words from Rutger Hauer in "Blade Runner"
Mark Mitchell
Trusted Contributor

Re: FC60 attached to L3000

Another point is that the FC60 has 2 controllers, but some are set up so that one is active and the second is passive. You might want to do a vgdisplay -v to see which LUN addresses each controller is serving; spreading the load onto the 2nd controller might then be an option. I have a Clarion 60, which is almost the same unit and operates in this way. Using LVM, I was able to spread the I/O better across the LUNs.
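Mark's suggestion can be sketched with HP-UX LVM commands. These only run on HP-UX; the volume group and device file names are placeholders, and the use of pvchange -s to switch a PV to its alternate link is my reading of the HP-UX 11.x man page, so verify it on your release first.

```shell
# Show the physical volumes behind each logical volume: the "PV Name"
# entries under each LV reveal which FC60 controller path serves it.
vgdisplay -v /dev/vg01

# Switch access for a physical volume to the specified (alternate) path,
# i.e. the link through the other FC60 controller, to spread the load.
# The device file below is a placeholder for one of your LUN paths.
pvchange -s /dev/dsk/c5t0d1
```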
Wayne Green
Frequent Advisor

Re: FC60 attached to L3000

Mark,
Had a look at which controller is set up for which LUN. Changed them around with pvchange, but it didn't seem to make much difference whether both LUNs were using the same controller or different ones.
Alexander,
It would be worth trying this, but the FC10s are to be re-used, so we need the FC hubs to connect all the kit up.

One test we were able to make was with a D370. I imported a couple of volume groups using FC60 LUNs onto the D370 and ran exactly the same copy. This was slower, but the response time from the FC60 was fine.

HPRC ran cp with the -S switch, which apparently does not use buffer cache. Response from the FC60 was immediate, but the copy time increased from 25 mins to over 2 hours.

HP are now heavily involved, so if they resolve it I'll post the fix. In the meantime, thanks for your efforts.
I'll have a beer, thanks
Wayne Green
Frequent Advisor

Re: FC60 attached to L3000

HP's solution so far seems to be to spread the I/O load over as many spindles as possible and to reduce the buffer cache being used by tweaking dbc_min_pct and dbc_max_pct to the same percentage.
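On HP-UX 11.0 the dynamic buffer cache bounds are kernel tunables, so the tweak above might look like the sketch below. The 5% figure is illustrative only (Wayne doesn't state the value used), and kmtune changes to these tunables need a kernel rebuild and reboot to take effect.

```shell
# Inspect the current dynamic buffer cache bounds
kmtune -q dbc_min_pct
kmtune -q dbc_max_pct

# Pin min and max to the same percentage so the buffer cache
# cannot balloon during the large LUN-to-LUN copies
kmtune -s dbc_min_pct=5
kmtune -s dbc_max_pct=5

# Then rebuild the kernel (mk_kernel) and reboot for the
# new values to take effect.
```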

Because of disk-space limitations we are still using RAID5 LUNs. We've created lvols using the -D switch to extent-stripe across 2 and 3 LUNs. The time to copy 19GB of data to this lvol, as opposed to an lvol on just 1 LUN, halved: 20 mins.
The bdf/sync response is 45 secs, but other response times seem OK.
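An extent-striped lvol like the one described above might be created as follows. This is a sketch: the names and size are placeholders, -D y requests distributed (round-robin) extent allocation, and -s g makes allocation PVG-strict, which assumes the member LUNs are grouped into a physical volume group in /etc/lvmpvg.

```shell
# Create a 19GB logical volume whose extents are distributed
# round-robin across the LUNs in the volume group's PVG, so
# large sequential I/O is spread over 2-3 RAID5 LUNs at once.
lvcreate -D y -s g -L 19456 -n lvdata /dev/vg01
```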

It still doesn't address why the sync takes so long on an L-Class as opposed to a D-Class. I've reduced the buffer cache on the L-Class server to less than that used on the D-Class, but the response is still 4 times worse.
I'll have a beer, thanks