Disk Enclosures

IBM SAN connected to HPUX

 
Andrew Rowland
New Member

IBM SAN connected to HPUX

Hi,

We have an IBM SAN connected to our HP-UX system. I/O performance is around twice as slow for the SAN-based disks as for the internal HP disk arrays. Is this kind of ratio normal? Also, would it be normal practice to adjust queue depths via scsictl for the devices pointing to the SAN? As the SAN has many spindles, I am wondering whether the default disk queue depth of 8 is enough to give the SAN a proper workout.
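If it helps, I believe the current setting can be displayed with scsictl (the device file below is just an example, not one of ours; ioscan -fn would list the real ones, and root access is probably required):

# show mode parameters, including queue_depth, for one SAN LUN
scsictl -a /dev/rdsk/c4t0d0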


Many Thanks.
10 REPLIES
Eugeny Brychkov
Honored Contributor

Re: IBM SAN connected to HPUX

What are the disks? What are the switches? Is the HP-UX server well patched (at least to September 2002 level), and does it have the latest FC drivers from http://software.hp.com? Attach the output of 'ioscan -fn' here. Did you check the /var/adm/syslog/syslog.log file for FC events?
Did you check that the IBM SAN disks are supported with HP-UX?
Eugeny
Andrew Rowland
New Member

Re: IBM SAN connected to HPUX

The disks in the internal array and the SAN disks are comparable in terms of size/specification, and both are RAID-5. The fibre is 1 Gb, and the switches/SAN are compatible with HP-UX. The SAN is an IBM FAStT200.

I am an Oracle DBA on this site and not the SysAdmin and so I do not have root access to easily run ioscan or to check syslogs / patch levels etc.

My question was targeted at whether it is normal and desirable to have to change disk queue settings via scsictl when attaching a third-party SAN/array. I/O performance is perceived as slow here, but everyone who has looked at the stats for the SAN and switches has said that these components don't seem to be running at 100%.

Any thoughts?
Eugeny Brychkov
Honored Contributor

Re: IBM SAN connected to HPUX

It is normal and desirable, but you need to consult the disk array documentation for which values to use, or experiment yourself to find the best one.
Use the sar -d command to see the average request service time and outstanding requests (OS-level values).
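A minimal example (the interval and count are arbitrary):

# sample disk activity every 5 seconds, 12 times;
# watch the avwait and avserv columns for the SAN devices
sar -d 5 12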
In addition, if you're using 'dd' or other OS built-in commands to measure performance, they will not show you the real performance level. They are single-threaded and will not create enough I/O concurrency.
I do not believe tuning queue depths will double performance. That's why I asked all those things in my previous reply.
BTW, I'm not sure you will be able to run scsictl without being an administrator.
Eugeny
Andrew Rowland
New Member

Re: IBM SAN connected to HPUX

Hi Eugeny,

Thanks for your answer. My assertion that the SAN is twice as slow as the internal drives is gathered from Oracle's statistics concerning the service times of reads against my Oracle database files. I recognise that this is not a scientific I/O benchmark, but when the average response over millions of reads is twice as slow on one set of hardware as on the other, it seems fair to have some worries.
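For reference, the figures came from something along these lines (a rough sketch; READTIM in v$filestat is in centiseconds when TIMED_STATISTICS is on, so multiplying by 10 gives milliseconds):

# rough average read time per datafile, in milliseconds
sqlplus -s "/ as sysdba" <<'EOF'
SELECT d.name, f.phyrds,
       ROUND(f.readtim * 10 / GREATEST(f.phyrds, 1), 1) AS avg_read_ms
FROM   v$filestat f, v$datafile d
WHERE  f.file# = d.file#
ORDER  BY avg_read_ms DESC;
EOF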

I suppose that the SAN will always have some latency compared with an internal array, and at some point you would expect the switches, the fibre or the SAN's controllers to become a bottleneck. For average overall performance to be twice as slow is disappointing for us. A large proportion of our total capacity is on the SAN, and to find it has such a large performance penalty is sad :-(

If you feel that scsictl tuning of disk queue lengths is unlikely to account for a big performance boost, can you think of any other directions we could explore?

Andrew.
Eugeny Brychkov
Honored Contributor

Re: IBM SAN connected to HPUX

Attach the output of the following to your next reply:
ioscan -fn
swlist -l bundle
swlist -l product
swlist -l fileset -a state
fcmsutil /dev/tdX .... for all HBAs
fcmsutil /dev/tdX devstat all
Eugeny
Keith C. Patterson
Frequent Advisor

Re: IBM SAN connected to HPUX

Hi Andrew,
This is a problem that has no simple answer. To start, have a look at this link:
http://docs.hp.com/cgi-bin/fsearch/framedisplay?top=/hpux/onlinedocs/5187-2255/5187-2255_top.html&con=/hpux/onlinedocs/5187-2255/00/00/97-con.html&toc=/hpux/onlinedocs/5187-2255/00/00/97-toc.html&searchterms=scsictl%7ckmtune&queryid=20030509-181637

This assumes you are using HP-UX 11i. I would recommend using kmtune to adjust queue depth if you are using 11i; the reason is explained in the manual I linked. It has to do with the fact that scsictl goes back to the default setting of 8 after the device is closed and reopened.
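As a rough sketch of the kmtune route (the value is illustrative only; check man kmtune and your array vendor's recommendation before touching this):

# query the current global SCSI queue depth tunable
kmtune -q scsi_max_qdepth
# set it on the running kernel (scsi_max_qdepth is dynamic on 11i)
kmtune -u -s scsi_max_qdepth=16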
You really need to look at the manufacturer's recommendations on queue depth settings for their arrays.
For example, I work with HDS 9900s a lot, and there is a formula we use to calculate queue depth: 256 divided by the number of LUNs configured on the port that you are presenting to the host. This number should never exceed 32. 8 is generally a good number, but if you have a large number of LUNs on the port even that may be too high, and you will see significant performance degradation.
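To make the formula concrete (these LUN counts are made up for illustration):

256 / 4 LUNs  = 64  -> use the 32 maximum instead
256 / 16 LUNs = 16
256 / 40 LUNs = 6.4 -> round down to 6; even the default of 8 would be too high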
Good luck.
Andrew Rowland
New Member

Re: IBM SAN connected to HPUX

Thanks for the advice concerning whether to use kmtune or scsictl. I suppose kmtune has the advantage of surviving a reboot, while scsictl has the advantage that you can set different depths on different LUNs. For example, you could presumably have a bigger queue depth on, say, a production LUN and a smaller depth on a development one.
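I imagine the per-LUN approach would look something like this (the device files are invented for illustration, and as Keith notes the setting reverts to the default once the device is closed and reopened):

# hypothetical production LUN: deeper queue
scsictl -m queue_depth=32 /dev/rdsk/c5t0d1
# hypothetical development LUN: shallower queue
scsictl -m queue_depth=8 /dev/rdsk/c7t0d2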

I think that in total we have something like 5 LUNs pointing to the SAN: 2 on our production host (1 via each HBA/switch) and 3 more from various development machines. There is around a TB of storage and obviously many disk spindles.

Without getting too obsessed with queue depths, the mystery remains why throughput to this storage should be twice as slow as the internal arrays. The general consensus amongst the hardware experts and sysadmins is that the SAN is just slow, and that's it. I was just hoping that someone might have been through the same loop of attaching an IBM FAStT SAN to HP-UX and solved the performance problems for us!

I'm sorry not to be able to give Eugeny the output of his commands. The politics at this site prevent me (as DBA) from easily being able to provide these answers.
Vincent Fleming
Honored Contributor

Re: IBM SAN connected to HPUX

Having only 2 LUNs on the SAN for a database is not good in HP-UX. If you put 3-4 LUNs in each volume group, and have volume groups for logs, indices, and dataspaces (3 volume groups, each with 3-4 LUNs, minimum), I'm sure you will see a great increase in performance.

HP-UX seems to have a nearly linear increase in I/O performance for each LUN you add, up to about 4 LUNs in a volume group. After the 3rd or 4th LUN, it seems to level off. This seems to be mostly due to the added concurrency that the additional LUNs provide.

I'm not exactly sure why this is, but it seems to be related to how HP-UX handles disk queues - increasing your queue depth does not seem to help as much as having more LUNs.

So, try using more LUNs.
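For example, something like this with LVM (the volume group, device files and sizes are all invented; your LUN paths will differ):

# each LUN first needs pvcreate /dev/rdsk/...; these device files are invented
vgcreate /dev/vgdata /dev/dsk/c5t0d1 /dev/dsk/c5t0d2 /dev/dsk/c6t0d1 /dev/dsk/c6t0d2
# stripe a 4 GB logical volume across all 4 LUNs with a 64 KB stripe size
lvcreate -i 4 -I 64 -L 4096 -n lvdata /dev/vgdata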

Good luck,

Vince
No matter where you go, there you are.
jfike252
New Member

Re: IBM SAN connected to HPUX

You can check the following:
1. Is the DS4300 fully working?
2. Check the compatibility of the hardware:

https://www-03.ibm.com/systems/support/storage/ssic/interoperability.wss

3. Check all the connections between the host and the storage.

4. Here are some resources:

IBM System Storage DS4000 and Storage Manager V10.30
http://www.redbooks.ibm.com/abstracts/sg247010.html?Open

IBM Midrange System Storage Implementation and Best Practices Guide
http://www.redbooks.ibm.com/abstracts/sg246363.html?Open