01-15-2002 01:43 AM
Disk Storage Performance Question
When I perform a "dd if=filexx of=/dev/null bs=1024k", I can achieve about 40MB/s. But when I perform a "dd if=/dev/vgxx/lvolxx of=/dev/null bs=1024k count=100", I only manage about 13MB/s. Is this performance on external fibre normal? I believe that for a fibre connection, the maximum I/O rate should be 100MB/s.
When I issue a cp command, I can achieve about 40MB/s, but when I run some SAS programs (a very simple PROC PRINT on a large SAS dataset, about 2 GB in size), I can only achieve an I/O rate of 13MB/s. Are these two slow I/O results related? Is there anything I can tune to make the SAS program run faster?
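For reference, rates like these can be timed directly with timex; a minimal sketch, using the same placeholder names as above:
timex dd if=/dev/vgxx/lvolxx of=/dev/null bs=1024k count=100
dd prints the record counts, and dividing the bytes transferred by the real time that timex reports gives the MB/s figure.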
01-15-2002 01:48 AM
Re: Disk Storage Performance Question
What is the RAID level and cache configuration? Are you going through an FC-SCSI mux, or is it a fibre controller array like an FC60?
Some of the FC bandwidth is consumed by 8b/10b encoding and frame overhead. You get few retransmissions on FC, but not necessarily more speed and throughput than Ultra SCSI.
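(As a rough check on that: 1Gb Fibre Channel signals at 1.0625 Gbaud, and 8b/10b encoding carries 8 data bits per 10 bits on the wire, so the payload ceiling is about 1.0625 x 8/10 = 0.85 Gb/s, roughly 100MB/s before frame and protocol overhead.)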
Bill
01-15-2002 01:50 AM
Re: Disk Storage Performance Question
You didn't mention the external disk array model. Yes, fibre can support up to 1 Gb/s.
Regards
Harpreet Singh Chana
01-15-2002 02:09 AM
Re: Disk Storage Performance Question
I am using a StorageTek 9176 (with two storage processors and 1 GB of cache) with 1 x 9170 disk array (10 x 36 GB HDDs). The 9176 comes with 4 x FC ports; the host bus adapter on my HP-UX box connects to one of those FC ports.
I have configured the disk array with one RAID 5 set (of nine disks); the remaining disk is configured as a hot spare.
I agree that the disks are probably the bottleneck, but a rate of 13MB/s is really way too low.
It seems that when I do a dd if=filexx it uses the cache, and when I do a dd if=/dev/vgxx/lvolxx it does not.
The behaviour is similar when I compare a cp against running a SAS program.
This leads me to think that the SAS program I run bypasses the cache on the 9176.
The conclusion I have drawn sounds a bit ridiculous; I really need some third-party view on this.
01-15-2002 02:51 AM
Re: Disk Storage Performance Question
How is the SAS dataset connected to your computer? You mentioned that a dd on the logical volume was slow; if SAS points at the logical volume for its storage, then that may well be your answer.
IMHO RAID 5 is not a good choice for a database, as a database is likely to do random writes, and for random writes RAID 5 runs at about half the speed of one disk! But if you really only do large writes (a large write being as big as one stripe), then you'll probably be OK.
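(A rough arithmetic sketch of that penalty: each small random write on RAID 5 costs four disk I/Os (read old data, read old parity, write new data, write new parity), so the array sustains only about a quarter of its spindles' combined random-write rate, and a single write stream can easily see less than one disk's worth of throughput.)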
Tim
01-15-2002 03:00 AM
Re: Disk Storage Performance Question
That is correct. To be even more sure, you could/should use rlvolxx (note the leading "r") in your second test. For the 'r'/raw/character device, the system *does not* use the (filesystem) buffer cache. For the non-'r'/block device, the system *probably* does not use the buffer cache either, but it is better to be sure and use 'r'.
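A minimal sketch of that raw-device test, with the same placeholder names as the earlier commands:
dd if=/dev/vgxx/rlvolxx of=/dev/null bs=1024k count=100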
01-15-2002 03:39 AM
Re: Disk Storage Performance Question
As I mentioned in my earlier posting, if I perform a dd if=filexx I can achieve 40MB/s, and when I perform a dd if=/dev/vgxx/lvolxx I can only achieve 13MB/s.
The filexx is sitting on the /dev/vgxx/lvolxx volume.
My SAS program accesses filexx and performs at an I/O rate of 13MB/s, but when I perform a cp filexx, I can achieve 40MB/s.
When the SAS program accesses the file, it points at filexx and does not use the device name. So in this case, shouldn't it perform at 40MB/s like the cp command?
If I could, I would really like to do RAID 0+1, but given the number of disks and the amount of data to be stored, I have to settle for RAID 5.
Hi Frank,
The cache I am referring to is the read-ahead cache on the 9176. If I disable the read-ahead cache on the 9176, I get about 13MB/s for the cp command as well.
I have tried tuning the server buffer cache, but it did not help at all.
Any other suggestions? Has anyone run the above configuration with much better performance than I am getting?
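One hedged aside on the buffer cache tuning mentioned above: on HP-UX, the dynamic buffer cache is bounded by the dbc_min_pct and dbc_max_pct kernel parameters, which can be inspected with kmtune before changing anything:
kmtune -q dbc_min_pct
kmtune -q dbc_max_pct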
01-15-2002 04:35 AM
Re: Disk Storage Performance Question
As far as the SAS thing goes, I agree it sounds weird. To me there are probably two possible reasons for this:
o SAS is actually writing to the logical volume and NOT to the cooked VxFS filesystem (you do use VxFS?). This means it bypasses the HP-UX buffer cache. You can partially check this by looking at the SAS processes in glance with the "F" (open files) option, if you have glance.
....OR....
o SAS has some form of logging or other process that is slowing down the transfer. Is your system a logged system? I'm afraid I do not know SAS (I'm an Informix person), so I'm assuming SAS does something similar; most things do. If so, you might find that you are getting disk contention within your RAID 5 LUN. How to check this, I do not know... you need an expert... oh, you already mentioned this.
All of the above is purely speculative.
01-15-2002 08:09 AM
Re: Disk Storage Performance Question
Unfortunately, I've never experimented with a StorageTek 9176, but 10 x 36GB disks should have plenty of bandwidth to saturate a Fibre Channel link, if you can get several of them going at a time.
What version of HP-UX are you running, and which filesystem patches (in particular, have you installed PHKL_23240 or PHKL_25022)?
How was your filesystem built and tuned? Can you post the output from:
fstyp -v /dev/vgXX/lvolXX
vxtunefs /mount_point
There are a couple of vxfs tunables (set on a per-filesystem basis by vxtunefs) that are particularly important for sequential I/O. On HP-UX 11i and vxfs 3.3:
read_nstream, write_nstream - by default, these are set to 1, meaning you don't get any readahead or write-behind. A record size of 1MB will by itself probably get more than one disk transferring at a time (this will depend on how you built the volume group on the LUNs in the disk array and how the array itself lays out data), but it will not be enough to saturate a Fibre Channel link.
discovered_direct_iosz - this tunable sets the record size at which vxfs transparently switches from normal buffered I/O to direct I/O, bypassing the buffer cache. While this eliminates the bcopy that moves the data into the user program, and eliminates some OS overhead, it also prevents readahead (since there is no buffering to hold data the program hasn't actually requested yet). Try setting this parameter to something really big, like 16MB.
Note that tuning a filesystem for sequential, large-block I/O won't generally give good performance for the small-block, random I/O that databases do.
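As a sketch of how those tunables might be set (the mount point /sasdata is a placeholder; discovered_direct_iosz is given in bytes, and settings can be made persistent in /etc/vx/tunefstab):
vxtunefs -s -o read_nstream=4 /sasdata
vxtunefs -s -o discovered_direct_iosz=16777216 /sasdata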
01-23-2002 06:06 PM
Re: Disk Storage Performance Question
I don't have the two patches installed, and I couldn't find the two patches you mentioned in the HP patch download. Are these two patches meant for 11.00?
What do these two patches do?
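A hedged way to check whether the patches are installed, since HP-UX patches register as SD-UX products:
swlist -l product | grep -E 'PHKL_23240|PHKL_25022'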
Following is the fstyp output for one of my volumes:
fstyp -v /dev/vg02/elvol1
vxfs
version: 4
f_bsize: 8192
f_frsize: 1024
f_blocks: 1024000
f_bfree: 885010
f_bavail: 829709
f_files: 233300
f_ffree: 221252
f_favail: 221252
f_fsid: 1073872897
f_basetype: vxfs
f_namemax: 254
f_magic: a501fcf5
f_featurebits: 0
f_flag: 16
f_fsindex: 6
f_size: 1024000
When I try to run vxtunefs on my filesystem, I get the following error message:
vxfs vxtunefs: Cannot get parameters for /devenv: Function is not available
In my testing, even when I use a smaller block size (e.g. 8k, 64k), I still get the same results.
I am in the process of working with StorageTek support to resolve this issue. If I make any further progress, I will post the results here.
Thanks for all who have replied.
04-17-2002 11:35 AM
Re: Disk Storage Performance Question
For example, 100Mbit Ethernet works out to a theoretical 12.5MB/sec, and you are happy to see 10-11MB/sec.
Gigabit Ethernet works out to a theoretical 125MB/sec, and you are happy to see only 60-70MB/sec.
The same goes for 1Gb Fibre Channel. On HP I have seen a maximum of 65-70MB/sec, and this depends on what type of storage device you are attached to.
If you are attached to a large storage device such as an EMC and pumping 70MB/sec to it, and all the ports on that same bus are pumping at maximum throughput, the bus on the EMC side can be the bottleneck.
As to why 13MB/sec: it is because you are using the block device /dev/vgXX/lvolXX. This is normal. It has nothing to do with the cache on the array unit or the UNIX filesystem buffer cache.
You should use the raw device if you want maximum throughput; this is the case for all RDBMSs.
Rebuild your database using raw devices, set the multiblock read size to whatever your underlying array unit likes best, and be happy.
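A minimal sketch of that raw-device setup under HP-UX LVM, with hypothetical names (a 2 GB volume in vg02):
lvcreate -L 2048 -n lvsasdata /dev/vg02
The database would then be pointed at the character device /dev/vg02/rlvsasdata rather than at a filesystem file.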