Operating System - HP-UX
11-19-2010 08:50 AM
throughput question
Server: rx6600
Direct-attached to a P2000, using only one controller.
HBA: 4 Gbit FC.
Created a RAID 5 LUN and presented it to the rx6600, then created a VG and an lvol (no striping).
I tried to load up the filesystem with dd copies, and max throughput seems to be around 380 MB/s on the dd's, both reads and writes.
What should I be seeing with this type of test?
Also, as an aside: if I run a long-running DB query, I never see blks/s get above 7710.
11:02:29   device   %busy   avque   r+w/s   blks/s   avwait   avserv
Average    disk25   96.91   0.50    481     7710     0.00     2.02
It's almost like the DB is not taking advantage of the speed of the array....
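For reference, the kind of sequential load test described above would look something like this (paths and sizes here are hypothetical, not from the original post):

  # ~4 GB sequential write, then read the file back
  dd if=/dev/zero of=/mnt/testfs/ddtest bs=1024k count=4096
  dd if=/mnt/testfs/ddtest of=/dev/null bs=1024k
  # watch the device from another session while dd runs
  sar -d 5 12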
3 REPLIES
11-19-2010 11:44 AM
Re: throughput question
Hi Charles,
> HBA: 4 Gbit FC.
> Created a RAID 5 LUN and presented it to the rx6600, then created a VG and an lvol (no striping).
> I tried to load up the filesystem with dd copies, and max throughput seems to be around 380 MB/s on the dd's, both reads and writes.
> What should I be seeing with this type of test?
You mention an HBA speed of 4 Gbit.
If only one FC HBA is connected then, of course, ~380 MB/s is about the maximum performance that can be attained: divide the link rate by 10 (8 data bits plus 8b/10b encoding overhead per byte), so 4 Gbit/s / 10 = ~400 MB/s.
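As a back-of-the-envelope check (plain awk, nothing HP-UX-specific):

  # link rate in Gbit/s divided by 10 ~= payload MB/s (8b/10b: 10 line bits per data byte)
  awk 'BEGIN { link = 4; printf "~%d MB/s ceiling for a %dGb FC port\n", link * 100, link }'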
> Also, as an aside: if I run a long-running DB query, I never see blks/s get above 7710.
> 11:02:29   device   %busy   avque   r+w/s   blks/s   avwait   avserv
> Average    disk25   96.91   0.50    481     7710     0.00     2.02
Disk IO performance is mostly not restricted by the HBA speed, but by the maximum number of IOs per second the disks can deliver.
So always check how many IOs the system is doing and what the size (in KB) of the average IO is.
In this case the IO rate is 481/s, and the average IO size is 7710 blocks/s / 481 IO/s = ~16 blocks per IO; at 512 bytes per block (divide by 2 to get KB), that is 8 KB IOs.
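That calculation as a throwaway one-liner (field positions taken from the sar -d line above):

  # fields: device %busy avque r+w/s blks/s avwait avserv; 1 block = 512 bytes
  echo "disk25 96.91 0.50 481 7710 0.00 2.02" | \
    awk '{ printf "%s: %d IO/s, avg IO size %.1f KB\n", $1, $4, $5 / $4 / 2 }'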
If the IO has to come from the disks the R5 LUN consists of, instead of from the disk array's cache, then the max number of IOs equals (number of disks in the LUN) * ~110 IO/s.
In the above case the avserv is sufficiently low (2.02 ms) that, on 11.31, I would increase max_q_depth for the disk25 LUN with scsimgr, to see if more IO/s can be reached without impacting avserv too much (keep it lower than 10 ms).
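Something along these lines (a sketch for 11.31; the device special file is assumed from the sar output above, and 16 is only an example queue depth):

  scsimgr get_attr -D /dev/rdisk/disk25 -a max_q_depth      # show the current queue depth
  scsimgr set_attr -D /dev/rdisk/disk25 -a max_q_depth=16   # change it for the running system
  scsimgr save_attr -D /dev/rdisk/disk25 -a max_q_depth=16  # make the change persistent
  sar -d 5 12   # re-check r+w/s and avserv; back off if avserv climbs toward 10 ms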
Greetz,
Chris
11-19-2010 11:52 AM
Re: throughput question
Chris,
Thanks for the info.... this particular LUN has 10 disks.
10 * 110 = 1100 IOs per second?
Where does the 110 come from again?
Also, I've created an Oracle data VG with one large LUN instead of several smaller LUNs. Should I be concerned about the avserv time for this LUN in this config?
11-20-2010 03:44 PM
Re: throughput question
Hi Charles,
> 10 * 110 = 1100 IOs per second?
Something like that. The RAID level also plays a role, but I don't have immediate rules of thumb for how it impacts performance ;)
> where does the 110 come from again?
http://forums11.itrc.hp.com/service/forums/questionanswer.do?threadId=63977
"Remember, the average access time for a disk is about 8,000 to 10,000us (8-10milliseconds) "
8-10 milliseconds for a disk means 100-110 IO/sec . The above was for the fc disks of a va4710 diskarray..
More expensive (FC/SAS) disks do more IOs per second (IOPS), something like 160-180 (not too sure about that figure, could be more or less). And of course SSDs do a lot more IOPS, at least for reading; I have seen figures of over 2200 IOPS (but of course very expensive...).
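The arithmetic behind those rules of thumb, as a quick one-liner:

  # IOPS ~= 1000 / (average access time in ms) for a rotational disk
  awk 'BEGIN { for (t = 8; t <= 10; t++) printf "%d ms access time -> ~%d IO/s per disk\n", t, 1000 / t }'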
Greetz,
Chris