yet another disk io question
02-01-2007 03:10 AM
I've been browsing the forums and gathering some data about how to trace an I/O problem with the disks.
There are a lot of questions about this, and after reading the responses I'm still a bit lost about how to trace my problem.
We have one DL380 with disks in RAID 1, running RH 7.3 (old, I know). A custom application runs on it, and the developers are always seeing a delay of about 4 ms on every write the application does.
I'm trying to find out whether this is an OS problem or an application problem, and I've started looking at the I/O data by following your posts about it.
Running iostat -x 300 10 I get these results:
Device:            rrqm/s wrqm/s  r/s  w/s rsec/s wsec/s rkB/s wkB/s avgrq-sz avgqu-sz  await  svctm  %util
/dev/cciss/c0d0      0.00   1.34 0.00 2.00   0.00  26.85  0.00 13.43    13.43     0.62 311.00  62.83   1.26
/dev/cciss/c0d0p5    0.00   1.34 0.00 2.00   0.00  26.85  0.00 13.43    13.43     0.62 311.00  62.83   1.26
/dev/cciss/c0d1      0.01   0.07 0.01 0.52   0.19   4.77  0.09  2.39     9.36     0.31 584.91 340.88   1.81
If I read this correctly, a mean await of around 300 ms (0.3 s) seems too big?
CPU is 99% idle, memory and network are OK. The CCISS driver version is 2.4.50 (from running strings on cciss.o) on kernel 2.4.18-3smp.
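One quick way to check whether the 4 ms is already visible at the syscall level (rather than somewhere inside the application) would be to time the application's write() calls with strace; the PID below is just a placeholder:

strace -T -tt -e trace=write -p <app_pid>

The time printed in angle brackets after each line is how long that write() spent in the kernel, which should show whether the delay is in the I/O path at all.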
02-01-2007 06:54 AM
Re: yet another disk io question
02-01-2007 07:45 PM
Re: yet another disk io question
procs                    memory    swap          io     system        cpu
 r  b  w   swpd   free   buff  cache  si  so   bi   bo   in   cs  us sy  id
 0  0  0      0   4864   8320 206276   0   0    1   15  116   25   0  0 100
 0  0  0      0   3944   8624 206692   0   0    2   16  119   58   0  0 100
02-01-2007 09:28 PM
Re: yet another disk io question
You could test the 2.4.20 kernels from fedoralegacy.org
http://ftp.funet.fi/pub/mirrors/download.fedoralegacy.org/redhat/7.3/updates/i386/kernel-smp-2.4.20-46.7.legacy.athlon.rpm
http://ftp.funet.fi/pub/mirrors/download.fedoralegacy.org/redhat/7.3/updates/i386/kernel-smp-2.4.20-46.7.legacy.i586.rpm
http://ftp.funet.fi/pub/mirrors/download.fedoralegacy.org/redhat/7.3/updates/i386/kernel-smp-2.4.20-46.7.legacy.i686.rpm
You may find other important updates for the Red Hat 7.3 box on the same FTP site.
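A minimal install sketch, assuming the i686 SMP package is the right one for this DL380 (installing with -i instead of -U keeps the current kernel available as a fallback):

rpm -ivh kernel-smp-2.4.20-46.7.legacy.i686.rpm

Check the boot loader configuration (grub.conf or lilo.conf) for the new entry before rebooting.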
02-01-2007 09:53 PM
Re: yet another disk io question
02-02-2007 12:47 AM
Re: yet another disk io question
What I would do is test the performance of the raw device with the dd command or a tool like Iometer, and try to stress the disks to find the maximum I/O the subsystem can deliver. If you can reach high I/O rates with the measurement tools, then the problem should be traced in the application.
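For example, a simple non-destructive sequential read test against the second logical drive from the iostat output might look like this (reading from the raw device is safe; writing to it would destroy data):

time dd if=/dev/cciss/c0d1 of=/dev/null bs=8k count=131072

That reads 1 GB, so dividing 1 GB by the elapsed time gives the sequential read rate of the array.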
02-02-2007 02:13 AM
Re: yet another disk io question
Again, thanks a lot for your time; it's hard to get support for this kind of thing.
02-02-2007 02:27 AM
Solution
http://www.mjmwired.net/kernel/Documentation/cciss.txt
You may try tuning the file system. I don't know whether you can tune the I/O elevators in that kernel version, but see the article here:
http://www.redhat.com/magazine/008jun05/features/schedulers/
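On 2.4 kernels the elevator can be inspected and adjusted per block device with elvtune from util-linux, if it is installed on the 7.3 box; the values below are only illustrative:

elvtune /dev/cciss/c0d0
elvtune -r 1024 -w 2048 /dev/cciss/c0d0

The first command shows the current read/write latency settings, the second sets them; lower latencies generally trade some throughput for responsiveness.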
02-02-2007 05:03 AM
Re: yet another disk io question
time dd if=/dev/zero of=./rawfile bs=8k count=204800
Time:
real 1m0.838s
user 0m0.220s
sys 0m12.900s
1.6G rawfile
While vmstat 10 showed this:
    io           system      cpu
    bi     bo    in    cs  us sy  id
     0     11   116    32   0  1  99
     0  13140   211    93   1 17  83
     1  24780   237    64   0 22  77
     1  25468   237    56   0 23  76
    10  25449   244    79   1 23  76
     4  26967   245    85   1 23  77
     1  25205   257    81   0 24  76
     5  23331   257    75   0 17  83
     6     29   127    47   0  1  98
That works out to about 1.6 GB in 61 seconds, roughly 26 MB/s, which matches the ~25,000 blocks (of 1 KB) per second that vmstat reports under bo. On a new and faster server I got around 80,000 bo's, so this seems a bit slow... but I don't know; we're comparing 4-year-old disks against a new platform, and I don't have many numbers to compare with.
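One refinement, in case it matters: dd returns before the page cache is fully flushed, so including a sync in the timed command gives a slightly more honest figure, something like:

time sh -c 'dd if=/dev/zero of=./rawfile bs=8k count=204800 && sync'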
02-02-2007 05:15 AM
Re: yet another disk io question
For a better test you should use a spare partition and bypass the file system, with commands like these:
Write performance:
for I in `seq 5`; do
dd if=/dev/zero of=/dev/cciss/c0d0p6 bs=8k count=131072 &
done
Read performance:
for I in `seq 5`; do
dd if=/dev/cciss/c0d0p6 of=/dev/null bs=8k count=131072 &
done
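To get a single elapsed figure for the five parallel readers (the same idea works for the write loop, which will overwrite whatever is on c0d0p6), the loop can be wrapped in time with a wait for the background jobs, roughly like this:

time ( for I in `seq 5`; do dd if=/dev/cciss/c0d0p6 of=/dev/null bs=8k count=131072 & done; wait )

Five readers of 1 GB each divided by the elapsed time gives the aggregate read rate.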
02-05-2007 08:24 PM
Re: yet another disk io question
02-05-2007 09:38 PM