Operating System - Tru64 Unix
Collect I/O stats report

Christof Schoeman
Frequent Advisor

Collect I/O stats report

Hi

Attached is a graph for the I/O stats of a particular disk on my system, but something does not add up.

The graph shows a wait queue of nearly 6000, yet the number of writes per second is only about 600, with very few reads. My question is: where did the items in the queue come from?

There seems to be a much stronger correlation between the throughput (KB Written/Sec) and the wait queue than between the Reads/Sec or Writes/Sec and the wait queue.

Am I misinterpreting the stats? If so, your advice would be greatly appreciated.

Regards
23 REPLIES
Venkatesh BL
Honored Contributor

Re: Collect I/O stats report

Did you try 'normalizing' the output?
Christof Schoeman
Frequent Advisor

Re: Collect I/O stats report

Hi

After normalizing the graph, it is like counting as the Irish do - one, two, many, lots:-)

The graph now says that there were lots of writes, which resulted in lots of items in the wait queue, causing lots of throughput.

I am currently busy troubleshooting a performance issue that requires exact figures, but the stats don't add up.

Here is how I see it, but perhaps you can point out a flaw in my reasoning:
- Each read and write I/O is placed in the queue of a particular LUN for processing.
- If the reads and writes come in faster than the device can process them, the queue will build up, resulting in delays.

Therefore, if there are 6000 items in the queue, there must have been more than 6000 reads plus writes, because the LUN will continue processing them as they come in.

However, this is not what the "un-normalized" graph says.

Hope you can help.
Mark Poeschl_2
Honored Contributor

Re: Collect I/O stats report

I suspect what you're seeing reflects the fact that some collect data is always normalized over 1 second intervals and other data is an instantaneous snapshot. From the 'collect' man page:

" Normalization of Data

Where appropriate, data is presented in units per second. For example, disk
data such as kilobytes transferred, or the number of transfers, is always
normalized for 1 second. This happens no matter what time interval is
chosen. The same is true for the following data items:

+ CPU interrupts, system calls, and context switches.

+ Memory pages out, pages in, pages zeroed, pages reactivated, and pages
copied on write.

+ Network packets in, packets out, and collisions.

+ Process user and system time consumed.

Other data is recorded as a snapshot value. Examples of this are: free
memory pages, CPU states, disk queue lengths, and process memory."

So: your I/O rates and throughput figures are one-second averages, but the queue depth is an instantaneous snapshot. What interval are you using to collect this data? 'collect' really isn't the ideal tool for short-interval data collection like this. I find 'iostat' or 'advfsstat' (assuming you're on AdvFS) more useful.
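
If you do want short-interval numbers, something along these lines gives you one-second iostat samples to compare against collect's per-second averages (the collect flags here are from memory, so double-check them against the man page; the file name is just an example):

# 60 one-second iostat samples for all disks
iostat 1 60

# record one-second collect samples of just the disk subsystem to a file,
# then play the file back later for analysis
collect -i 1 -s d -f /var/tmp/disk.cf
collect -p /var/tmp/disk.cf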
Victor Semaska_3
Esteemed Contributor

Re: Collect I/O stats report

Christof,

Looking at the graph it's hard for me to tell which line represents what. I suggest you produce 3 graphs instead of one as follows:

Graph 1: Active Queue & Wait Queue
Graph 2: Reads/Sec & Writes/Sec
Graph 3: KB Read/Sec & KB Written/Sec

I suspect you may be interpreting the lines incorrectly. What you think is the wait queue may actually be I/Os per second or KBs per second.

Vic
There are 10 kinds of people, one that understands binary and one that doesn't.
Victor Semaska_3
Esteemed Contributor

Re: Collect I/O stats report

Forgot to mention, don't normalize the data when you produce the three graphs.

Vic
There are 10 kinds of people, one that understands binary and one that doesn't.
Christof Schoeman
Frequent Advisor

Re: Collect I/O stats report

Hi

Did some further digging, but the thick only plottens:-(

Some background - the users sometimes complain that their actions take a long time to complete. This is also reflected in the Oracle database, which sometimes has to wait up to 20 seconds for a transaction to complete because it is waiting for an I/O.

So, I am trying to figure out what is happening on the I/O subsystem.

I saw queues forming on some disks, but no sudden burst of I/Os to those disks that would cause a queue to build up. I wrote a little script that sends a single I/O to such a disk at 1-second intervals, just to see how long the I/O takes to complete. What I found was that the I/O completes in a fraction of a second in most cases, but when there is a queue, it can take up to 20 seconds, which ties in with what Oracle is experiencing.
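
The probe is roughly along these lines (the device name below is just an example; I time with ksh's built-in SECONDS counter, and I read from the raw device so the buffer cache doesn't hide the latency):

#!/usr/bin/ksh
# send one small read to the raw device every second and report how long it took
DEV=/dev/rdisk/dsk10c          # example device - substitute the disk under test
while true
do
    START=$SECONDS
    dd if=$DEV of=/dev/null bs=8k count=1 2>/dev/null
    print "$(date '+%H:%M:%S') elapsed $((SECONDS - START))s"   # 1-second resolution
    sleep 1
done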

I used iostat to collect information about the load on that disk, and used monitor to get the queue length (if you know of a better way to get queue length information, please let me know).

My question is - if very few I/Os are going to a disk, what can cause a queue to build up so badly that a single I/O takes 20 seconds to complete?

Long story, I know, and I hope it makes sense. Any help will be most welcome.
Ivan Ferreira
Honored Contributor

Re: Collect I/O stats report

If you are collecting statistics using collect, then you should have the "process" statistics (-s p). In the process statistics, you can see the IBk and OBk columns. That may guide you to the process that is doing the most I/O.
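
For example (the -s p flag is in the man page; the interval and file name below are just illustrations):

# record process statistics every 10 seconds to a file
collect -s p -i 10 -f /var/tmp/proc.cf
# play it back later and look at the IBk and OBk columns per process
collect -p /var/tmp/proc.cf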
Por que hacerlo dificil si es posible hacerlo facil? - Why do it the hard way, when you can do it the easy way?
Christof Schoeman
Frequent Advisor

Re: Collect I/O stats report

That is the problem. Nobody is generating excessive I/O, and yet a queue builds up.

All the disks in question contain raw volumes used by Oracle.

I'm not too comfortable with the queue stats, though. collect shows queues of up to 2000, whereas monitor only shows queues of 20 or so. Are there better ways of getting disk queue information?
Han Pilmeyer
Esteemed Contributor

Re: Collect I/O stats report

Doesn't sound like normal behavior. Perhaps you should start by describing the configuration and the version. Please don't forget to include information about the storage.
Christof Schoeman
Frequent Advisor

Re: Collect I/O stats report

The system is running Tru64 V5.1B PK4 on a 12 CPU 24GB memory GS1280.

The storage sits on an EMC DMX box, with the data SRDF'ed to a remote site.

We have run similar stats collections on other systems that have storage on the same DMX, but they all look fine.

We also failed the system over to the remote site, but we get the same problem.

You are right, it sounds all wrong. I'm starting to doubt my stats collectors (iostat, collect and monitor).
Han Pilmeyer
Esteemed Contributor

Re: Collect I/O stats report

I wouldn't trust the queue depth from monitor. That program hasn't really been maintained in this millennium, and there have been changes around the device statistics.

I just happened to verify that, for the BL24 (PK3) release and newer, the storage device statistics reported by collect are correct.

Is it possible that the SRDF link is "stalled" when you see those high I/O queues?

I'm not sure that I could match the colors in the graph to the statistics. Could you perhaps present cfilt output for a similar event?

What does the DDR entry for the EMC device look like? I assume EMC configured that correctly for you, right?
Aco Blazeski
Regular Advisor

Re: Collect I/O stats report

Hi to everyone,

A possible cause for such behaviour is the synchronization between the two DMXes, that is, SRDF. Check whether the synchronization between the two DMXes is synchronous or asynchronous.

We also have a GS80 connected to a Symmetrix box that is SRDF'ed to a remote site, and when we turn synchronization on, we see bad disk performance on the disks that are synchronizing.

The other systems on the DMX that work fine at your site may not be synchronizing with the remote site.

You could also try turning synchronization between the DMX boxes off completely for a while and then checking the performance.

Another troubleshooting step would be to look at disk usage on the DMXes through the EMC Control Center software, rather than from the server side (i.e. monitor, iostat, collect...).

Hope this will help
Regards
Christof Schoeman
Frequent Advisor

Re: Collect I/O stats report

Hi

These are brilliant ideas. I know, because I tried them as well :-)

Even with SRDF completely out of the picture (split), the problem still occurs.

DDR entries are correct (verified that).

The EMC engineers are about to start a trace on the FAs that this system is connected to. I'll also be collecting stats with iostat, monitor and collect at 1-second intervals. I'll be posting some graphs soon.

Thanks for your help so far.
Christof Schoeman
Frequent Advisor

Re: Collect I/O stats report

While we're waiting for EMC to analyze their trace info, here are some results.

I wrote a little script that sends a single I/O to each disk and then measures the time it takes for the I/O to complete. In this short collection window, there was at least one I/O that took 8 seconds to complete. The delays sometimes reach 20 seconds and occur on each disk (independently) within a 30-minute period. This particular I/O was issued at exactly 13:38:37 and completed 8 seconds later. Now look at the graphs...

There isn't much happening on the disk, but a queue length of 36 pops out of nowhere, the service times shoot up and the I/O takes forever to complete.

Oh the humanity!
Han Pilmeyer
Esteemed Contributor

Re: Collect I/O stats report

Can you post the results of a "hwmgr -show fibr -adapt"?
Ivan Ferreira
Honored Contributor

Re: Collect I/O stats report

Just to make sure: aren't you seeing any swapping/paging activity?

Is the swap area OUT of the SAN? Swap devices should be located on local disks.
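
A quick way to check (standard commands, nothing beyond what the man pages describe):

# swap devices and their utilization
swapon -s
# paging activity (page-in/page-out columns), 5-second samples
vmstat 5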
Por que hacerlo dificil si es posible hacerlo facil? - Why do it the hard way, when you can do it the easy way?
Christof Schoeman
Frequent Advisor

Re: Collect I/O stats report

Paging/swapping activity is negligible.

Adapters:

# hwmgr -show fibr -adapt

ADAPTER LINK LINK FABRIC SCSI CARD
HWID: NAME STATE TYPE STATE BUS MODEL
--------------------------------------------------------------------------------
786: emx7 up point-to-point attached scsi11 FCA-2384

Revisions: driver 2.14 firmware 1.90A4
FC Address: 0x6a0070
TARGET: -1
WWPN/WWNN: 1000-0000-c93e-60ae 2000-0000-c93e-60ae

ADAPTER LINK LINK FABRIC SCSI CARD
HWID: NAME STATE TYPE STATE BUS MODEL
--------------------------------------------------------------------------------
51: emx0 up point-to-point attached scsi3 FCA-2384

Revisions: driver 2.14 firmware 1.90A4
FC Address: 0x650071
TARGET: -1
WWPN/WWNN: 1000-0000-c93e-ca10 2000-0000-c93e-ca10

ADAPTER LINK LINK FABRIC SCSI CARD
HWID: NAME STATE TYPE STATE BUS MODEL
--------------------------------------------------------------------------------
928: emx9 up point-to-point attached scsi12 FCA-2384

Revisions: driver 2.14 firmware 1.90A4
FC Address: 0x21300
TARGET: -1
WWPN/WWNN: 1000-0000-c93e-615a 2000-0000-c93e-615a

ADAPTER LINK LINK FABRIC SCSI CARD
HWID: NAME STATE TYPE STATE BUS MODEL
--------------------------------------------------------------------------------
955: emx11 down scsi13 FCA-2354

Revisions: driver 2.14 firmware 3.92A2
FC Address: 0x0
TARGET: -1
WWPN/WWNN: 1000-0000-c931-4bb4 2000-0000-c931-4bb4

ADAPTER LINK LINK FABRIC SCSI CARD
HWID: NAME STATE TYPE STATE BUS MODEL
--------------------------------------------------------------------------------
960: emx13 up point-to-point attached scsi14 FCA-2384

Revisions: driver 2.14 firmware 1.90A4
FC Address: 0x6b0002
TARGET: -1
WWPN/WWNN: 1000-0000-c93e-61c2 2000-0000-c93e-61c2
Han Pilmeyer
Esteemed Contributor

Re: Collect I/O stats report

There's a firmware issue with most of the HBAs you use (FCA-2384). Based on the reports, I don't think that issue is the problem you describe, but you may want to upgrade the firmware nevertheless.

http://h20000.www2.hp.com/bizsupport/TechSupport/DriverDownload.jsp?pnameOID=341798&locale=en_US&taskId=135&prodTypeId=12169&prodSeriesId=341796&swEnvOID=1048

There is NO issue at all having swap on SAN storage.
Christof Schoeman
Frequent Advisor

Re: Collect I/O stats report

The system is scheduled for a firmware and patch upgrade in the very near future.

Besides collect and monitor, is there another way to get disk queue information?
Rob Urban
Advisor

Re: Collect I/O stats report

I am the original author of collect. I haven't maintained it for years, but I still know something about the data-collection landscape at digi.., uh, compa, uh, HP.

Collect isn't always correct, because it sits directly on top of kernel data structures, and those have a nasty habit of changing from release to release. However, collect is mostly correct, and if collect can't give you the info, it's very unlikely that anything else can, with the exception of some subsystems that have data-extraction tools built by the same groups that develop the kernel subsystems, for example lsm and advfs. iostat was never very useful, because nobody loved it, and sar, sigh, was a miscarriage. When I left Digital ('99), monitor was already falling behind with respect to Tru64.

Part of the problem with collecting I/O data is that it may not be properly maintained in the kernel, which means you're SOL no matter what tool you use.

Collect is the best tool for disk I/O.

cheers,

Rob Urban
Johnny Vergeer
Occasional Advisor

Re: Collect I/O stats report

After a hardware & firmware upgrade and PK5 install on the system, we had some spare time in the maintenance slot.

We ran the "I/O test" described above and, with the system totally idle, recorded a worst case of 0.55 seconds during a 45-minute test period.

I would think that this is quite high for a system without any load?
Han Pilmeyer
Esteemed Contributor

Re: Collect I/O stats report

That's a much better number than what you described before (between 8 and 20 seconds), but I agree with you that this is still not a number we would like to see.

Could you give more details about the results of the test:
- You say this is using the same test. Is it reproducible?
- Is it always on the same disk?
- Is the EMC also idle (during the maintenance window) or are other systems using it?
- Are the disks behind the LUN dedicated to the test?
- What did the EMC performance investigation reveal?
- etc.
Christof Schoeman
Frequent Advisor

Re: Collect I/O stats report

Hi

To answer some of your questions:
- Reproducible? Kind of. Under load, we see the 8 to 20 second delays every time. Without load, we don't see these delays.
- We see this on all the disks, at different times.
- No, the EMC is busy serving other systems all the time.
- No, the disks (spindles) serve other hosts as well.
- While our test reported an I/O that took about 10 seconds to complete, the EMC test didn't show any I/O that took more than a second to be served.

This issue has been escalated within HP. We are expecting some assistance with our investigation.

I'll be sure to post our findings.