03-02-2008 05:42 AM
Poor I/O throughput from SAS controller in server rx6600
In our rx6600 server we are facing very poor performance, mainly because of very high I/O.
Server Configuration
Software
OS -- HP-UX 11.23
DB -- Oracle 10g Database
Application -- T24 Application server (Temenos Core Banking Software)
Hardware
Number of CPUs = 8 (4 dual-core)
Memory = 24 GB
Hard disks = 16 (each a 73 GB SAS disk)
While monitoring with glance, we see I/O activity at 100% most of the time.
The server has two SAS controllers, and each controller is connected to 8 hard disks.
We have configured ASM for Oracle. 4 disks from SAS controller 1 and all 8 disks from SAS controller 2 are used for ASM.
In the attached Oracle Enterprise Manager output, DATA_0000, DATA_0001, DATA_0002 and DATA_0003 are disks connected to SAS controller 1, and DATA_0003, DATA_0004, REDO_0000 and REDO_0001 are connected to SAS controller 2.
The average throughput (MB per second) for the disks connected to the 1st SAS controller is very poor compared to the disks connected to the 2nd SAS controller.
In the ASM configuration, DATA_0000, DATA_0001 and DATA_0005 are in one failure group, and DATA_0002, DATA_0003 and DATA_0006 are in the second failure group.
What could be the reason behind this poor throughput? Please help us solve the issue.
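For reference, the failure-group layout described above corresponds, as a hedged sketch, to ASM DDL along the following lines. The disk device paths are hypothetical placeholders, and the statement is only printed here, not executed (it would normally be run in SQL*Plus with SYSDBA privileges):

```shell
# Illustrative only: print the ASM disk group DDL implied by the failure-group
# layout above. Disk device paths are hypothetical placeholders.
show_diskgroup_ddl() {
  cat <<'EOF'
CREATE DISKGROUP DATA NORMAL REDUNDANCY
  FAILGROUP fg1 DISK '/dev/rdsk/diskA' NAME DATA_0000,
                     '/dev/rdsk/diskB' NAME DATA_0001,
                     '/dev/rdsk/diskC' NAME DATA_0005
  FAILGROUP fg2 DISK '/dev/rdsk/diskD' NAME DATA_0002,
                     '/dev/rdsk/diskE' NAME DATA_0003,
                     '/dev/rdsk/diskF' NAME DATA_0006;
EOF
}

show_diskgroup_ddl
```

With normal redundancy, ASM mirrors each extent across the two failure groups, so the disk-to-controller mapping of each group matters for balance.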
Manoj K
03-02-2008 06:37 PM
Re: Poor I/O throughput from SAS controller in server rx6600
Can you share which RAID levels are configured for the SAS disks and how many LUNs there are? To check, run the following:
# ioscan -kfnd ciss
It should show you the ciss instances.
# saconfig /dev/ciss5
Based on my experience, this could simply be a higher I/O demand than you expected. Several factors could contribute: disk speed and type, a saturated I/O bus, etc.
For Oracle, it would be good to consider moving the redo/undo/archive logs to RAID 1+0 volumes, given the high I/O requirements of those files.
For the datafiles, RAID 5 should be fine, but that depends on your requirements.
Post the following output for a better understanding:
# ioscan -funC disk
# df -k
# iostat 1 100
Let us know.
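Once the `iostat 1 100` samples are in hand, a quick way to compare the controllers is to average the bps column per device. This is only a sketch: the column layout (device, bps, sps, msps) is assumed from HP-UX 11i `iostat`, and the sample numbers below are made up:

```shell
# Average the bps (KB/s) column of repeated iostat samples per device.
# Assumes HP-UX iostat's "device bps sps msps" layout; adjust $2 if yours differs.
avg_bps() {
  awk '$1 ~ /^c[0-9]/ { sum[$1] += $2; n[$1]++ }
       END { for (d in n) printf "%s %.1f\n", d, sum[d] / n[d] }' "$@" | sort
}

# Example with made-up sample values (two intervals):
avg_bps <<'EOF'
  device    bps    sps    msps
  c1t0d0    5000   120    1.0
  c2t0d0    9000   150    1.0
  device    bps    sps    msps
  c1t0d0    4600   110    1.0
  c2t0d0    8800   140    1.0
EOF
# -> c1t0d0 4800.0
#    c2t0d0 8900.0
```

A consistently lower average for the c1 disks would confirm the per-controller imbalance seen in Enterprise Manager.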
03-04-2008 09:09 PM
Re: Poor I/O throughput from SAS controller in server rx6600
Thanks for your response, and sorry for the delay in replying.
The rx6600 server has SAS controllers for connecting the internal disks (SAS disks).
So the controller hardware appears as /dev/sasd0, and each SAS controller supports at most two hardware mirrors of disks (two mirrors, 4 disks). As I mentioned in my previous message, we are using ASM for Oracle. In ASM we only need to specify the raw disks; everything else, including high availability, is taken care of by ASM itself. Nothing needs to be done from the OS side on the ASM disks.
We have two SAS controllers and 16 internal disks on the system.
As requested, all the output is attached to this post. Please find it.
Manoj K
03-13-2008 05:26 PM
Re: Poor I/O throughput from SAS controller in server rx6600
Yes. The c1t0d0 disk, which you said is RAID 1, seems to have a high I/O load. Is there any filesystem residing on it, or is it your Oracle raw device?
Post the output from:
# vgdisplay -v
Rgds
03-14-2008 09:04 PM
Re: Poor I/O throughput from SAS controller in server rx6600
Please find the attachment for full details.
Manoj K
03-18-2008 05:14 PM
Re: Poor I/O throughput from SAS controller in server rx6600
I think you have the failure groups NOT spread evenly across the two controllers.
Instead of posting any more attachments, how about filling in the following:
controller 1
disk 1,2 IR volume 1: OS BOOT
disk 3 ASM: ????
disk 4 ASM: ????
disk 5 ASM: ????
disk 6 ASM: ????
disk 7 ASM or UNUSED: ????
disk 8 ASM or UNUSED: ????
controller 2
disk 1,2 IR volume 1: ????
disk 3,4 IR volume 1: ????
disk 5 ASM: ????
disk 6 ASM: ????
disk 7 ASM: ????
disk 8 ASM: ????
Better output would come from glance, using the "v" or "u" screens for the disk statistics.
Also, instead of using ASM failure groups, did you consider using LVM mirrored pairs and putting the raw database volumes on raw logical volumes?
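If the LVM route were taken, the commands involved would look roughly like the following. This is illustrative only: the device names, size, and group-file minor number are hypothetical, and the script prints the commands rather than executing them. The point of the pairing is that each mirror half sits on a different SAS controller:

```shell
# Print (not run) a hypothetical HP-UX LVM sequence that mirrors one disk from
# controller 1 against one from controller 2 with MirrorDisk/UX (-m 1).
mirror_pair_cmds() {  # $1 = disk on controller 1, $2 = disk on controller 2
  cat <<EOF
pvcreate /dev/rdsk/$1
pvcreate /dev/rdsk/$2
mkdir /dev/vgasm
mknod /dev/vgasm/group c 64 0x040000
vgcreate /dev/vgasm /dev/dsk/$1 /dev/dsk/$2
lvcreate -m 1 -L 69000 -n lvdata1 /dev/vgasm
EOF
}

mirror_pair_cmds c1t3d0 c2t3d0
```

Unlike ASM failure groups, this keeps the mirroring decision in the OS, where the controller placement of each half is explicit.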
05-10-2008 01:39 AM
Re: Poor I/O throughput from SAS controller in server rx6600
I am closing this thread; I am still not able to solve the exact issue.
Manoj K
05-11-2008 05:22 PM
Re: Poor I/O throughput from SAS controller in server rx6600
Please post the info TTr requested; it would help identify the root cause of the problem. Without the comprehensive 'vgdisplay -v' output, it is rather hard to tell. Yes, you did give 'vgdisplay -v vg00', but that is not enough.
However, I've spent some time studying your outputs. My bet is that you have a huge I/O load on the c1 controller. Try to balance the I/O load by considering migrating some of the volume groups to the c2 controller.
It also seems the I/O is high on c1t0d0 (values 4763-6xxx), which contains your OS. I've begun to suspect it may be a paging space issue, or that one of the filesystems in vg00 is the culprit.
Post the following output:
# swapinfo -tam
# vmstat 1 20
# kctune
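As a sketch of what to look for in the `vmstat 1 20` output: sustained non-zero values in the po (page-out) column would support the paging-space theory. The field position (9 on HP-UX 11i, after r b w avm free re at pi) and the sample numbers below are assumptions:

```shell
# Count vmstat intervals that show page-outs (po column, assumed field 9).
# Skips the two header lines HP-UX vmstat prints.
count_pageouts() {
  awk 'NR > 2 && $9 + 0 > 0 { n++ } END { print n + 0 }' "$@"
}

# Example with made-up sample lines:
count_pageouts <<'EOF'
         procs           memory                   page
    r     b     w      avm    free   re   at    pi   po   fr   de   sr
    2     0     0   500000   12000    5    1     3    0    0    0    0
    3     0     0   500500    9000    6    2     8   42    0    0   30
EOF
# -> 1
```

A count well above zero across the 20 intervals would shift suspicion from the SAS controllers to memory pressure in vg00.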
Rgds
05-13-2008 05:12 AM
Re: Poor I/O throughput from SAS controller in server rx6600
Thanks for the support you are giving on this issue.
I am attaching the details you requested; the output was taken during an off-peak hour.
Thanks again, and awaiting your reply.
Manoj K
05-27-2008 07:14 PM
Re: Poor I/O throughput from SAS controller in server rx6600
Quite busy with some work lately...
Anyway, I took a look at the output, and it doesn't show relevant info that tallies with what you sent earlier (iostat and the others). Maybe because it was run during off-peak hours?
Rgds