11-05-2001 03:49 AM
BEST performances i/o
Four disks (EMC) connected to a D390 machine.
One file system with distributed allocation across these four disks.
The number of I/Os is > 200/sec.
The I/O size is <= 8 KB.
More than 35,000 files < 4 KB.
How can I improve performance?
Striping? How many disks and what stripe size?
Anything else?
Thanks
11-05-2001 03:54 AM
Re: BEST performances i/o
Make sure that there is a relatively good traffic balance across all your I/O channels.
ioscan -fnkCdisk
and try to optimise the bus paths you access the disks through: put half through one bus and the other half through the other bus.
This is done at vgcreate time.
Later,
Bill
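Bill's advice can be sketched as a small script. The device names (`c0*`, `c1*`) and the `vgcreate` invocation are hypothetical, assuming the controller number encodes the bus; on a real HP-UX box you would take the hardware paths from `ioscan -fnkCdisk`.

```shell
# Split physical volumes into two groups by controller, so half the
# traffic goes through each bus. Device names here are hypothetical.
BUS0=""
BUS1=""
for d in c0t0d0 c0t1d0 c1t0d0 c1t1d0; do
  case "$d" in
    c0*) BUS0="$BUS0 $d" ;;   # disks reached via the first controller
    c1*) BUS1="$BUS1 $d" ;;   # disks reached via the second controller
  esac
done
echo "bus 0:$BUS0"
echo "bus 1:$BUS1"
# The volume group would then be created listing PVs from alternating
# buses (HP-UX only, so commented out here):
# vgcreate /dev/vgdata /dev/dsk/c0t0d0 /dev/dsk/c1t0d0 \
#          /dev/dsk/c0t1d0 /dev/dsk/c1t1d0
```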
11-05-2001 03:59 AM
Re: BEST performances i/o
Tom.
11-05-2001 04:38 AM
Re: BEST performances i/o
Symptoms of a disk-I/O-bound system:
- High disk utilization
- Large disk queue length
- High %wio
- Low buffer cache hit rates
- Large run queue with idle CPU
Tuning a disk-I/O-bound system:
It is important to note that, at this point, there is no way to distinguish between an actual disk mechanism bottleneck and a bus controller bottleneck. That determination would require using devices external to the system. Any steps taken to alleviate disk bottlenecks should be taken with this in mind.
On a given device, determination of which partition is causing high IO can be done in glance, most easily on the IO by Logical Volume screen:
-If the bottleneck is in swap partitions (caused by an unavoidable memory bottleneck), use multiple swap areas of equal size and equal priority to balance the IO across multiple spindles/buses;
-Balance disk I/O across multiple spindles/buses;
-Tune (increase) the Buffer Cache size (this will help file systems only);
-If using Online JFS, use fsadm to tune the file systems;
-The more free space in a file system, the more seek-efficiently data can be placed within its structure;
-Dedicate a group of disks to a particular application, then balance the IO for that application across this group of disks;
-Use mkfs(1M) options when creating file systems. A file system can be created with a particular use in mind. For example, a file system with many cylinder groups and many inodes is designed to place files in locations within the file system structure that provide the most efficient seek times when there are many small files;
-Consider using asynchronous IO (kernel parameter fs_async). Note that asynchronous IO increases the chances of data loss in the event of a system crash;
-Consider using immediate reporting, controlled by the scsictl command or, at 10.x, by the kernel parameter default_disk_ir. This is also a very dangerous option which increases the chances of data loss in the event of a system crash;
-Minimize symbolic links;
-When creating a file system, have the file system block size match the size of the files, if possible;
-Increase ninode if long directory paths are commonly used to access files on the system.
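The tunables mentioned above can be inspected with kmtune(1M). The session below is a hedged sketch for HP-UX 11.x (it will not run elsewhere); the 25% figure is a hypothetical example, not a recommendation from the thread.

```shell
# Query the kernel parameters discussed above (HP-UX 11.x only):
kmtune -q fs_async       # 1 = asynchronous metadata writes (data-loss
                         # risk on a crash, as noted above)
kmtune -q dbc_max_pct    # ceiling on the dynamic buffer cache, % of RAM
kmtune -q dbc_min_pct    # floor on the dynamic buffer cache
# To raise the buffer cache ceiling (hypothetical value) and rebuild:
# kmtune -s dbc_max_pct=25 && mk_kernel    # then reboot
```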
11-05-2001 04:55 AM
Re: BEST performances i/o
When you are under load, run sar -du 5 1 and post the output. This will give an idea of how the disks are performing.
I'm assuming that the EMC is RAID 1 already (not RAID 5...).
As far as LVM striping is concerned, you have a few stripe options:
o Kilobyte stripe
lvcreate -L
For the stripe size you can use 4, 8, 16, 32, or 64 KB. Bear in mind that you cannot do LVM mirroring (you are using H/W RAID 1, right?) or use pvmove with this option, so if you discover LV hotspots it will be difficult to move them to another disk.
o Extent based stripe
make a PVG with the disks in it
lvcreate -L
This will give a stripe width of one extent (usually 4 MB).
Also, Bill's point about balancing the load of the disks over the controllers should be noted and done alongside the above.
In your original question you also gave a profile of what is on the disks. This is useful, as you probably do not want a stripe size of less than 8 KB, but a lot depends on how you use the data:
o Sequential small writes/reads
o Sequential large writes/reads
o Random small writes/reads
o Random small re-writes/reads
A large read/write is one full stripe or more (say an 8 KB stripe width and 4 disks: 4 x 8 KB ==> 32 KB and up).
A small read/write is less than one stripe on one disk (say an 8 KB stripe ==> under 8 KB).
Large Sequential ==> large stripe size
Random small ==> small stripe size
Tim
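Tim's full-stripe arithmetic, plus a hedged reconstruction of the lvcreate lines above (which were most likely truncated by the forum's HTML filter eating angle brackets). The LV sizes and names below are hypothetical, and the HP-UX-only commands are commented out.

```shell
# Full-stripe arithmetic from the post: 4 disks at an 8 KB stripe width.
DISKS=4
STRIPE_KB=8
FULL_STRIPE_KB=$((DISKS * STRIPE_KB))
echo "full stripe = ${FULL_STRIPE_KB} KB"  # I/O of this size or more is "large"

# The truncated lvcreate commands were probably of this general shape
# (sizes/names hypothetical; HP-UX only, so commented out):
#   Kilobyte stripe: 4 stripes, 8 KB stripe size, 4 GB LV
# lvcreate -i 4 -I 8 -L 4096 -n lvdata /dev/vgdata
#   Extent-based stripe: distributed allocation over a PVG
# lvcreate -D y -s g -L 4096 -n lvdata /dev/vgdata
```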
11-05-2001 05:03 AM
Re: BEST performances i/o
I got my I's the wrong way 'round for kilobyte stripes!
lvcreate -L
11-05-2001 05:22 AM
Re: BEST performances i/o
Are the high I/O values for one filesystem only, or is the I/O issue with all the FSs on the EMC disks? Since you mentioned only 4 disks, I was wondering whether this is a single-FS issue or affects the whole EMC I/O channel.
If this is specific to a few filesystems, then look at what is sitting on those filesystems. Generally, for database volumes, hotspots are a problem: particular spots get accessed more often than others. If so, you would need to work with the DBA to spread that FS data onto other volumes.
Also, look at the option of increasing the FS block size of the filesystem (this can be done only for new filesystems).
newfs -b
If the I/O connections are not using alternate paths, try to implement them.
(vgextend
HTH
raj
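Raj's two truncated suggestions can be sketched as follows. Everything here is hypothetical (device files, block size, VG name) and HP-UX only; with LVM, passing vgextend a second device file for a LUN already in the volume group registers it as an alternate link.

```shell
# Larger FS block size, new filesystems only (hypothetical raw device):
# newfs -F vxfs -b 8192 /dev/vgdata/rlvdata

# Alternate path: add a second device file for a LUN already in the VG,
# reached through the other controller (hypothetical devices):
# vgextend /dev/vgdata /dev/dsk/c5t0d0   # same LUN as c0t0d0, other bus
# vgdisplay -v /dev/vgdata               # shows the alternate link status
```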
11-05-2001 05:56 AM
Re: BEST performances i/o
This can often be seen for the /usr/sap/trans filesystem on SAP R/3 installations.
The command to get a fragmentation report is:
# fsadm -F vxfs -D -E /mnt
Defragmentation is done by:
# fsadm -F vxfs -D -d -E -e /mnt
If you don't have Online JFS, you might consider using HFS (note: extremely long fsck recovery times for so many files).
Carsten
In the beginning the Universe was created. This has made a lot of people very angry and been widely regarded as a bad move. -- HhGttG