08-01-2007 11:18 PM
Disk Performance
I have two servers, 03 and 05. Both have an MSA30 attached to them with a terabyte of disk space. 05 is the newer, faster machine. Recently one of the hard drives in its array failed. I replaced the drive and rebuilt the array, and we are now seeing substantial differences in disk I/O.
If I run this:
time dd if=/dev/zero of=./8gbfile bs=8192k count=204
on 03 I get results back in 2 seconds; on 05 it takes 12 seconds. Nothing else has changed on the server except the disks.
Any ideas on what I can do to improve performance? Why would 03 be faster than 05?
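A write of this size (8192k x 204, about 1.6 GB) can complete largely in the buffer cache, so a hedged variant of the same test, which forces the data to disk before the clock stops and then reads it back, may help separate cache effects from disk speed (file name and sizing taken from the command above):

# write ~1.6 GB and include the flush to disk in the timing
time sh -c 'dd if=/dev/zero of=./8gbfile bs=8192k count=204; sync'

# read the file back to compare read throughput on 03 and 05
time dd if=./8gbfile of=/dev/null bs=8192k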
Here is how I built the new array.
In SAM:
Create Volume Group: scratchvg
Selected the disks
Maximum Physical Extents: 17366
Maximum Logical Volumes: 255
Maximum Physical Volumes: 16
Physical Extent size (Mbytes): 64
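For reference, a command-line equivalent of that SAM dialog would look roughly like this (a sketch: the group file minor number is an example, and only the first disks are shown; the rest of the 14 follow the same pattern):

# create the volume group device file first (minor number is an example)
mkdir /dev/scratchvg
mknod /dev/scratchvg/group c 64 0x010000

# initialize each disk for LVM use (repeat for all 14 raw devices)
pvcreate /dev/rdsk/c4t0d0

# create the VG with the same limits SAM used
vgcreate -l 255 -p 16 -e 17366 -s 64 /dev/scratchvg /dev/dsk/c4t0d0 /dev/dsk/c5t0d0   # ...and the other 12 disks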
lvcreate -i 14 -I 64 -n scratchlv scratchvg
Logical volume "/dev/scratchvg/scratchlv" has been successfully created with character device "/dev/scratchvg/rscratchlv".
Volume Group configuration for /dev/scratchvg has been saved in /etc/lvmconf/scratchvg.conf
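(For clarity, my reading of the lvcreate manpage: -i 14 stripes the logical volume across 14 physical volumes, and -I 64 sets a 64 KB stripe size; both are fixed at creation time.)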
export STRIPE='/dev/dsk/c4t0d0 /dev/dsk/c5t0d0 /dev/dsk/c4t1d0 /dev/dsk/c5t1d0 /dev/dsk/c4t2d0 /dev/dsk/c5t2d0 /dev/dsk/c4t3d0 /dev/dsk/c5t3d0 /dev/dsk/c4t4d0 /dev/dsk/c5t4d0 /dev/dsk/c4t5d0 /dev/dsk/c5t5d0 /dev/dsk/c4t8d0 /dev/dsk/c5t8d0'
echo $STRIPE
/dev/dsk/c4t0d0 /dev/dsk/c5t0d0 /dev/dsk/c4t1d0 /dev/dsk/c5t1d0 /dev/dsk/c4t2d0 /dev/dsk/c5t2d0 /dev/dsk/c4t3d0 /dev/dsk/c5t3d0 /dev/dsk/c4t4d0 /dev/dsk/c5t4d0 /dev/dsk/c4t5d0 /dev/dsk/c5t5d0 /dev/dsk/c4t8d0 /dev/dsk/c5t8d0
export STRIPE='/dev/dsk/c1t0d0 /dev/dsk/c2t0d0 /dev/dsk/c1t1d0 /dev/dsk/c2t1d0 /dev/dsk/c1t2d0 /dev/dsk/c2t2d0 /dev/dsk/c1t3d0 /dev/dsk/c2t3d0 /dev/dsk/c1t4d0 /dev/dsk/c2t4d0 /dev/dsk/c1t5d0 /dev/dsk/c2t5d0 /dev/dsk/c1t8d0 /dev/dsk/c2t8d0'
lvextend -L 485632 /dev/scratchvg/scratchlv $STRIPE
Logical volume "/dev/scratchvg/scratchlv" has been successfully extended.
Volume Group configuration for /dev/scratchvg has been saved in /etc/lvmconf/scratchvg.conf
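(Sanity check on that figure: 485,632 MB is 7,588 extents x 64 MB per extent, about 474 GB, which is half of the volume group's 15,190 total extents.)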
Mounted /scratch in SAM. I have tried both of these fstab entries:
/dev/scratchvg/scratchlv /scratch vxfs rw,suid,largefiles,tmplog,mincache=tmpcache,nodatainlog 0 2
/dev/scratchvg/scratchlv /scratch vxfs rw,suid,largefiles,delaylog,datainlog 0 2
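To switch between those two option sets for a quick A/B test without editing fstab each time, something like this should work (a sketch; the options are copied from the second entry above):

umount /scratch
mount -F vxfs -o rw,suid,largefiles,delaylog,datainlog /dev/scratchvg/scratchlv /scratch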
I also tried changing fs_async from 0 to 1 in the kernel.
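The current value can be confirmed with the kernel tuning tools (a sketch; kmtune on HP-UX 11.0/11i v1, kctune on 11i v2 and later):

# query the current setting
kmtune -q fs_async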
The system uses these drives strictly as scratch space.
08-02-2007 12:03 AM
Re: Disk Performance
You don't mention anything about the hardware. Are the old and the new disks identical?
What about RPM, cache, and so on? Are there differences?
What about the time it takes to read from the disks? Does that differ, too?
Your configuration seems OK to me.
Can you repeatedly write and read some small files that you know do not occupy any space on the replaced disk?
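For instance (a sketch, assuming the STRIPE variable from your post still holds the c4/c5 device list), reading each member disk's raw device directly bypasses LVM and the filesystem entirely and should expose a single slow spindle:

# time a 100 MB raw read from every disk in the stripe set
for d in $STRIPE
do
    r=$(echo $d | sed 's,/dsk/,/rdsk/,')
    echo $r
    time dd if=$r of=/dev/null bs=256k count=400
done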
Bye
Ralf
08-02-2007 12:30 AM
Re: Disk Performance
Can you repeatedly write and read some small files that you know do not occupy any space on the replaced disk? - As the volume is striped across all drives, I can't write to just one drive. That I know of.
08-02-2007 12:37 AM
Re: Disk Performance
echo $STRIPE
/dev/dsk/c4t0d0 /dev/dsk/c5t0d0 /dev/dsk/c4t1d0 /dev/dsk/c5t1d0 /dev/dsk/c4t2d0 /dev/dsk/c5t2d0 /dev/dsk/c4t3d0 /dev/dsk/c5t3d0 /dev/dsk/c4t4d0 /dev/dsk/c5t4d0 /dev/dsk/c4t5d0 /dev/dsk/c5t5d0 /dev/dsk/c4t8d0 /dev/dsk/c5t8d0
export STRIPE='/dev/dsk/c1t0d0 /dev/dsk/c2t0d0 /dev/dsk/c1t1d0 /dev/dsk/c2t1d0 /dev/dsk/c1t2d0 /dev/dsk/c2t2d0 /dev/dsk/c1t3d0 /dev/dsk/c2t3d0 /dev/dsk/c1t4d0 /dev/dsk/c2t4d0 /dev/dsk/c1t5d0 /dev/dsk/c2t5d0 /dev/dsk/c1t8d0 /dev/dsk/c2t8d0'
So many different device files?
On which devices did you create the VG?
(see vgdisplay -v)
What type of IO module is inside the MSA30?
(DB/MI)
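If in doubt, the controller and disk inventory can be listed with ioscan to confirm which device files actually exist (standard HP-UX usage):

# list SCSI controllers and usable disks with their device files
ioscan -funC ext_bus
ioscan -funC disk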
Hope this helps!
Regards
Torsten.
08-02-2007 12:48 AM
Re: Disk Performance
So many different device files? - 14 disks in the array, so 14 device files.
On which devices did you create the VG? - It is striped across all 14.
What type of IO module is inside the MSA30? - Not sure. They are the same on both servers, and this hasn't changed since the hard drive failed.
08-02-2007 12:52 AM
Re: Disk Performance
(c1/c2 and c4/c5)
Have a look - the MI module has 4 connectors, the DB has only 2.
Hope this helps!
Regards
Torsten.
08-02-2007 01:41 AM
Re: Disk Performance
(c1/c2 and c4/c5) - I must have copied and pasted this wrong. These are not in the system:
STRIPE='/dev/dsk/c1t0d0 /dev/dsk/c2t0d0 /dev/dsk/c1t1d0 /dev/dsk/c2t1d0 /dev/dsk/c1t2d0 /dev/dsk/c2t2d0 /dev/dsk/c1t3d0 /dev/dsk/c2t3d0 /dev/dsk/c1t4d0 /dev/dsk/c2t4d0 /dev/dsk/c1t5d0 /dev/dsk/c2t5d0 /dev/dsk/c1t8d0 /dev/dsk/c2t8d0'
As for the connectors, it is a DB module, as it has only 2.
08-02-2007 01:44 AM
Re: Disk Performance
Here is the vgdisplay -v output:
VG Name /dev/vg00
VG Write Access read/write
VG Status available
Max LV 255
Cur LV 11
Open LV 11
Max PV 16
Cur PV 2
Act PV 2
Max PE per PV 4328
VGDA 4
PE Size (Mbytes) 16
Total PE 8636
Alloc PE 5290
Free PE 3346
Total PVG 0
Total Spare PVs 0
Total Spare PVs in use 0
--- Logical volumes ---
LV Name /dev/vg00/lvol1
LV Status available/syncd
LV Size (Mbytes) 304
Current LE 19
Allocated PE 38
Used PV 2
LV Name /dev/vg00/lvol2
LV Status available/syncd
LV Size (Mbytes) 4096
Current LE 256
Allocated PE 512
Used PV 2
LV Name /dev/vg00/lvol3
LV Status available/syncd
LV Size (Mbytes) 528
Current LE 33
Allocated PE 66
Used PV 2
LV Name /dev/vg00/lvol4
LV Status available/syncd
LV Size (Mbytes) 208
Current LE 13
Allocated PE 26
Used PV 2
LV Name /dev/vg00/lvol5
LV Status available/syncd
LV Size (Mbytes) 32
Current LE 2
Allocated PE 4
Used PV 2
LV Name /dev/vg00/lvol6
LV Status available/syncd
LV Size (Mbytes) 8000
Current LE 500
Allocated PE 1000
Used PV 2
LV Name /dev/vg00/lvol7
LV Status available/syncd
LV Size (Mbytes) 6336
Current LE 396
Allocated PE 792
Used PV 2
LV Name /dev/vg00/lvol8
LV Status available/syncd
LV Size (Mbytes) 7808
Current LE 488
Allocated PE 976
Used PV 2
LV Name /dev/vg00/apps
LV Status available/syncd
LV Size (Mbytes) 9008
Current LE 563
Allocated PE 563
Used PV 1
LV Name /dev/vg00/sysdata
LV Status available/syncd
LV Size (Mbytes) 5008
Current LE 313
Allocated PE 313
Used PV 1
LV Name /dev/vg00/dev_swap
LV Status available/syncd
LV Size (Mbytes) 16000
Current LE 1000
Allocated PE 1000
Used PV 1
--- Physical volumes ---
PV Name /dev/dsk/c2t1d0s2
PV Status available
Total PE 4318
Free PE 735
Autoswitch On
PV Name /dev/dsk/c3t0d0s2
PV Status available
Total PE 4318
Free PE 2611
Autoswitch On
VG Name /dev/scratchvg
VG Write Access read/write
VG Status available
Max LV 255
Cur LV 1
Open LV 1
Max PV 16
Cur PV 14
Act PV 14
Max PE per PV 17366
VGDA 28
PE Size (Mbytes) 64
Total PE 15190
Alloc PE 7588
Free PE 7602
Total PVG 0
Total Spare PVs 0
Total Spare PVs in use 0
--- Logical volumes ---
LV Name /dev/scratchvg/scratchlv
LV Status available/syncd
LV Size (Mbytes) 485632
Current LE 7588
Allocated PE 7588
Used PV 14
--- Physical volumes ---
PV Name /dev/dsk/c4t0d0
PV Status available
Total PE 1085
Free PE 543
Autoswitch On
PV Name /dev/dsk/c4t1d0
PV Status available
Total PE 1085
Free PE 543
Autoswitch On
PV Name /dev/dsk/c4t2d0
PV Status available
Total PE 1085
Free PE 543
Autoswitch On
PV Name /dev/dsk/c4t3d0
PV Status available
Total PE 1085
Free PE 543
Autoswitch On
PV Name /dev/dsk/c4t4d0
PV Status available
Total PE 1085
Free PE 543
Autoswitch On
PV Name /dev/dsk/c4t5d0
PV Status available
Total PE 1085
Free PE 543
Autoswitch On
PV Name /dev/dsk/c4t8d0
PV Status available
Total PE 1085
Free PE 543
Autoswitch On
PV Name /dev/dsk/c5t0d0
PV Status available
Total PE 1085
Free PE 543
Autoswitch On
PV Name /dev/dsk/c5t1d0
PV Status available
Total PE 1085
Free PE 543
Autoswitch On
PV Name /dev/dsk/c5t2d0
PV Status available
Total PE 1085
Free PE 543
Autoswitch On
PV Name /dev/dsk/c5t3d0
PV Status available
Total PE 1085
Free PE 543
Autoswitch On
PV Name /dev/dsk/c5t4d0
PV Status available
Total PE 1085
Free PE 543
Autoswitch On
PV Name /dev/dsk/c5t5d0
PV Status available
Total PE 1085
Free PE 543
Autoswitch On
PV Name /dev/dsk/c5t8d0
PV Status available
Total PE 1085
Free PE 543
Autoswitch On
08-02-2007 03:35 AM
Re: DIsk Performance
The problem with measuring i/o like this and calling it disk i/o is that disk i/o is but one component. Trying to do this with cooked files may tell you more about buffer cache and/or mount options than anything else. The first thing that I would do is replace your cooked output file with a raw (character) device. An LVM raw device will be close enough for our purposes in that the LVM abstraction layer is all but zero. You should replace your 8gbfile with something like /dev/vg20/rlvol1. If you then see significant i/o performance differnces between the two arrays (and make several runs on each to get a meaningful mean value), you now have far greater confidence that the differences reside in the array although you do need to make sure that the scsi queue_depth is set the same for your disk devices. (e.g. scsictl -a /dev/rdsk/c1t6d0). Your test is using sequential i/o so the queue_depth could have profound effects if different. Man scsictl for details.