03-01-2005 02:12 AM
Strange behavior of LUNs in VA7410
I have the following LUNs, created on a VA7410:
/dev/dsk/c8t0d0
/dev/dsk/c9t0d1
/dev/dsk/c8t0d6
/dev/dsk/c9t0d7
/dev/dsk/c8t1d0
/dev/dsk/c9t1d1
/dev/dsk/c9t0d0
/dev/dsk/c8t0d1
/dev/dsk/c9t0d6
/dev/dsk/c8t0d7
/dev/dsk/c9t1d0
/dev/dsk/c8t1d1
There is a VG created on them:
VG Name /dev/vg_raw
VG Write Access read/write
VG Status available
Max LV 50
Cur LV 30
Open LV 30
Max PV 16
Cur PV 6
Act PV 6
Max PE per PV 14080
VGDA 12
PE Size (Mbytes) 4
Total PE 84462
Alloc PE 70262
Free PE 14200
Total PVG 0
Total Spare PVs 0
Total Spare PVs in use 0
...
--- Physical volumes ---
PV Name /dev/dsk/c8t0d0
PV Name /dev/dsk/c9t0d0 Alternate Link
..........
PV Name /dev/dsk/c8t1d0
PV Name /dev/dsk/c9t1d0 Alternate Link
PV Status available
Total PE 14077
Free PE 7100
Autoswitch On
PV Name /dev/dsk/c9t1d1
PV Name /dev/dsk/c8t1d1 Alternate Link
PV Status available
Total PE 14077
Free PE 7100
Autoswitch On
During any r/w operations on volumes of this VG, I see an even (equal) distribution of the r+w/s and blks/s values between the LUNs that hold this VG (as shown by sar). But the %busy and avserv values are different:
device %busy avque r+w/s blks/s avwait avserv
17:00:33 c8t1d0 70.10 1.04 597 4757 5.27 2.13
c9t1d1 97.01 1.11 578 4601 5.40 5.46
17:00:36 c8t1d0 64.88 0.66 886 7080 5.16 1.25
c9t1d1 99.67 0.74 882 7046 5.48 4.21
17:00:39 c8t1d0 51.16 1.49 428 3409 5.23 1.90
c9t1d1 98.67 1.57 441 3518 6.14 7.71
All volumes on this VG were created striped over 2 LUNs. Why are %busy and avserv so different on neighboring LUNs, even though the number of blocks per second is almost the same? Is this a problem?
Thank you
PS: all neighboring LUNs have equal size.
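For reference, striped LVs like the ones described above are normally created and checked with standard HP-UX LVM commands. A minimal sketch follows; the LV name lvol1, the size and the stripe size are assumptions for illustration, not values taken from this post:
# Create an LV striped across 2 PVs with a 64 KB stripe size (illustrative values)
lvcreate -i 2 -I 64 -L 1024 -n lvol1 /dev/vg_raw
# Show how the LV's extents are distributed across the PVs; an uneven
# per-PV extent count would explain uneven load on the underlying LUNs
lvdisplay -v /dev/vg_raw/lvol1 | more
# Per-device I/O statistics like the table above (3-second samples, 10 reports)
sar -d 3 10
If lvdisplay -v shows more extents of the busy LVs on one PV than on its neighbor, the imbalance is coming from the host-side layout rather than from the array.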
3 REPLIES
03-02-2005 01:57 AM
Re: Strange behavior of LUNs in VA7410
Is the number of disks in each Redundancy Group the same? Also, are both RGs of the VA equally utilised?
-Bonny-
03-02-2005 02:52 AM
Re: Strange behavior of LUNs in VA7410
Well, we have 24 and 21 disks in the two RGs. That could be a source of this behavior, but it seems that it isn't.
device %busy avque r+w/s blks/s avwait avserv
c8t1d0 43.67 1.40 1422 10951 5.21 0.66
c9t1d1 89.33 1.63 1372 10934 6.18 2.60
24 and 21 disks differ by only 13-15%, but %busy in this case differs by a factor of two (43 vs 89).
In CommandView, the IO rates (total IO-rate graph) for both of these LUNs are almost the same.
I don't know how to check utilization by RG.
It seems that sar is simply not a very accurate tool in this case.
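A crude host-side check (a sketch, not an array-side RG utilisation report) is to watch only the two suspect devices over a longer window and compare how avserv trends; a persistently higher avserv on one path points at the array back end rather than at sar itself:
# 5-second samples, 60 reports, filtered to the two LUNs being compared
sar -d 5 60 | egrep 'c8t1d0|c9t1d1'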
03-02-2005 03:05 AM
Re: Strange behavior of LUNs in VA7410
How have you striped over the 6 LUNs & 6 alternate paths? Remember that the VA7410 does internal striping, but the OS does not know about this, so it is perfectly possible for more of your LVs to be on c8t1d0.
The other thing (assuming the striping is even) is that c9t1d0 is MUCH slower than c8t1d0 (by a factor of 3-4). My guess is that RG2 (say) has had a disk failure and:
o data is converting to RAID5DP,
o a "make space" command is running, or
o there is a lot of data in RAID5DP.
Any of the above will hammer performance for that RG. The first stop would be to run:
armdsp -a
From this you may also look into running
armlog ...
prnlog ...
regards
Tim
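For anyone following the same path, armdsp is part of the Command View SDM host tools for the VA series. A minimal, hedged sequence (the array alias va7410_1 and the -i listing option are assumptions; check armdsp usage on your host) might look like:
# List the arrays visible to this host and their aliases (assumed usage)
armdsp -i
# Dump full status for one array; look for failed disks, space used in the
# RAID5DP (back-end) area, and any rebuild or "make space" activity
armdsp -a va7410_1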