Community Home > Storage > Entry Storage Systems > Disk Enclosures > EVA6000 Performance
01-23-2006 03:52 AM
I am fairly new to EVAs (as you will see from my last post on the subject) and we are having performance problems.
We have migrated our main Oracle database from an XP512 onto an EVA6000.
On the XP we had 64 LUNs; after advice from this forum and from other people we migrated to 4 LUNs on the EVA, 2 on each controller, routed through to 4 fibre cards in the host (a Superdome), using PV failover, not Secure Path or anything fancy...
We have a couple of problems:
1) While a Business Copy snap clone is running, all disk operations on the source or target areas are really, really slow (one job that takes 15 minutes when run on its own takes 1 hour and 20 minutes while a snap clone is running).
2) Certain operations on the database that ran fine when the disks were on the XP now run slowly on the EVA when a high volume of them run concurrently. Nothing I can see in the Oracle statistics or in Unix performance monitoring (Glance) shows any performance problem.
Can anyone shed any light on this please? I know this is a bit of a woolly description so if you need any more information then please ask.
Thanks in advance
Chris
01-23-2006 05:46 AM
Re: EVA6000 Performance
Please post results of
#sar -d 5 10
when:
1. business copy snap clone is running
2. business copy snap clone is running and the other job is running (this one: the job that takes 15 minutes when run on its own but 1 hour and 20 minutes when a snap clone is running)
3. system is idle,
and result of:
#lvdisplay /dev/vg_name/lvol_name
#fstyp -v /dev/vg_name/lvol_name
#fcmsutil /dev/td_number_of_HBA
and the size of the 4 LUNs?
Regards
LiPEnS
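The requests above can be wrapped in a small script so each run lands in one timestamped log and idle/busy runs can be compared side by side. This is only a sketch: the VG/LV and HBA device names below are placeholders for this system's real ones, and every command is guarded, since sar, lvdisplay, fstyp, and fcmsutil only exist on HP-UX.

```shell
# Collect the requested diagnostics into one timestamped log per run.
# Device/volume names are placeholders; guarded because the commands are HP-UX-only.
LOG="eva_diag_$(date +%Y%m%d_%H%M%S).log"
for cmd in "sar -d 5 10" \
           "lvdisplay /dev/vgalpha/alp01" \
           "fstyp -v /dev/vgalpha/alp01" \
           "fcmsutil /dev/td0"; do
  echo "=== $cmd ===" >> "$LOG"
  $cmd >> "$LOG" 2>&1 || echo "(command not available on this host)" >> "$LOG"
done
echo "wrote $LOG"
```

Run it once while the snap clone is active, once during the slow job, and once idle, then diff the logs.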
01-23-2006 06:00 AM
Re: EVA6000 Performance
(Correction: the commands are fstyp and fcmsutil.)
01-23-2006 11:19 AM
Re: EVA6000 Performance
We went from a XP512 to an EVA5k. We do an awful lot of snap clones (over 2TB currently for 2 databases).
We start our deletes of the 2 TB database at 2 PM, and the deletes are usually complete after 6 PM. The same process on the XP512 took an hour and a half to 2 hours.
I thought that the EVA6K was equipped with 3 phase snap creation. You might want to look into that.
01-23-2006 06:22 PM
Re: EVA6000 Performance
Is that when you create the snapclones, or when you delete them?
Please tell us!
Note: The creation should take only seconds!
And, you should almost never need to DELETE a clone; you can just EMPTY and reuse them!!
This also should only take between seconds and minutes!
Cheers
Peter
01-23-2006 06:48 PM
Re: EVA6000 Performance
Each of the LUNs is 300GB
When the snap clone runs again tonight I will get you the results from sar... I can't get them for the job on its own with no snap clone, as it is a live service and I can't run the job standalone.
The idle figures are...
HP-UX wlux01 B.11.11 U 9000/800 01/24/06
07:32:30 device %busy avque r+w/s blks/s avwait avserv
07:32:35 c0t13d0 8.00 0.50 11 48 4.11 14.38
c0t3d0 2.20 0.50 2 27 3.56 10.85
c3t13d0 6.00 0.50 9 38 4.39 10.55
c3t3d0 1.20 0.50 2 24 3.13 9.88
c26t8d0 28.40 0.50 64 1030 5.26 4.44
c28t8d1 27.40 0.50 103 1654 4.94 2.70
c31t8d2 22.20 0.50 104 1664 4.90 2.13
c33t8d3 11.80 0.50 81 1296 4.82 1.49
07:32:40 c0t13d0 3.80 0.50 5 30 4.43 13.16
c0t3d0 3.20 0.50 4 50 4.87 9.45
c3t13d0 2.20 0.50 4 24 4.65 11.25
c3t3d0 2.20 0.50 3 48 4.56 7.34
c26t8d0 28.80 0.50 197 3157 4.93 1.53
c28t8d1 21.00 0.50 146 2342 4.78 1.58
c31t8d2 20.40 0.50 112 1795 4.99 1.77
c33t8d3 19.00 0.50 119 1904 5.03 1.60
07:32:45 c0t13d0 3.80 0.50 5 35 3.92 14.34
c0t3d0 2.00 0.50 3 34 3.17 11.35
c3t13d0 2.00 0.50 3 26 3.94 11.48
c3t3d0 0.80 0.50 2 32 3.44 6.17
c26t8d0 21.80 0.50 117 1872 5.07 2.02
c28t8d1 21.60 0.50 93 1488 4.71 2.32
c31t8d2 23.60 0.50 97 1552 4.82 2.42
c33t8d3 25.80 0.50 77 1229 4.54 3.39
07:32:50 c0t13d0 3.60 0.50 5 30 3.22 14.49
c0t3d0 2.00 0.50 3 31 4.02 10.53
c3t13d0 2.20 0.50 4 24 3.30 13.80
c3t3d0 1.40 0.50 2 29 4.18 8.84
c26t8d0 24.00 0.50 167 2653 5.06 1.49
c28t8d1 28.40 0.50 128 2042 4.65 2.27
c31t8d2 18.60 0.50 123 1965 5.17 1.63
c33t8d3 23.80 0.50 178 2842 5.05 1.38
07:32:55 c0t13d0 6.19 0.50 9 57 4.06 17.14
c0t3d0 2.20 0.50 3 37 3.82 9.78
c3t13d0 3.59 0.50 6 46 4.34 10.33
c3t3d0 1.60 0.50 3 36 3.45 7.97
c26t8d0 24.15 0.50 79 1260 5.52 3.02
c28t8d1 29.74 0.50 81 1290 4.95 3.85
c31t8d2 26.95 0.50 67 1073 5.09 4.16
c33t8d3 11.78 0.50 50 802 4.67 2.44
07:33:00 c0t13d0 2.20 0.50 3 23 3.93 11.49
c0t3d0 1.60 0.50 2 34 3.25 8.60
c3t13d0 1.80 0.50 2 18 4.13 11.16
c3t3d0 1.40 0.50 2 34 2.86 8.85
c33t0d4 0.20 0.50 0 6 5.66 0.37
c26t8d0 44.80 0.50 64 1022 4.93 6.82
c28t8d1 32.40 0.50 46 730 4.60 7.56
c31t8d2 19.00 0.50 26 410 5.30 7.33
c33t8d3 3.20 0.50 3 42 5.87 10.82
07:33:05 c0t13d0 6.41 0.50 9 50 3.94 15.77
c0t3d0 23.25 0.50 42 105 5.15 6.81
c3t13d0 4.41 0.50 6 39 4.10 11.64
c3t3d0 20.04 0.50 42 103 5.15 4.69
c26t8d0 26.25 0.50 141 2264 5.15 1.80
c28t8d1 15.23 0.50 131 2097 5.01 1.15
c31t8d2 23.65 0.50 163 2607 5.18 1.37
c33t8d3 31.66 0.50 172 2748 5.28 1.85
07:33:10 c0t13d0 1.80 0.50 2 13 4.42 12.54
c0t3d0 0.80 0.50 3 34 3.14 6.59
c3t13d0 1.20 0.50 1 10 4.19 13.17
c3t3d0 0.80 0.50 2 31 3.56 5.96
c26t8d0 22.55 0.50 149 2374 4.91 1.58
c28t8d1 16.97 0.50 153 2456 5.16 1.09
c31t8d2 34.33 0.50 143 2283 4.83 2.42
c33t8d3 21.16 0.50 144 2296 5.07 1.45
07:33:15 c0t13d0 1.60 0.50 2 14 3.48 9.65
c0t3d0 1.00 0.50 1 19 2.96 10.42
c3t13d0 0.60 0.50 1 10 4.91 7.22
c3t3d0 0.60 0.50 1 19 2.97 7.55
c26t8d0 32.26 0.50 204 3259 5.07 1.62
c28t8d1 15.83 0.50 143 2296 4.93 1.10
c31t8d2 26.85 0.50 139 2232 4.99 1.94
c33t8d3 20.24 0.50 167 2671 4.93 1.22
07:33:20 c0t13d0 1.40 0.50 2 15 4.47 9.07
c0t3d0 2.20 0.50 3 39 2.50 8.81
c3t13d0 1.00 0.50 1 12 4.67 8.87
c3t3d0 1.60 0.50 3 37 2.63 7.71
c26t8d0 14.60 0.50 29 461 4.87 5.56
c28t8d1 19.40 0.50 47 758 5.27 3.89
c31t8d2 34.40 0.50 58 922 5.17 5.88
c33t8d3 30.80 0.50 40 643 5.01 7.50
Average c0t13d0 3.88 0.50 5 32 4.00 14.27
Average c0t3d0 4.04 0.50 7 41 4.57 7.77
Average c3t13d0 2.50 0.50 4 24 4.22 11.11
Average c3t3d0 3.16 0.50 6 39 4.60 5.69
Average c26t8d0 26.76 0.50 121 1935 5.06 2.25
Average c28t8d1 22.80 0.50 107 1715 4.90 2.18
Average c31t8d2 25.00 0.50 103 1650 5.02 2.42
Average c33t8d3 19.92 0.50 103 1647 5.00 1.94
Average c33t0d4 0.02 0.50 0 1 5.66 0.37
lvdisplay gives us:
--- Logical volumes ---
LV Name /dev/vgalpha/alp01
VG Name /dev/vgalpha
LV Permission read/write
LV Status available/syncd
Mirror copies 0
Consistency Recovery MWC
Schedule striped
LV Size (Mbytes) 76288
Current LE 9536
Allocated PE 9536
Stripes 4
Stripe Size (Kbytes) 1024
Bad block on
Allocation strict
IO Timeout (Seconds) default
--- Logical volumes ---
LV Name /dev/vgalpha/alp02
VG Name /dev/vgalpha
LV Permission read/write
LV Status available/syncd
Mirror copies 0
Consistency Recovery MWC
Schedule striped
LV Size (Mbytes) 76288
Current LE 9536
Allocated PE 9536
Stripes 4
Stripe Size (Kbytes) 1024
Bad block on
Allocation strict
IO Timeout (Seconds) default
And so on… there are 16 file systems.
fstyp shows:
wlux01 / # fstyp -v /dev/vgalpha/alp01
vxfs
version: 4
f_bsize: 8192
f_frsize: 8192
f_blocks: 9764864
f_bfree: 3551250
f_bavail: 3551245
f_files: 200
f_ffree: 168
f_favail: 168
f_fsid: 1074790401
f_basetype: vxfs
f_namemax: 254
f_magic: a501fcf5
f_featurebits: 0
f_flag: 16
f_fsindex: 7
f_size: 9764864
wlux01 / # fstyp -v /dev/vgalpha/alp02
vxfs
version: 4
f_bsize: 8192
f_frsize: 8192
f_blocks: 9764864
f_bfree: 3854234
f_bavail: 3854229
f_files: 200
f_ffree: 168
f_favail: 168
f_fsid: 1074790402
f_basetype: vxfs
f_namemax: 254
f_magic: a501fcf5
f_featurebits: 0
f_flag: 16
f_fsindex: 7
f_size: 9764864
and so on for the 16 file systems
fcmsutil for the 4 cards shows:
Vendor ID is = 0x001077
Device ID is = 0x002312
PCI Sub-system Vendor ID is = 0x00103c
PCI Sub-system ID is = 0x0012ba
PCI Mode = PCI 66 MHz
ISP Code version = 3.3.150
ISP Chip version = 3
Topology = PTTOPT_FABRIC
Link Speed = 2Gb
Local N_Port_id is = 0x0c0400
Previous N_Port_id is = None
N_Port Node World Wide Name = 0x50060b0000325f3b
N_Port Port World Wide Name = 0x50060b0000325f3a
Switch Port World Wide Name = 0x2001000dec06e4c0
Switch Node World Wide Name = 0x2001000dec06e4c1
Driver state = ONLINE
Hardware Path is = 4/0/4/0/1
Maximum Frame Size = 2048
Driver-Firmware Dump Available = NO
Driver-Firmware Dump Timestamp = N/A
Driver Version = @(#) libfcd.a HP Fibre Channel ISP 23xx Driver B.11.11.04 /ux/kern/kisu/FCD/src/common/wsio/fcd_init.c:Sep 23 2004,18:27:37
Vendor ID is = 0x001077
Device ID is = 0x002312
PCI Sub-system Vendor ID is = 0x00103c
PCI Sub-system ID is = 0x0012ba
PCI Mode = PCI 66 MHz
ISP Code version = 3.3.150
ISP Chip version = 3
Topology = PTTOPT_FABRIC
Link Speed = 2Gb
Local N_Port_id is = 0x0c0400
Previous N_Port_id is = None
N_Port Node World Wide Name = 0x50060b00002df16d
N_Port Port World Wide Name = 0x50060b00002df16c
Switch Port World Wide Name = 0x2002000dec06e480
Switch Node World Wide Name = 0x2001000dec06e481
Driver state = ONLINE
Hardware Path is = 6/0/4/0/0
Maximum Frame Size = 2048
Driver-Firmware Dump Available = NO
Driver-Firmware Dump Timestamp = N/A
Driver Version = @(#) libfcd.a HP Fibre Channel ISP 23xx Driver B.11.11.04 /ux/kern/kisu/FCD/src/common/wsio/fcd_init.c:Sep 23 2004,18:27:37
Vendor ID is = 0x001077
Device ID is = 0x002312
PCI Sub-system Vendor ID is = 0x00103c
PCI Sub-system ID is = 0x0012ba
PCI Mode = PCI 66 MHz
ISP Code version = 3.3.150
ISP Chip version = 3
Topology = PTTOPT_FABRIC
Link Speed = 2Gb
Local N_Port_id is = 0x0c0300
Previous N_Port_id is = None
N_Port Node World Wide Name = 0x50060b00002df16f
N_Port Port World Wide Name = 0x50060b00002df16e
Switch Port World Wide Name = 0x2002000dec06e4c0
Switch Node World Wide Name = 0x2001000dec06e4c1
Driver state = ONLINE
Hardware Path is = 6/0/4/0/1
Maximum Frame Size = 2048
Driver-Firmware Dump Available = NO
Driver-Firmware Dump Timestamp = N/A
Driver Version = @(#) libfcd.a HP Fibre Channel ISP 23xx Driver B.11.11.04 /ux/kern/kisu/FCD/src/common/wsio/fcd_init.c:Sep 23 2004,18:27:37
Vendor ID is = 0x001077
Device ID is = 0x002312
PCI Sub-system Vendor ID is = 0x00103c
PCI Sub-system ID is = 0x0012ba
PCI Mode = PCI 66 MHz
ISP Code version = 3.3.150
ISP Chip version = 3
Topology = PTTOPT_FABRIC
Link Speed = 2Gb
Local N_Port_id is = 0x0c0500
Previous N_Port_id is = None
N_Port Node World Wide Name = 0x50060b0000325f39
N_Port Port World Wide Name = 0x50060b0000325f38
Switch Port World Wide Name = 0x2001000dec06e480
Switch Node World Wide Name = 0x2001000dec06e481
Driver state = ONLINE
Hardware Path is = 4/0/4/0/0
Maximum Frame Size = 2048
Driver-Firmware Dump Available = NO
Driver-Firmware Dump Timestamp = N/A
Driver Version = @(#) libfcd.a HP Fibre Channel ISP 23xx Driver B.11.11.04 /ux/kern/kisu/FCD/src/common/wsio/fcd_init.c:Sep 23 2004,18:
Thanks again for your help.
Chris.
01-23-2006 06:51 PM
Re: EVA6000 Performance
We don't delete our snap clones; we just convert them to containers, which takes a few seconds.
The problem we see is that after the snap clone has been created and presented to a host, it continues to build in the background, and that process seems to get more time allocated to it than the host's requests. It takes about 2 hours.
Chris.
01-23-2006 06:55 PM
Re: EVA6000 Performance
You are right, we don't delete the snap clones; we just convert them to containers.
Creating the snap clone to the point where we can present it to a host does take only seconds, but it then carries on building the snap clone in the background, and the copy process seems to get a higher priority than any host requests.
Chris.
01-24-2006 05:32 AM
Re: EVA6000 Performance
sar -d shows that the local disks (cXt13d0 and cXt3d0) behave correctly, with avserv > avwait. For the LUNs on the EVA, avserv < avwait, which shows there is a bottleneck on those disks.
Please post more info:
#scsictl -a /dev/rdsk/cXtYdZ_luns_on_eva
#vxtunefs -p /mount_point_of_eva_filesystems
#mount -v
#swlist | grep -i OnLineJFS
Is the database on raw devices or file systems?
Do the 4 LUNs belong to one VG, with striping?
Which level of protection/RAID on the EVA?
How many disk groups on the EVA?
Which version of Command View EVA?
Regards
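LiPEnS's avwait-versus-avserv check can be automated. The following is a small, illustrative awk filter over the "Average" lines of sar -d output (column order as in the listings above); following his heuristic, it flags devices whose average wait exceeds their average service time. The sample lines in the here-document are taken from the idle run posted earlier.

```shell
# Flag devices where avwait ($7) exceeds avserv ($8) in sar -d "Average" lines.
awk '$1 == "Average" && $7 > $8 {
  print $2, "waits ("$7"ms) exceed service ("$8"ms)"
}' <<'EOF'
Average c26t8d0 26.76 0.50 121 1935 5.06 2.25
Average c0t13d0 3.88 0.50 5 32 4.00 14.27
EOF
# prints: c26t8d0 waits (5.06ms) exceed service (2.25ms)
```

In practice you would pipe the saved sar log through the same filter instead of a here-document.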
01-24-2006 08:37 PM
Re: EVA6000 Performance
Here is the sar information you wanted...
First, with a Business Copy running in the background on the EVA and minimal work on the host:
HP-UX wlux01 B.11.11 U 9000/800 01/24/06
18:29:13 device %busy avque r+w/s blks/s avwait avserv
18:29:18 c0t13d0 3.40 0.50 5 28 4.26 10.12
c0t3d0 2.00 0.50 2 34 3.18 11.64
c3t13d0 2.60 0.50 3 21 4.92 9.62
c3t3d0 1.40 0.50 2 34 3.32 11.01
c33t8d3 2.60 0.50 8 128 5.40 3.03
18:29:23 c0t13d0 3.39 0.50 5 21 4.22 10.45
c0t3d0 3.39 0.50 4 40 4.46 10.11
c3t13d0 2.99 0.50 4 17 4.31 9.37
c3t3d0 2.79 0.50 3 36 4.75 9.53
c33t8d3 0.20 0.50 8 128 4.96 0.48
c32t8d2 0.20 0.50 0 3 5.87 13.97
18:29:28 c0t13d0 3.40 0.50 4 22 4.37 11.30
c0t3d0 0.80 0.50 1 16 3.07 12.47
c3t13d0 2.40 0.50 3 17 4.69 10.62
c3t3d0 0.60 0.50 1 16 3.08 7.11
c33t8d3 0.20 0.50 7 115 4.29 0.30
c32t8d2 0.20 0.50 0 3 7.51 12.39
18:29:33 c0t13d0 5.20 0.50 9 46 4.56 12.89
c0t3d0 2.80 0.50 4 38 4.04 8.54
c3t13d0 4.00 0.50 7 40 4.77 10.87
c3t3d0 2.00 0.50 3 36 3.67 9.13
18:29:38 c0t13d0 4.21 0.50 5 32 4.14 12.57
c0t3d0 4.41 0.50 5 44 3.68 12.94
c3t13d0 2.61 0.50 4 26 4.04 13.12
c3t3d0 1.80 0.50 4 38 4.01 9.35
c27t0d2 0.20 0.50 1 18 2.88 1.66
c33t8d3 0.40 0.50 8 131 5.02 0.30
c32t8d2 0.20 0.50 0 3 3.07 10.78
18:29:43 c0t13d0 4.19 0.50 5 26 4.25 10.99
c0t3d0 1.40 0.50 2 34 3.18 10.35
c3t13d0 2.99 0.50 4 22 4.13 10.30
c3t3d0 1.20 0.50 2 34 3.32 8.32
c26t8d0 0.20 0.50 0 6 9.59 0.24
18:29:48 c0t13d0 2.61 0.50 4 20 3.18 12.30
c0t3d0 1.20 0.50 2 34 2.68 8.30
c3t13d0 2.21 0.50 3 16 3.35 12.61
c3t3d0 1.20 0.50 2 34 2.69 8.47
c33t8d3 0.20 0.50 8 138 5.01 0.31
c32t8d2 0.60 0.50 0 3 5.84 26.91
18:29:53 c0t13d0 7.19 0.50 8 32 5.20 9.66
c0t3d0 3.79 0.50 5 32 3.84 11.52
c3t13d0 4.79 0.50 6 22 5.53 7.34
c3t3d0 2.00 0.50 2 21 4.56 10.52
c33t8d3 1.40 0.50 8 141 5.47 1.62
c32t8d2 0.60 0.50 0 3 9.19 26.36
18:29:58 c0t13d0 1.60 0.50 2 10 4.44 9.98
c0t3d0 2.59 0.50 3 40 3.36 10.53
c3t13d0 1.40 0.50 1 8 4.61 10.94
c3t3d0 1.60 0.50 3 37 3.01 10.25
c33t8d3 0.40 0.50 8 131 5.73 0.31
18:30:03 c0t13d0 43.20 0.50 73 507 5.20 43.96
c0t3d0 47.20 0.50 68 322 5.11 20.27
c3t13d0 34.80 0.50 65 469 5.16 43.06
c3t3d0 33.60 0.50 62 295 5.18 12.72
c26t0d1 0.20 0.50 11 131 4.54 0.25
c27t0d2 0.60 0.50 2 29 4.18 3.06
c26t8d0 0.20 0.50 1 11 5.78 0.22
c33t8d3 1.00 0.50 10 170 5.10 0.94
c32t8d2 0.40 0.50 0 3 0.08 22.40
Average c0t13d0 7.84 0.50 12 74 4.89 31.24
Average c0t3d0 6.96 0.50 10 63 4.68 17.43
Average c3t13d0 6.08 0.50 10 66 4.96 31.63
Average c3t3d0 4.82 0.50 8 58 4.79 11.83
Average c33t8d3 0.64 0.50 6 108 5.13 0.91
Average c32t8d2 0.22 0.50 0 2 5.26 18.80
Average c27t0d2 0.08 0.50 0 5 3.68 2.52
Average c26t8d0 0.04 0.50 0 2 6.87 0.23
Average c26t0d1 0.02 0.50 1 13 4.54 0.25
01-24-2006 08:38 PM
Re: EVA6000 Performance
Second, here is the run with a Business Copy running in the background and the long-running job also running:
HP-UX wlux01 B.11.11 U 9000/800 01/24/06
19:01:39 device %busy avque r+w/s blks/s avwait avserv
19:01:44 c0t13d0 7.39 0.50 4 20 3.71 23.44
c0t3d0 1.20 0.50 3 33 2.64 6.66
c3t13d0 6.39 0.50 4 16 4.00 22.97
c3t3d0 1.20 0.50 2 32 2.84 6.80
c26t0d1 90.62 0.50 234 6111 5.21 16.19
c32t8d2 96.01 0.50 16 255 4.87 62.15
19:01:49 c0t13d0 2.79 0.50 4 21 4.25 12.42
c0t3d0 1.39 0.50 2 27 3.30 8.15
c3t13d0 2.59 0.50 3 17 4.63 14.44
c3t3d0 1.00 0.50 2 26 3.49 6.73
c26t0d1 98.80 0.50 82 2130 4.77 56.78
c27t0d2 0.20 0.50 1 11 4.80 0.32
c26t8d0 6.37 0.50 40 645 4.92 1.47
c33t8d3 45.82 0.50 27 430 5.11 16.78
c27t8d1 11.95 0.50 27 427 5.51 4.65
c32t8d2 37.05 0.50 12 194 5.07 34.39
19:01:54 c0t13d0 4.01 0.50 7 30 5.37 6.69
c0t3d0 2.00 0.50 4 44 2.85 9.46
c3t13d0 3.61 0.50 6 28 5.42 7.25
c3t3d0 1.60 0.50 3 41 2.82 7.21
c26t0d1 10.82 0.50 37 1008 5.10 17.08
c26t8d0 0.80 0.50 11 147 3.71 0.98
c32t8d2 99.80 0.50 12 196 4.98 80.88
19:01:59 c0t13d0 5.40 0.50 11 96 4.85 10.55
c0t3d0 1.40 0.50 3 30 2.66 8.05
c3t13d0 4.00 0.50 7 50 4.81 10.58
c3t3d0 1.00 0.50 2 29 2.76 7.42
c26t8d0 0.60 0.50 4 59 4.99 1.29
c33t8d3 0.80 0.50 1 10 4.80 9.92
c27t8d1 2.00 0.50 3 54 4.97 6.45
c32t8d2 98.80 0.50 10 160 6.25 102.85
19:02:04 c0t13d0 6.39 0.50 9 57 5.10 11.77
c0t3d0 19.96 0.50 34 109 4.99 7.48
c3t13d0 4.19 0.50 5 44 5.62 10.34
c3t3d0 14.97 0.50 33 103 5.06 5.66
c26t8d0 8.98 0.50 29 458 4.09 3.18
c33t8d3 28.14 0.50 25 406 5.02 11.25
c27t8d1 59.48 0.50 31 495 5.00 19.98
c32t8d2 9.98 0.50 3 45 3.58 27.18
19:02:09 c0t13d0 2.81 0.50 4 25 4.11 10.73
c0t3d0 1.80 0.50 4 48 3.42 7.34
c3t13d0 2.20 0.50 3 19 4.22 11.46
c3t3d0 1.40 0.50 3 45 3.45 6.48
c26t8d0 0.20 0.50 3 55 5.75 0.55
c33t8d3 3.41 0.50 0 6 6.48 80.76
c27t8d1 27.66 0.50 13 215 5.45 23.59
c32t8d2 87.58 0.50 6 103 6.22 137.43
19:02:14 c0t13d0 3.59 0.50 5 32 4.43 14.14
c0t3d0 0.20 0.50 1 16 2.81 5.25
c3t13d0 2.59 0.50 4 24 5.03 9.00
c3t3d0 0.20 0.50 1 16 2.82 5.87
c26t8d0 1.20 0.50 1 8 6.07 18.58
c32t8d2 99.40 0.50 12 195 4.84 78.62
19:02:19 c0t13d0 6.00 0.50 9 66 3.70 20.91
c0t3d0 1.40 0.50 3 37 3.39 6.61
c3t13d0 4.20 0.50 7 58 3.66 16.29
c3t3d0 0.80 0.50 2 35 3.32 7.11
c27t0d2 0.20 0.50 1 19 5.28 0.26
c26t8d0 9.60 0.50 13 214 5.50 6.96
c33t8d3 18.20 0.50 26 410 4.55 7.22
c27t8d1 0.40 0.50 9 154 4.84 0.25
c32t8d2 69.20 0.50 9 150 4.72 79.88
19:02:24 c0t13d0 3.60 0.50 5 29 3.52 11.05
c0t3d0 0.60 0.50 2 32 2.55 6.16
c3t13d0 3.20 0.50 4 22 3.74 13.03
c3t3d0 0.40 0.50 2 32 2.57 5.23
c26t8d0 13.00 0.50 13 211 4.80 10.38
c27t8d1 52.60 0.50 34 557 4.81 15.66
c32t8d2 31.20 0.50 3 42 7.09 118.40
19:02:29 c0t13d0 6.81 0.50 10 88 4.09 15.09
c0t3d0 2.00 0.50 3 25 5.11 12.18
c3t13d0 4.01 0.50 7 77 4.60 12.56
c3t3d0 1.00 0.50 2 22 4.95 7.60
c27t0d2 0.20 0.50 1 10 4.15 0.27
c27t8d1 1.60 0.50 7 119 3.78 2.60
c32t8d2 100.00 0.50 2 38 5.72 411.39
Average c0t13d0 4.88 0.50 7 46 4.38 13.60
Average c0t3d0 3.20 0.50 6 40 4.29 7.71
Average c3t13d0 3.70 0.50 5 35 4.61 12.47
Average c3t3d0 2.36 0.50 5 38 4.37 6.09
Average c26t0d1 20.07 0.50 35 926 5.10 25.69
Average c32t8d2 72.89 0.50 9 138 5.20 85.73
Average c27t0d2 0.06 0.50 0 4 4.87 0.28
Average c26t8d0 4.08 0.50 11 180 4.69 3.58
Average c33t8d3 9.66 0.50 8 126 4.91 12.18
Average c27t8d1 15.57 0.50 12 202 5.02 13.05
01-24-2006 09:09 PM
Re: EVA6000 Performance
SCSICTL:
/dev/rdsk/c26t8d0
immediate_report = 0; queue_depth = 128
/dev/rdsk/c33t8d3
immediate_report = 0; queue_depth = 128
/dev/rdsk/c27t8d1
immediate_report = 0; queue_depth = 128
/dev/rdsk/c32t8d2
immediate_report = 0; queue_depth = 128
wlux01 / #
VXTUNEFS:
Filesystem i/o parameters for /data/ALPHA/dbs01
read_pref_io = 65536
read_nstream = 1
read_unit_io = 65536
write_pref_io = 65536
write_nstream = 1
write_unit_io = 65536
pref_strength = 10
buf_breakup_size = 131072
discovered_direct_iosz = 262144
max_direct_iosz = 1048576
default_indir_size = 8192
qio_cache_enable = 0
max_diskq = 1048576
initial_extent_size = 1
max_seqio_extent_size = 2048
max_buf_data_size = 8192
Filesystem i/o parameters for /data/ALPHA/dbs02
read_pref_io = 65536
read_nstream = 1
read_unit_io = 65536
write_pref_io = 65536
write_nstream = 1
write_unit_io = 65536
pref_strength = 10
buf_breakup_size = 131072
discovered_direct_iosz = 262144
max_direct_iosz = 1048576
default_indir_size = 8192
qio_cache_enable = 0
max_diskq = 1048576
initial_extent_size = 1
max_seqio_extent_size = 2048
max_buf_data_size = 8192
and so on up to 16 FS
MOUNT -V (for the database file systems)
/dev/vgalpha/alp01 on /data/ALPHA/dbs01 type vxfs nosuid,delaylog,convosync=direct on Wed Jan 25 01:04:36 2006
/dev/vgalpha/alp02 on /data/ALPHA/dbs02 type vxfs nosuid,delaylog,convosync=direct on Wed Jan 25 01:04:41 2006
/dev/vgalpha/alp03 on /data/ALPHA/dbs03 type vxfs nosuid,delaylog,convosync=direct on Wed Jan 25 01:04:46 2006
/dev/vgalpha/alp04 on /data/ALPHA/dbs04 type vxfs nosuid,delaylog,convosync=direct on Wed Jan 25 01:04:49 2006
/dev/vgalpha/alp05 on /data/ALPHA/dbs05 type vxfs nosuid,delaylog,convosync=direct on Wed Jan 25 01:04:51 2006
/dev/vgalpha/alp06 on /data/ALPHA/dbs06 type vxfs nosuid,delaylog,convosync=direct on Wed Jan 25 01:04:54 2006
/dev/vgalpha/alp07 on /data/ALPHA/dbs07 type vxfs nosuid,delaylog,convosync=direct on Wed Jan 25 01:04:56 2006
/dev/vgalpha/alp08 on /data/ALPHA/dbs08 type vxfs nosuid,delaylog,convosync=direct on Wed Jan 25 01:04:58 2006
/dev/vgalpha/alp09 on /data/ALPHA/dbs09 type vxfs nosuid,delaylog,convosync=direct on Wed Jan 25 01:05:01 2006
/dev/vgalpha/alp10 on /data/ALPHA/dbs10 type vxfs nosuid,delaylog,convosync=direct on Wed Jan 25 01:05:07 2006
/dev/vgalpha/alp11 on /data/ALPHA/dbs11 type vxfs nosuid,delaylog,convosync=direct on Wed Jan 25 01:05:12 2006
/dev/vgalpha/alp12 on /data/ALPHA/dbs12 type vxfs nosuid,delaylog,convosync=direct on Wed Jan 25 01:05:14 2006
/dev/vgalpha/alp13 on /data/ALPHA/dbs13 type vxfs nosuid,delaylog,convosync=direct on Wed Jan 25 01:05:17 2006
/dev/vgalpha/alp14 on /data/ALPHA/dbs14 type vxfs nosuid,delaylog,convosync=direct on Wed Jan 25 01:05:19 2006
/dev/vgalpha/alp15 on /data/ALPHA/dbs15 type vxfs nosuid,delaylog,convosync=direct on Wed Jan 25 01:05:21 2006
/dev/vgalpha/alp16 on /data/ALPHA/dbs16 type vxfs nosuid,delaylog,convosync=direct on Wed Jan 25 01:05:23 2006
SWLIST:
wlux01 / # swlist | grep -i online
B3929CA B.11.11 HP OnLineJFS
The database is on file systems.
Yes, the 4 LUNs belong to the same VG, and all of the LVs are striped across them.
We are on VRAID5 on the EVA.
Command View is version 4.1.1.3 and we are on XCS revision 5030.
01-25-2006 11:27 PM
Re: EVA6000 Performance
Things you can do on the host side that should improve performance a bit:
1) Change the following vxfs file system parameters in accordance with the recommendations (http://docs.hp.com/en/B3929-90011/ch05s07.html and http://h71028.www7.hp.com/ERC/downloads/4AA0-2030ENW.pdf) - in this case: read_pref_io=1024Kb, read_nstream=4, write_nstream=1. Make the change online with the vxtunefs command, or persistently via the file /etc/vx/tunefstab.
2) Change the mount options for the Oracle file systems (OnLineJFS options) in accordance with the recommendations in http://h21007.www2.hp.com/dspp/files/unprotected/database/HP3KOracle.ppt. This should improve buffer utilization.
3) Move some files (.dbf, .ctl, ...) between file systems (LVs and PVs) so that the disks are similarly busy. In the current results, one disk (c32t8d2) is carrying most of the load.
Things you can do on the EVA side, and in your methodology, that should improve performance:
4) To minimize the impact of snapclones on system performance, transition the cache on the source virtual disks to write-through mode using Command View EVA before starting the snapshot or snapclone. The impact of creating one snapshot may be tolerable and may not require this action.
However, creating multiple snapshots of the same virtual disk does require it (see the EVA best practices white paper).
5) If you are also taking a backup at the same time, you may want to postpone it until later. The snapclone actually behaves as a snapshot until the snapclone process (copying the data over) has completed.
6) Run EVAPerf and analyse the situation during a snap clone and without one.
Before making any changes, execute:
#timex dd if=/data/ALPHA/dbs02/some_big_file of=/dev/null bs=4096k count=10000&
#sar -d 5 10 > before.log
#sar -b 5 10 >> before.log
and after the changes run the same commands to compare the results.
Regards LiPEnS
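Point 1 above can be made persistent via /etc/vx/tunefstab. A hypothetical fragment for the first two volumes might look like this; the byte values assume the 1 MB stripe unit and 4 columns described in this thread, and the exact field syntax should be checked against the OnLineJFS tunefstab documentation:

```
# /etc/vx/tunefstab - illustrative entries only
/dev/vgalpha/alp01  read_pref_io=1048576,read_nstream=4,write_nstream=1
/dev/vgalpha/alp02  read_pref_io=1048576,read_nstream=4,write_nstream=1
```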
01-26-2006 01:51 AM
Re: EVA6000 Performance
Thanks for that, I couldn't find any of those documents when I was searching for them over the last couple of days.
I'm reading through them now, but I can't find anywhere that says which vxfs parameters to use. You said to use
read_pref_io=1024Kb, read_nstream=4, write_nstream=1.
but I can't find references to those in any of the documents. Is this from personal experience?
I'm looking at making the mount option changes later this afternoon.
Once again, thanks for all your help on this.
Chris.
01-26-2006 05:30 PM
The vxfs file system parameters are described in this document: http://docs.hp.com/en/B3929-90011/ch05s07.html
"Try to align the parameters to match the geometry of the logical disk. With striping or RAID-5, it is common to set read_pref_io to the stripe unit size and read_nstream to the number of columns in the stripe. For striping arrays, use the same values for write_pref_io and write_nstream, but for RAID-5 arrays, set write_pref_io to the full stripe size and write_nstream to 1."
In this case the "stripe unit size" = 1024Kb, so read_pref_io=1024Kb; the "number of columns in the stripe" = 4 (the number of disks in the stripe group), so read_nstream=4; and "write_nstream to 1", so write_nstream=1.
This also comes from courses I have taken and from my personal experience.
For better efficiency it may also be most important to move some files (.dbf, .ctl, ...) between file systems (LV -> PV) so that the disks are similarly busy. In the current results, one disk (c32t8d2) is carrying most of the load.
Please write back with what you changed and whether anything improved.
Regards
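The arithmetic from the quoted rule can be worked through for this thread's layout (1024 KB stripe unit across 4 LUNs). The write_pref_io line assumes the quote's RAID-5 "full stripe" rule applies here; treat it as illustrative, since the document quote only states write_nstream=1 explicitly for this case.

```shell
# Derive the vxfs tunables (in bytes) from the LVM stripe geometry in this thread.
STRIPE_KB=1024   # LVM stripe unit size in KB
COLS=4           # number of LUNs (stripe columns)
echo "read_pref_io=$((STRIPE_KB * 1024))"          # stripe unit in bytes: 1048576
echo "read_nstream=$COLS"                          # one read stream per column
echo "write_pref_io=$((STRIPE_KB * COLS * 1024))"  # full stripe in bytes: 4194304
echo "write_nstream=1"                             # single write stream (RAID-5 rule)
```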
01-26-2006 08:29 PM
Re: EVA6000 Performance
Yes, the changes to remove convosync=direct and to add nodatainlog did speed things up.
The job that takes 15 minutes on its own is now taking only 30 minutes alongside the snap clone, not over an hour.
That has given me enough of a performance increase to buy me some time to test out the other settings.
I'll try the vxfs tuning next.
Once again thanks for your help.
Chris.
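For reference, the mount-option change Chris describes (dropping convosync=direct, adding nodatainlog) might look like this as an /etc/fstab entry; the device and mount point are taken from the mount -v output earlier in the thread, and the surrounding fields are a sketch rather than the actual file:

```
# before (illustrative):
/dev/vgalpha/alp01  /data/ALPHA/dbs01  vxfs  nosuid,delaylog,convosync=direct  0 2
# after (illustrative):
/dev/vgalpha/alp01  /data/ALPHA/dbs01  vxfs  nosuid,delaylog,nodatainlog       0 2
```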