
EVA6000 Performance

 
SOLVED
Chris Evans_5
Frequent Advisor

EVA6000 Performance

Hi Folks,

I am fairly new to EVAs (as you will see from my last post on the subject) and we are having performance problems.

We have migrated our main Oracle database from an XP512 onto an EVA6000.

On the XP we had 64 LUNs; after advice from this forum and from other people we migrated to 4 LUNs on the EVA, two presented on each controller and routed through to 4 Fibre Channel cards in the host (a Superdome), using LVM PV-link failover rather than Secure Path or anything fancy...
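For context, a layout like this would typically be built on HP-UX LVM along these lines; a rough sketch only, with example device files (the stripe parameters match the lvdisplay output later in the thread, and the alternate-path device files are hypothetical):

# Group file for the new volume group
mkdir /dev/vgalpha
mknod /dev/vgalpha/group c 64 0x010000

# Initialise the 4 EVA LUNs as LVM physical volumes
for d in c26t8d0 c28t8d1 c31t8d2 c33t8d3
do
    pvcreate /dev/rdsk/$d
done

# Create the VG on the primary paths, then add the alternate controller
# paths to the same LUNs so LVM can fail over between them (PV links)
vgcreate /dev/vgalpha /dev/dsk/c26t8d0 /dev/dsk/c28t8d1 /dev/dsk/c31t8d2 /dev/dsk/c33t8d3
vgextend /dev/vgalpha /dev/dsk/c27t8d0 /dev/dsk/c29t8d1 /dev/dsk/c30t8d2 /dev/dsk/c34t8d3

# One of the 16 logical volumes: 4-way stripe, 1 MB stripe size, 76288 MB
lvcreate -i 4 -I 1024 -L 76288 -n alp01 /dev/vgalpha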

We have a couple of problems:
1) While a Business Copy snap clone is running, all disk operations on the source or target areas are really, really slow (one job that takes 15 minutes when it is run on its own takes 1 hour and 20 minutes when a snap clone is running).

2) Certain operations on the database that ran fine when the disks were on the XP now run slowly on the EVA when a high volume of them run concurrently. Nothing I can see in Oracle statistics or in Unix performance monitoring (Glance) shows any performance problem.

Can anyone shed any light on this, please? I know this is a bit of a woolly description, so if you need any more information, please ask.

Thanks in advance
Chris
15 REPLIES
LiPEnS
Valued Contributor

Re: EVA6000 Performance

Hi
Please post the results of
#sar -d 5 10
when:
1. the Business Copy snap clone is running,
2. the Business Copy snap clone is running together with the other job (the one that takes 15 minutes on its own but 1 hour and 20 minutes when a snap clone is running),
3. the system is idle,
and the results of:
#lvdisplay /dev/vg_name/lvol_name
#fstyp -v /dev/vg_name/lvol_name
#fcmsutil /dev/td_number_of_HBA
and the size of the 4 LUNs?
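If it is easier, everything above can be captured in one pass with something like this (a rough sketch: the VG/LV names and device files are placeholders for your layout, and the HBA device files may be /dev/fcd* rather than /dev/td* depending on the driver):

#!/usr/bin/sh
# Gather the requested diagnostics in one pass (HP-UX 11.x).
# All names below are placeholders - adjust to the real layout.
OUT=/tmp/eva_diag.$(date +%Y%m%d%H%M)
{
  echo "=== sar -d 5 10 ==="
  sar -d 5 10

  for lv in /dev/vgalpha/alp01 /dev/vgalpha/alp02
  do
      echo "=== lvdisplay $lv ==="
      lvdisplay $lv
      echo "=== fstyp -v $lv ==="
      fstyp -v $lv
  done

  for hba in /dev/td0 /dev/td1 /dev/td2 /dev/td3
  do
      echo "=== fcmsutil $hba ==="
      fcmsutil $hba
  done
} > $OUT 2>&1
echo "Results written to $OUT"

Run it once for each of the three scenarios above and post the three files.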

Regards
LiPEnS
Sheriff Andy
Trusted Contributor

Re: EVA6000 Performance

Chris, I am not sure if this is what you are asking, but:

We went from an XP512 to an EVA5000. We do an awful lot of snap clones (over 2 TB currently, for 2 databases).

We start our deletes of the 2 TB database at 2 PM, and they are usually complete after 6 PM. The same process on the XP512 took an hour and a half to 2 hours.

I thought that the EVA6000 was equipped with three-phase snap creation. You might want to look into that.
Peter Mattei
Honored Contributor

Re: EVA6000 Performance

When do you see the performance hits?
When you create snapclones or when you delete them?

Please tell us!

Note: the creation should take only seconds!
And you should almost never need to DELETE a clone; you can just EMPTY it and reuse it!
This also should take only seconds to minutes! A rough scripted version of that cycle is sketched below.
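In scripted form the reuse cycle looks roughly like this. This is a sketch only: the manager host, credentials, system and vdisk names are all placeholders, and the exact SSSU keywords vary by Command View/XCS version, so verify them against the SSSU reference guide for your firmware (the clone-to-container conversion can also be done from the Command View GUI):

#!/usr/bin/sh
# Nightly snapclone refresh via SSSU (the EVA scripting CLI).
# Placeholder names throughout; the SSSU syntax is approximate -
# check the SSSU reference for your Command View/XCS version.
# Last night's clone is assumed to have been emptied back into the
# container rather than deleted.
sssu <<'EOF'
SELECT MANAGER cvhost USERNAME=admin PASSWORD=secret
SELECT SYSTEM EVA6000_1
ADD COPY db_clone VDISK="\Virtual Disks\db_prod" CONTAINER="\Containers\db_clone_space"
EOF

The point is that the clone lands in a preallocated container, so the new copy is presentable to a host within seconds.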


Cheers
Peter
I love storage
Chris Evans_5
Frequent Advisor

Re: EVA6000 Performance

LiPEnS,

Each of the LUNs is 300GB

When the snap clone runs again tonight I will get you the results from sar... I can't get them for the job on its own with no snap clone, as it is a live service and I can't run the job standalone.

The idle figures are...
HP-UX wlux01 B.11.11 U 9000/800 01/24/06

07:32:30 device %busy avque r+w/s blks/s avwait avserv
07:32:35 c0t13d0 8.00 0.50 11 48 4.11 14.38
         c0t3d0 2.20 0.50 2 27 3.56 10.85
         c3t13d0 6.00 0.50 9 38 4.39 10.55
         c3t3d0 1.20 0.50 2 24 3.13 9.88
         c26t8d0 28.40 0.50 64 1030 5.26 4.44
         c28t8d1 27.40 0.50 103 1654 4.94 2.70
         c31t8d2 22.20 0.50 104 1664 4.90 2.13
         c33t8d3 11.80 0.50 81 1296 4.82 1.49
07:32:40 c0t13d0 3.80 0.50 5 30 4.43 13.16
         c0t3d0 3.20 0.50 4 50 4.87 9.45
         c3t13d0 2.20 0.50 4 24 4.65 11.25
         c3t3d0 2.20 0.50 3 48 4.56 7.34
         c26t8d0 28.80 0.50 197 3157 4.93 1.53
         c28t8d1 21.00 0.50 146 2342 4.78 1.58
         c31t8d2 20.40 0.50 112 1795 4.99 1.77
         c33t8d3 19.00 0.50 119 1904 5.03 1.60
07:32:45 c0t13d0 3.80 0.50 5 35 3.92 14.34
         c0t3d0 2.00 0.50 3 34 3.17 11.35
         c3t13d0 2.00 0.50 3 26 3.94 11.48
         c3t3d0 0.80 0.50 2 32 3.44 6.17
         c26t8d0 21.80 0.50 117 1872 5.07 2.02
         c28t8d1 21.60 0.50 93 1488 4.71 2.32
         c31t8d2 23.60 0.50 97 1552 4.82 2.42
         c33t8d3 25.80 0.50 77 1229 4.54 3.39
07:32:50 c0t13d0 3.60 0.50 5 30 3.22 14.49
         c0t3d0 2.00 0.50 3 31 4.02 10.53
         c3t13d0 2.20 0.50 4 24 3.30 13.80
         c3t3d0 1.40 0.50 2 29 4.18 8.84
         c26t8d0 24.00 0.50 167 2653 5.06 1.49
         c28t8d1 28.40 0.50 128 2042 4.65 2.27
         c31t8d2 18.60 0.50 123 1965 5.17 1.63
         c33t8d3 23.80 0.50 178 2842 5.05 1.38
07:32:55 c0t13d0 6.19 0.50 9 57 4.06 17.14
         c0t3d0 2.20 0.50 3 37 3.82 9.78
         c3t13d0 3.59 0.50 6 46 4.34 10.33
         c3t3d0 1.60 0.50 3 36 3.45 7.97
         c26t8d0 24.15 0.50 79 1260 5.52 3.02
         c28t8d1 29.74 0.50 81 1290 4.95 3.85
         c31t8d2 26.95 0.50 67 1073 5.09 4.16
         c33t8d3 11.78 0.50 50 802 4.67 2.44
07:33:00 c0t13d0 2.20 0.50 3 23 3.93 11.49
         c0t3d0 1.60 0.50 2 34 3.25 8.60
         c3t13d0 1.80 0.50 2 18 4.13 11.16
         c3t3d0 1.40 0.50 2 34 2.86 8.85
         c33t0d4 0.20 0.50 0 6 5.66 0.37
         c26t8d0 44.80 0.50 64 1022 4.93 6.82
         c28t8d1 32.40 0.50 46 730 4.60 7.56
         c31t8d2 19.00 0.50 26 410 5.30 7.33
         c33t8d3 3.20 0.50 3 42 5.87 10.82
07:33:05 c0t13d0 6.41 0.50 9 50 3.94 15.77
         c0t3d0 23.25 0.50 42 105 5.15 6.81
         c3t13d0 4.41 0.50 6 39 4.10 11.64
         c3t3d0 20.04 0.50 42 103 5.15 4.69
         c26t8d0 26.25 0.50 141 2264 5.15 1.80
         c28t8d1 15.23 0.50 131 2097 5.01 1.15
         c31t8d2 23.65 0.50 163 2607 5.18 1.37
         c33t8d3 31.66 0.50 172 2748 5.28 1.85
07:33:10 c0t13d0 1.80 0.50 2 13 4.42 12.54
         c0t3d0 0.80 0.50 3 34 3.14 6.59
         c3t13d0 1.20 0.50 1 10 4.19 13.17
         c3t3d0 0.80 0.50 2 31 3.56 5.96
         c26t8d0 22.55 0.50 149 2374 4.91 1.58
         c28t8d1 16.97 0.50 153 2456 5.16 1.09
         c31t8d2 34.33 0.50 143 2283 4.83 2.42
         c33t8d3 21.16 0.50 144 2296 5.07 1.45
07:33:15 c0t13d0 1.60 0.50 2 14 3.48 9.65
         c0t3d0 1.00 0.50 1 19 2.96 10.42
         c3t13d0 0.60 0.50 1 10 4.91 7.22
         c3t3d0 0.60 0.50 1 19 2.97 7.55
         c26t8d0 32.26 0.50 204 3259 5.07 1.62
         c28t8d1 15.83 0.50 143 2296 4.93 1.10
         c31t8d2 26.85 0.50 139 2232 4.99 1.94
         c33t8d3 20.24 0.50 167 2671 4.93 1.22
07:33:20 c0t13d0 1.40 0.50 2 15 4.47 9.07
         c0t3d0 2.20 0.50 3 39 2.50 8.81
         c3t13d0 1.00 0.50 1 12 4.67 8.87
         c3t3d0 1.60 0.50 3 37 2.63 7.71
         c26t8d0 14.60 0.50 29 461 4.87 5.56
         c28t8d1 19.40 0.50 47 758 5.27 3.89
         c31t8d2 34.40 0.50 58 922 5.17 5.88
         c33t8d3 30.80 0.50 40 643 5.01 7.50

Average  c0t13d0 3.88 0.50 5 32 4.00 14.27
Average  c0t3d0 4.04 0.50 7 41 4.57 7.77
Average  c3t13d0 2.50 0.50 4 24 4.22 11.11
Average  c3t3d0 3.16 0.50 6 39 4.60 5.69
Average  c26t8d0 26.76 0.50 121 1935 5.06 2.25
Average  c28t8d1 22.80 0.50 107 1715 4.90 2.18
Average  c31t8d2 25.00 0.50 103 1650 5.02 2.42
Average  c33t8d3 19.92 0.50 103 1647 5.00 1.94
Average  c33t0d4 0.02 0.50 0 1 5.66 0.37




lvdisplay gives us:

--- Logical volumes ---
LV Name /dev/vgalpha/alp01
VG Name /dev/vgalpha
LV Permission read/write
LV Status available/syncd
Mirror copies 0
Consistency Recovery MWC
Schedule striped
LV Size (Mbytes) 76288
Current LE 9536
Allocated PE 9536
Stripes 4
Stripe Size (Kbytes) 1024
Bad block on
Allocation strict
IO Timeout (Seconds) default

--- Logical volumes ---
LV Name /dev/vgalpha/alp02
VG Name /dev/vgalpha
LV Permission read/write
LV Status available/syncd
Mirror copies 0
Consistency Recovery MWC
Schedule striped
LV Size (Mbytes) 76288
Current LE 9536
Allocated PE 9536
Stripes 4
Stripe Size (Kbytes) 1024
Bad block on
Allocation strict
IO Timeout (Seconds) default

And so on… there are 16 file systems.



fstyp shows:

wlux01 / # fstyp -v /dev/vgalpha/alp01
vxfs
version: 4
f_bsize: 8192
f_frsize: 8192
f_blocks: 9764864
f_bfree: 3551250
f_bavail: 3551245
f_files: 200
f_ffree: 168
f_favail: 168
f_fsid: 1074790401
f_basetype: vxfs
f_namemax: 254
f_magic: a501fcf5
f_featurebits: 0
f_flag: 16
f_fsindex: 7
f_size: 9764864

wlux01 / # fstyp -v /dev/vgalpha/alp02
vxfs
version: 4
f_bsize: 8192
f_frsize: 8192
f_blocks: 9764864
f_bfree: 3854234
f_bavail: 3854229
f_files: 200
f_ffree: 168
f_favail: 168
f_fsid: 1074790402
f_basetype: vxfs
f_namemax: 254
f_magic: a501fcf5
f_featurebits: 0
f_flag: 16
f_fsindex: 7
f_size: 9764864

And so on for the 16 file systems.




fcmsutil for the 4 cards shows:

Vendor ID is = 0x001077
Device ID is = 0x002312
PCI Sub-system Vendor ID is = 0x00103c
PCI Sub-system ID is = 0x0012ba
PCI Mode = PCI 66 MHz
ISP Code version = 3.3.150
ISP Chip version = 3
Topology = PTTOPT_FABRIC
Link Speed = 2Gb
Local N_Port_id is = 0x0c0400
Previous N_Port_id is = None
N_Port Node World Wide Name = 0x50060b0000325f3b
N_Port Port World Wide Name = 0x50060b0000325f3a
Switch Port World Wide Name = 0x2001000dec06e4c0
Switch Node World Wide Name = 0x2001000dec06e4c1
Driver state = ONLINE
Hardware Path is = 4/0/4/0/1
Maximum Frame Size = 2048
Driver-Firmware Dump Available = NO
Driver-Firmware Dump Timestamp = N/A
Driver Version = @(#) libfcd.a HP Fibre Channel ISP 23xx Driver B.11.11.04 /ux/kern/kisu/FCD/src/common/wsio/fcd_init.c:Sep 23 2004,18:27:37

Vendor ID is = 0x001077
Device ID is = 0x002312
PCI Sub-system Vendor ID is = 0x00103c
PCI Sub-system ID is = 0x0012ba
PCI Mode = PCI 66 MHz
ISP Code version = 3.3.150
ISP Chip version = 3
Topology = PTTOPT_FABRIC
Link Speed = 2Gb
Local N_Port_id is = 0x0c0400
Previous N_Port_id is = None
N_Port Node World Wide Name = 0x50060b00002df16d
N_Port Port World Wide Name = 0x50060b00002df16c
Switch Port World Wide Name = 0x2002000dec06e480
Switch Node World Wide Name = 0x2001000dec06e481
Driver state = ONLINE
Hardware Path is = 6/0/4/0/0
Maximum Frame Size = 2048
Driver-Firmware Dump Available = NO
Driver-Firmware Dump Timestamp = N/A
Driver Version = @(#) libfcd.a HP Fibre Channel ISP 23xx Driver B.11.11.04 /ux/kern/kisu/FCD/src/common/wsio/fcd_init.c:Sep 23 2004,18:27:37

Vendor ID is = 0x001077
Device ID is = 0x002312
PCI Sub-system Vendor ID is = 0x00103c
PCI Sub-system ID is = 0x0012ba
PCI Mode = PCI 66 MHz
ISP Code version = 3.3.150
ISP Chip version = 3
Topology = PTTOPT_FABRIC
Link Speed = 2Gb
Local N_Port_id is = 0x0c0300
Previous N_Port_id is = None
N_Port Node World Wide Name = 0x50060b00002df16f
N_Port Port World Wide Name = 0x50060b00002df16e
Switch Port World Wide Name = 0x2002000dec06e4c0
Switch Node World Wide Name = 0x2001000dec06e4c1
Driver state = ONLINE
Hardware Path is = 6/0/4/0/1
Maximum Frame Size = 2048
Driver-Firmware Dump Available = NO
Driver-Firmware Dump Timestamp = N/A
Driver Version = @(#) libfcd.a HP Fibre Channel ISP 23xx Driver B.11.11.04 /ux/kern/kisu/FCD/src/common/wsio/fcd_init.c:Sep 23 2004,18:27:37

Vendor ID is = 0x001077
Device ID is = 0x002312
PCI Sub-system Vendor ID is = 0x00103c
PCI Sub-system ID is = 0x0012ba
PCI Mode = PCI 66 MHz
ISP Code version = 3.3.150
ISP Chip version = 3
Topology = PTTOPT_FABRIC
Link Speed = 2Gb
Local N_Port_id is = 0x0c0500
Previous N_Port_id is = None
N_Port Node World Wide Name = 0x50060b0000325f39
N_Port Port World Wide Name = 0x50060b0000325f38
Switch Port World Wide Name = 0x2001000dec06e480
Switch Node World Wide Name = 0x2001000dec06e481
Driver state = ONLINE
Hardware Path is = 4/0/4/0/0
Maximum Frame Size = 2048
Driver-Firmware Dump Available = NO
Driver-Firmware Dump Timestamp = N/A
Driver Version = @(#) libfcd.a HP Fibre Channel ISP 23xx Driver B.11.11.04 /ux/kern/kisu/FCD/src/common/wsio/fcd_init.c:Sep 23 2004,18:27:37

Thanks again for your help.

Chris.



Chris Evans_5
Frequent Advisor

Re: EVA6000 Performance

Sheriff Andy,

We don't delete our snap clones; we just convert them to containers, which takes a few seconds.

The problem we see is that after the snap clone has been created and presented to a host, it continues to build in the background, and that background copy seems to get more time allocated to it than the host's own requests. The build takes about 2 hours.

Chris.
Chris Evans_5
Frequent Advisor

Re: EVA6000 Performance

Peter,

You are right, we don't delete the snap clones; we just convert them to containers.

Creating the snap clone to the point where we can present it to a host does take only seconds, but the EVA then carries on building the clone in the background, and that copy process seems to get a higher priority than any host requests.

Chris.
LiPEnS
Valued Contributor

Re: EVA6000 Performance

Hi again,
sar -d shows that the local disks (cXt13d0 and cXt3d0) behave normally, with avserv > avwait. For the LUNs on the EVA, avserv < avwait, which means requests spend longer queued than being serviced and points to a bottleneck on those devices. A quick filter for this is sketched below.
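As a quick check, something like this prints the devices where wait time exceeds service time (a sketch only; the field positions assume the sar -d layout shown in the outputs above):

# Flag sar -d rows where avwait > avserv, i.e. requests queue
# longer than they take to service.
sar -d 5 10 | awk '
    NF == 8 { dev = $2; wait = $7; serv = $8 }   # timestamped (or Average) row
    NF == 7 { dev = $1; wait = $6; serv = $7 }   # continuation row
    NF >= 7 && dev ~ /^c[0-9]/ && wait+0 > serv+0 {
        printf "%-10s avwait=%-6s avserv=%-6s\n", dev, wait, serv
    }'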
Please post more info:
#scsictl -a /dev/rdsk/cXtYdZ_luns_on_eva
#vxtunefs -p /mount_point_of_eva_filesystems
#mount -v
#swlist | grep -i OnlineJFS
Is the database on raw devices or on file systems?
Do the 4 LUNs belong to one VG, striped across them?
Which level of protection/RAID (Vraid) on the EVA?
How many disk groups on the EVA?
Which version of CV EVA (Command View)?

Regards
Chris Evans_5
Frequent Advisor

Re: EVA6000 Performance

Hi LiPEnS,

Here is the sar information you wanted...

Firstly, with a business copy running in the background on the EVA and minimal work on the host:
HP-UX wlux01 B.11.11 U 9000/800 01/24/06

18:29:13 device %busy avque r+w/s blks/s avwait avserv
18:29:18 c0t13d0 3.40 0.50 5 28 4.26 10.12
         c0t3d0 2.00 0.50 2 34 3.18 11.64
         c3t13d0 2.60 0.50 3 21 4.92 9.62
         c3t3d0 1.40 0.50 2 34 3.32 11.01
         c33t8d3 2.60 0.50 8 128 5.40 3.03
18:29:23 c0t13d0 3.39 0.50 5 21 4.22 10.45
         c0t3d0 3.39 0.50 4 40 4.46 10.11
         c3t13d0 2.99 0.50 4 17 4.31 9.37
         c3t3d0 2.79 0.50 3 36 4.75 9.53
         c33t8d3 0.20 0.50 8 128 4.96 0.48
         c32t8d2 0.20 0.50 0 3 5.87 13.97
18:29:28 c0t13d0 3.40 0.50 4 22 4.37 11.30
         c0t3d0 0.80 0.50 1 16 3.07 12.47
         c3t13d0 2.40 0.50 3 17 4.69 10.62
         c3t3d0 0.60 0.50 1 16 3.08 7.11
         c33t8d3 0.20 0.50 7 115 4.29 0.30
         c32t8d2 0.20 0.50 0 3 7.51 12.39
18:29:33 c0t13d0 5.20 0.50 9 46 4.56 12.89
         c0t3d0 2.80 0.50 4 38 4.04 8.54
         c3t13d0 4.00 0.50 7 40 4.77 10.87
         c3t3d0 2.00 0.50 3 36 3.67 9.13
18:29:38 c0t13d0 4.21 0.50 5 32 4.14 12.57
         c0t3d0 4.41 0.50 5 44 3.68 12.94
         c3t13d0 2.61 0.50 4 26 4.04 13.12
         c3t3d0 1.80 0.50 4 38 4.01 9.35
         c27t0d2 0.20 0.50 1 18 2.88 1.66
         c33t8d3 0.40 0.50 8 131 5.02 0.30
         c32t8d2 0.20 0.50 0 3 3.07 10.78
18:29:43 c0t13d0 4.19 0.50 5 26 4.25 10.99
         c0t3d0 1.40 0.50 2 34 3.18 10.35
         c3t13d0 2.99 0.50 4 22 4.13 10.30
         c3t3d0 1.20 0.50 2 34 3.32 8.32
         c26t8d0 0.20 0.50 0 6 9.59 0.24
18:29:48 c0t13d0 2.61 0.50 4 20 3.18 12.30
         c0t3d0 1.20 0.50 2 34 2.68 8.30
         c3t13d0 2.21 0.50 3 16 3.35 12.61
         c3t3d0 1.20 0.50 2 34 2.69 8.47
         c33t8d3 0.20 0.50 8 138 5.01 0.31
         c32t8d2 0.60 0.50 0 3 5.84 26.91
18:29:53 c0t13d0 7.19 0.50 8 32 5.20 9.66
         c0t3d0 3.79 0.50 5 32 3.84 11.52
         c3t13d0 4.79 0.50 6 22 5.53 7.34
         c3t3d0 2.00 0.50 2 21 4.56 10.52
         c33t8d3 1.40 0.50 8 141 5.47 1.62
         c32t8d2 0.60 0.50 0 3 9.19 26.36
18:29:58 c0t13d0 1.60 0.50 2 10 4.44 9.98
         c0t3d0 2.59 0.50 3 40 3.36 10.53
         c3t13d0 1.40 0.50 1 8 4.61 10.94
         c3t3d0 1.60 0.50 3 37 3.01 10.25
         c33t8d3 0.40 0.50 8 131 5.73 0.31
18:30:03 c0t13d0 43.20 0.50 73 507 5.20 43.96
         c0t3d0 47.20 0.50 68 322 5.11 20.27
         c3t13d0 34.80 0.50 65 469 5.16 43.06
         c3t3d0 33.60 0.50 62 295 5.18 12.72
         c26t0d1 0.20 0.50 11 131 4.54 0.25
         c27t0d2 0.60 0.50 2 29 4.18 3.06
         c26t8d0 0.20 0.50 1 11 5.78 0.22
         c33t8d3 1.00 0.50 10 170 5.10 0.94
         c32t8d2 0.40 0.50 0 3 0.08 22.40

Average  c0t13d0 7.84 0.50 12 74 4.89 31.24
Average  c0t3d0 6.96 0.50 10 63 4.68 17.43
Average  c3t13d0 6.08 0.50 10 66 4.96 31.63
Average  c3t3d0 4.82 0.50 8 58 4.79 11.83
Average  c33t8d3 0.64 0.50 6 108 5.13 0.91
Average  c32t8d2 0.22 0.50 0 2 5.26 18.80
Average  c27t0d2 0.08 0.50 0 5 3.68 2.52
Average  c26t8d0 0.04 0.50 0 2 6.87 0.23
Average  c26t0d1 0.02 0.50 1 13 4.54 0.25