Operating System - HP-UX

Re: Shared LUNS across different partitions.

 
jerrym
Trusted Contributor

Shared LUNS across different partitions.

Is it normal to have LUNs split between different partitions? Note disk1534. Does this cause a performance issue?

Have several partitions like this.

 

 

LV Name                     /dev/vgRAP2/lvsapdata4
VG Name                     /dev/vgRAP2
LV Permission               read/write
LV Status                   available/syncd
Mirror copies               0
Consistency Recovery        MWC
Schedule                    parallel
LV Size (Mbytes)            276160
Current LE                  4315
Allocated PE                4315
Stripes                     0
Stripe Size (Kbytes)        0
Bad block                   on
Allocation                  strict
IO Timeout (Seconds)        default

   --- Distribution of logical volume ---
   PV Name                 LE on PV  PE on PV
   /dev/disk/disk1533      8         8
   /dev/disk/disk1534      4307      4307

   --- Logical extents ---
   LE    PV1                     PE1   Status 1

--- Logical volumes ---
LV Name                     /dev/vgRAP2/lvsapdata5
VG Name                     /dev/vgRAP2
LV Permission               read/write
LV Status                   available/syncd
Mirror copies               0
Consistency Recovery        MWC
Schedule                    parallel
LV Size (Mbytes)            276160
Current LE                  4315
Allocated PE                4315
Stripes                     0
Stripe Size (Kbytes)        0
Bad block                   on
Allocation                  strict
IO Timeout (Seconds)        default

   --- Distribution of logical volume ---
   PV Name                 LE on PV  PE on PV
   /dev/disk/disk1534      8         8
   /dev/disk/disk1535      4307      4307

Matti_Kurkela
Honored Contributor

Re: Shared LUNS across different partitions.

One of the reasons for using LVM is that you are not restricted by the limits of physical disks/LUNs, so this is exactly what LVM is designed to do.

 

There should be no meaningful performance issue: LVM has to map each disk access from logical to physical extents anyway, and that mapping is simple enough to be done very fast.
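
If you want to see that mapping for yourself, the verbose form of lvdisplay prints both the per-PV distribution and the full logical-extent table (using one of the LVs from your output; pipe it through more, since the extent table is long):

   # Show the per-PV distribution and the full LE-to-PE map
   # for one of the LVs from the output above.
   lvdisplay -v /dev/vgRAP2/lvsapdata4 | more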

 

With physical disks, having multiple disks behind a single logical volume can even improve random-access performance: the read/write heads of one disk mechanism can be satisfying one request while another request for a different part of the LV goes to a second disk mechanism, so both requests are serviced in parallel.

 

With modern enterprise SANs, this tends to be irrelevant because LUNs are virtual constructs of the storage system: your "single LUN" is in fact a slice of some RAID array whose number of disks, striping, and other parameters are decided by the SAN administrator and/or the auto-tuning software of the storage system.

 

But the fact that you have several LVs with 8 extents on one PV and the rest on another suggests something else is going on. Maybe the system was originally arranged with an exact one-LV-to-one-PV mapping (perhaps to follow storage optimization guidelines written for old-style disk arrays), then one LV was extended by 8 extents, and that went unnoticed during a later SAN migration, so the mapping drifted out of alignment when data was moved from the old storage to the new.
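
A quick way to check whether every LV in the VG shows the same 8-extent spillover is to dump the distribution section for all of them. Something like this, assuming the LV device files all start with "lv" under /dev/vgRAP2:

   # Print the per-PV extent distribution for every LV in vgRAP2
   # to see whether each one spills the same 8 extents onto the
   # "wrong" PV.
   for lv in /dev/vgRAP2/lv*
   do
       echo "=== $lv ==="
       lvdisplay -v "$lv" | sed -n '/Distribution of logical volume/,/Logical extents/p'
   done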

 

 

(Does this system have any unsolved performance degradation issues associated with a storage migration or other major SAN change? If not, it's probably harmless - if it offends your sense of symmetry, remember to fix it the next time your storage system is replaced with a bigger one, typically within the next 3 years.)

 

Or maybe there is some frequently-updated application metadata in the last 8 extents of each LV, and placing it on a different PV is a way to optimize performance on old-style disk arrays. (Again, this may be unimportant with modern storage systems, but sometimes old performance optimization instructions are followed without checking whether they still apply...)

 

Or maybe there is some other obscure data-alignment issue, and the 8-extent offset just happens to ensure correct alignment. If that's true, there is probably a performance tuning guide from SAP or the storage manufacturer that explains exactly what is going on. Such information tends to be made obsolete by storage hardware or firmware changes, so when you migrate to newer storage, check for updates to the performance tuning guides too. Don't add unnecessary complexity by rigorously following outdated practices unless you absolutely have to.

 

MK
jerrym
Trusted Contributor

Re: Shared LUNS across different partitions.

"Does this system have any unsolved performance degradation issues associated with a storage migration or other major SAN change?"

 

Matti, you are correct. Since we migrated several servers off old EMC storage to new EMC VMAX storage, we have been having major performance issues: 100% disk usage, which is causing delays in responses at the client interfaces. CPU and memory usage are low. Running gpm shows network and disk bottlenecks on the three most important servers I started looking at. The network bottlenecks I believe are caused by backups; they could be using the wrong network interface and slowing down the production interface. But they have been doing the backups this way for a long time. Still checking.
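
For reference, one quick way to see which specific LUNs are staying pegged, rather than just gpm's overall bottleneck flag, is to sample per-device activity with sar (the interval and count below are arbitrary):

   # Sample per-device disk activity every 5 seconds, 12 times.
   # Devices sitting near 100% in the %busy column are the ones
   # to trace back to LUNs; high avwait relative to avserv points
   # at queueing rather than slow service from the array.
   sar -d 5 12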

 

At this point I do not know what RAID type the storage group has set up. I found a post about using RAID 1 and RAID 5 for read/write workloads and how choosing the right RAID level makes a big difference in data-access performance. I also looked at kernel parameters and found that one server has filecache_min/filecache_max set high (10 GB max), the others are set low (2 GB for both min and max, which is 1% for both), and none of them are set to the default (letting the system decide how to use the resource). The one server set high does not seem to sit at 100% disk usage all the time like the other two. Does this parameter affect disk I/O? I am still looking at what I can do to improve performance on the server side. In the past I have had problems with new FC HBA cards that were not compatible with the server MP firmware, and that affected NFS. You would not think an HBA card would interfere with NFS, but it did; when I upgraded the system firmware the NFS up/down problem went away.
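
For anyone following along: on 11i v3 the filecache_min/filecache_max tunables bound the memory used for the unified file cache, so they can change how much I/O actually reaches the disks. The settings on each server can be compared with something like this (the reset-to-default line is commented out on purpose; test before touching production):

   # Show current, default and planned values of the file-cache
   # tunables on this server.
   kctune -v filecache_min filecache_max

   # To go back to letting the system manage them:
   # kctune filecache_min=default filecache_max=default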

 

I just don't know if it's a compatibility problem with firmware, OS patches, or something else. Still digging for answers. If there are tools out there to troubleshoot this type of problem, please advise.

 

I have a call open with HP support but have not heard back from them yet.

 

 

Ken Grabowski
Respected Contributor

Re: Shared LUNS across different partitions.

A simple reply to the original question is yes. When you create logical volumes they may cross physical volumes, and if you extend a logical volume later, that extent may land on a different physical volume than the rest of the logical volume. It could affect performance, but when the storage is provided by a SAN it probably has no effect. You can change how the data is arranged on the LUNs with the pvmove command, but I don't think that will affect your performance issue.
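
If you did want to pull an LV back onto a single PV, pvmove can relocate just that LV's extents between disks while the volume group stays online. A rough sketch using the names from your lvdisplay output (check free extents on the target first, and have a good backup):

   # Check free extents on the PV you want to consolidate onto.
   pvdisplay /dev/disk/disk1534

   # Move the extents of lvsapdata4 that currently sit on
   # disk1533 over to disk1534 so the whole LV is on one LUN.
   pvmove -n /dev/vgRAP2/lvsapdata4 /dev/disk/disk1533 /dev/disk/disk1534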

 

Regarding the DMX to VMAX migration, as I said Friday on your other post, you are not providing enough information for anyone to advise you. Most of what you're focusing on is probably not what is affecting you. The VMAX has three types of disk that can be configured: SATA, Fibre Channel, and solid state. RAID sets can be defined from RAID 1 through RAID 6. There are a few different allocation methods to choose from, and the array could be using the FAST software to move data hot spots from lower- to higher-performance devices inside the array.

 

All of your configuration has to be considered when determining why you're having a bottleneck: what type of disk were you on in the DMX vs. the VMAX? What RAID configuration? How many front-end controllers and HBAs then and now? Is PowerPath, if used, configured properly?
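
If PowerPath is in the picture, a quick sanity check on each host is to confirm that every VMAX device still has multiple live paths and a sensible load-balancing policy (PowerPath must be installed for powermt to exist):

   # Summary of HBAs, arrays and overall path state.
   powermt display

   # Per-LUN view: each device should show several alive paths
   # and a policy such as SymmOpt for Symmetrix/VMAX arrays.
   powermt display dev=all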

 

Before your migration, all your servers should have been updated to the latest firmware patches and versions of PowerPath. Hopefully that was done. You should also have collected baseline numbers so you could compare DMX disk performance against the VMAX. You need to be engaged and working closely with your EMC and storage teams to identify what the issue is.

 

Just taking a guess: you probably went from RAID 5 to RAID 6, and maybe to slower disks, and/or an improperly configured PowerPath is no longer providing parallel paths to the disks.

jerrym
Trusted Contributor

Re: Shared LUNS across different partitions.

Ken, I totally agree with you here.

 

"Before your migration, all your servers should have been updated to the latest firmware patches and versions of PowerPath."

 

The storage group migrated the servers transparently; EMC contractors spent several months gathering data to plan and carry out the migration. Updating patches or firmware was not in the plan and would have required too much downtime across several hundred servers. But I suspect it could be firmware or patch related, or the disk layout on the EMC side. EMC is working on the storage side to resolve whatever is causing the problem, and I am working on the HP side to see if there is something we can do there. If I find out, I will post.

I did find old, nonexistent disks associated with the new agile disks in GlancePlus. I removed all stale devices and manually removed any old disks that no longer exist, but found one that rmsf says is busy (??). It is not listed in lvmtab, nor can you do a diskinfo on it. Still investigating.
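
For reference, these are the kinds of checks that can narrow down a "busy" special file (disk999 below is just a placeholder for the device in question):

   # List disk hardware paths and states; stale paths show NO_HW.
   ioscan -fnNkC disk

   # Map persistent DSFs to legacy DSFs to spot leftovers from
   # before the migration.
   ioscan -m dsf

   # See whether any process still has the device file open.
   fuser /dev/rdisk/disk999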

 

Ken Grabowski
Respected Contributor

Re: Shared LUNS across different partitions.

I've done a few of these migrations at a couple of different companies and never saw one that was transparent. I'm not even sure how you would manage that. Disk addresses invariably change because the WWNs change. There has always been a requirement to create map files, export the volume groups, and then, on reboot, recreate and import the volume groups. The number of servers doesn't really matter. How did they transparently replace your HP-UX storage?
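
For comparison, the non-transparent procedure usually looks roughly like this (vgRAP2 used as the example; the map-file path and the group-file minor number are made up, and the minor number must be unique on the host):

   # Before the cutover: save the VG configuration and VGID to a
   # map file (-p previews without removing the VG from lvmtab).
   vgexport -p -s -v -m /tmp/vgRAP2.map /dev/vgRAP2

   # At cutover: export for real, removing the VG from the system.
   vgexport -v -m /tmp/vgRAP2.map /dev/vgRAP2

   # After the new LUNs are visible: recreate the group file with
   # a unique minor number, then import by scanning for the VGID
   # recorded in the map file, and activate.
   mkdir /dev/vgRAP2
   mknod /dev/vgRAP2/group c 64 0x020000
   vgimport -s -v -m /tmp/vgRAP2.map /dev/vgRAP2
   vgchange -a y /dev/vgRAP2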