10-11-2006 03:27 AM
stale mirror problem after snapclone
Hi!
We have two EVA 5000 storage arrays. We LVM-mirror the LUNs across those arrays from our HP-UX hosts, but we have a snapshot license for only one of the arrays. I made snapclones of two LUNs on the licensed array; those LUNs belong to the same volume group on one of our hosts. I then presented the copies to another host and imported the volume group there. The LUNs hold raw database files. The problem is that 'lvreduce -m 0 -k /dev/vg0#/lvol#' removes the stale mirror on some logical volumes but not on others.
Steps taken:
On exporting host:
block database engine
take snapclone
unblock database engine
vgexport -m original_vg04.map -p -v /dev/vg04
Presented the luns to the importing host
On importing host:
ioscan -fnC disk
spmgr display
insf -e
mkdir /dev/vg04
mknod /dev/vg04/group c 64 0x040000
vgchgid /dev/rdsk/c18t0d5 /dev/rdsk/c18t0d6
vgimport -m original_vg04.map /dev/vg04 /dev/dsk/c18t0d5 /dev/dsk/c18t0d6
vgchange -a y -q n /dev/vg04
lvreduce -k -m 0 /dev/vg04/lvolname - for every lvol (a scripted sketch of this loop follows)
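The loop over the lvols looks roughly like this (a sketch only; it just pulls the lvol names out of vgdisplay):
for lv in $(vgdisplay -v /dev/vg04 | awk '/LV Name/ {print $3}')
do
    # drop the mirror copy that sits on the array we could not snapclone
    lvreduce -k -m 0 $lv
done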
After this step some lvols come up syncd while others are still stale:
root@refectus:/opt/informix/9.40.FC1/etc> vgdisplay -v vg04
--- Volume groups ---
VG Name /dev/vg04
VG Write Access read/write
VG Status available
Max LV 255
Cur LV 35
Open LV 35
Max PV 16
Cur PV 4
Act PV 2
Max PE per PV 3199
VGDA 4
PE Size (Mbytes) 32
Total PE 6398
Alloc PE 4880
Free PE 1518
Total PVG 0
Total Spare PVs 0
Total Spare PVs in use 0
--- Logical volumes ---
LV Name /dev/vg04/rootdbs
LV Status available/syncd
LV Size (Mbytes) 512
Current LE 16
Allocated PE 16
Used PV 1
LV Name /dev/vg04/logdbs
LV Status available/syncd
LV Size (Mbytes) 2048
Current LE 64
Allocated PE 64
Used PV 1
LV Name /dev/vg04/tmpdbs
LV Status available/syncd
LV Size (Mbytes) 6144
Current LE 192
Allocated PE 192
Used PV 1
LV Name /dev/vg04/juro
LV Status available/syncd
LV Size (Mbytes) 4096
Current LE 128
Allocated PE 128
Used PV 1
LV Name /dev/vg04/jonk
LV Status available/syncd
LV Size (Mbytes) 4096
Current LE 128
Allocated PE 128
Used PV 1
LV Name /dev/vg04/jkk
LV Status available/syncd
LV Size (Mbytes) 4096
Current LE 128
Allocated PE 128
Used PV 1
LV Name /dev/vg04/jneu
LV Status available/syncd
LV Size (Mbytes) 8192
Current LE 256
Allocated PE 256
Used PV 1
LV Name /dev/vg04/jplk
LV Status available/syncd
LV Size (Mbytes) 4096
Current LE 128
Allocated PE 128
Used PV 1
LV Name /dev/vg04/jmedm
LV Status available/syncd
LV Size (Mbytes) 4096
Current LE 128
Allocated PE 128
Used PV 1
LV Name /dev/vg04/jortm
LV Status available/syncd
LV Size (Mbytes) 4096
Current LE 128
Allocated PE 128
Used PV 1
LV Name /dev/vg04/jort
LV Status available/syncd
LV Size (Mbytes) 4096
Current LE 128
Allocated PE 128
Used PV 1
LV Name /dev/vg04/jger
LV Status available/stale
LV Size (Mbytes) 10240
Current LE 320
Allocated PE 640
Used PV 1
LV Name /dev/vg04/jlual
LV Status available/syncd
LV Size (Mbytes) 4096
Current LE 128
Allocated PE 128
Used PV 1
LV Name /dev/vg04/jmed
LV Status available/syncd
LV Size (Mbytes) 8192
Current LE 256
Allocated PE 256
Used PV 1
LV Name /dev/vg04/domain
LV Status available/syncd
LV Size (Mbytes) 2048
Current LE 64
Allocated PE 64
Used PV 1
LV Name /dev/vg04/juroblob
LV Status available/syncd
LV Size (Mbytes) 4096
Current LE 128
Allocated PE 128
Used PV 1
LV Name /dev/vg04/jonkblob
LV Status available/syncd
LV Size (Mbytes) 4096
Current LE 128
Allocated PE 128
Used PV 1
LV Name /dev/vg04/jkkblob
LV Status available/syncd
LV Size (Mbytes) 4096
Current LE 128
Allocated PE 128
Used PV 1
LV Name /dev/vg04/jneublob
LV Status available/syncd
LV Size (Mbytes) 7168
Current LE 224
Allocated PE 224
Used PV 1
LV Name /dev/vg04/jplkblob
LV Status available/syncd
LV Size (Mbytes) 4096
Current LE 128
Allocated PE 128
Used PV 1
LV Name /dev/vg04/jmedmblob
LV Status available/syncd
LV Size (Mbytes) 6144
Current LE 192
Allocated PE 192
Used PV 1
LV Name /dev/vg04/jortmblob
LV Status available/stale
LV Size (Mbytes) 4096
Current LE 128
Allocated PE 256
Used PV 1
LV Name /dev/vg04/jortblob
LV Status available/syncd
LV Size (Mbytes) 8192
Current LE 256
Allocated PE 256
Used PV 1
LV Name /dev/vg04/jmedblob
LV Status available/stale
LV Size (Mbytes) 4096
Current LE 128
Allocated PE 256
Used PV 1
LV Name /dev/vg04/jgerblob
LV Status available/stale
LV Size (Mbytes) 6144
Current LE 192
Allocated PE 384
Used PV 1
LV Name /dev/vg04/jlualblob
LV Status available/stale
LV Size (Mbytes) 4096
Current LE 128
Allocated PE 256
Used PV 1
LV Name /dev/vg04/rootdbs_2
LV Status available/stale
LV Size (Mbytes) 1024
Current LE 32
Allocated PE 64
Used PV 1
LV Name /dev/vg04/jreuma
LV Status available/syncd
LV Size (Mbytes) 4096
Current LE 128
Allocated PE 128
Used PV 1
LV Name /dev/vg04/jort_2
LV Status available/stale
LV Size (Mbytes) 3072
Current LE 96
Allocated PE 192
Used PV 2
LV Name /dev/vg04/jreumablob
LV Status available/stale
LV Size (Mbytes) 4096
Current LE 128
Allocated PE 256
Used PV 1
LV Name /dev/vg04/jneu_2
LV Status available/stale
LV Size (Mbytes) 2048
Current LE 64
Allocated PE 128
Used PV 1
LV Name /dev/vg04/jmed_2
LV Status available/stale
LV Size (Mbytes) 3072
Current LE 96
Allocated PE 192
Used PV 1
LV Name /dev/vg04/jonkblob_2
LV Status available/stale
LV Size (Mbytes) 4096
Current LE 128
Allocated PE 256
Used PV 1
LV Name /dev/vg04/jber_psyk_os
LV Status available/stale
LV Size (Mbytes) 4096
Current LE 128
Allocated PE 256
Used PV 1
LV Name /dev/vg04/jber_psyk_osblob
LV Status available/stale
LV Size (Mbytes) 4096
Current LE 128
Allocated PE 256
Used PV 1
--- Physical volumes ---
PV Name /dev/dsk/c18t0d5
PV Status available
Total PE 3199
Free PE 1518
Autoswitch On
PV Name /dev/dsk/c18t0d6
PV Status available
Total PE 3199
Free PE 0
Autoswitch On
Can somebody help me with this?
/attila
10-11-2006 04:27 AM
Re: stale mirror problem after snapclone
Look in your database log and check that the disk I/O (the writes) has stopped.
Then do a:
vgdisplay -v vg04 | grep stale
If any logical volumes are stale, run 'vgsync vg04'.
Wait for the vgsync to finish and check that nothing is stale before you snapclone.
I don't quite understand how you expect to vgexport a volume group of mirrored logical volumes to a single set of snapcloned LUNs and have it work. It sounds dodgy to me.
I know you don't have to mirror the temp dbspace, but in your case I would do it anyway.
Check that you are vgimporting the correct disk devices, not some of the live ones, because that would cause all kinds of horrible data-integrity problems.
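Something like this before the snapclone, as a sketch (vg04 is just your group name from above):
# resync any stale mirror copies, then confirm nothing is left stale
if vgdisplay -v vg04 | grep -q stale
then
    vgsync vg04
fi
vgdisplay -v vg04 | grep stale    # should print nothing once everything is in sync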
10-11-2006 04:49 AM
Re: stale mirror problem after snapclone
10-11-2006 04:51 AM
Re: stale mirror problem after snapclone
10-11-2006 08:03 PM
Re: stale mirror problem after snapclone
root@refectus:~> lvdisplay -v /dev/vg04/jger |head -30
--- Logical volumes ---
LV Name /dev/vg04/jger
VG Name /dev/vg04
LV Permission read/write
LV Status available/stale
Mirror copies 1
Consistency Recovery MWC
Schedule parallel
LV Size (Mbytes) 10240
Current LE 320
Allocated PE 640
Stripes 0
Stripe Size (Kbytes) 0
Bad block on
Allocation strict
IO Timeout (Seconds) default
--- Distribution of logical volume ---
PV Name LE on PV PE on PV
/dev/dsk/c18t0d5 320 320
--- Logical extents ---
LE PV1 PE1 Status 1 PV2 PE2 Status 2
00000 ??? 00000 stale /dev/dsk/c18t0d5 00000 current
00001 ??? 00001 stale /dev/dsk/c18t0d5 00001 current
00002 ??? 00002 stale /dev/dsk/c18t0d5 00002 current
00003 ??? 00003 stale /dev/dsk/c18t0d5 00003 current
00004 ??? 00004 stale /dev/dsk/c18t0d5 00004 current
00005 ??? 00005 stale /dev/dsk/c18t0d5 00005 current
00006 ??? 00006 stale /dev/dsk/c18t0d5 00006 current
10-11-2006 08:10 PM
Re: stale mirror problem after snapclone
root@refectus:~> lvreduce -k -m 0 /dev/vg04/jger
Physical extents on remaining physical volumes are stale or
Remaining physical volumes are not responding.
lvreduce: The LVM device driver failed to reduce mirrors on
the logical volume "/dev/vg04/jger".
10-11-2006 08:24 PM
Re: stale mirror problem after snapclone
10-13-2006 02:57 AM
Re: stale mirror problem after snapclone
Run 'lvdisplay -v -k /dev/vg0#/lvol#' and note the PV key of the stale mirror copy, for example:
LE PV1 PE1 Status 1 PV2 PE2 Status 2
00000 1 01474 stale 4 00000 current
00001 1 01475 stale 4 00001 current
Here the stale PV key is 1.
Then run 'lvreduce -m 0 -k /dev/vg0#/lvol# PVkey'.
/attila