lvsplit
02-06-2004 10:33 AM
e.g.
lvextend -m 1 /dev/vgy/lvol1 /dev/dsk/c1t4d0
lvsplit -s backup /dev/vgy/lvol1
02-06-2004 10:41 AM
Re: lvsplit
Yes. The logical volume must already be mirrored before you can do an lvsplit. It is a good idea to make sure there are no stale extents first by running "lvdisplay -v /dev/vgy/lvol1".
Your command will split the mirror pair, leaving a copy called '/dev/vgy/lvol1backup' split off from the lvol1 mirror.
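For example, the check-and-split sequence might look something like this (the mount point /lvol1_backup is just an illustration):

# make sure the mirror is in sync - nothing should be reported as "stale"
lvdisplay -v /dev/vgy/lvol1 | grep -i stale

# split off the backup copy; this creates /dev/vgy/lvol1backup
lvsplit -s backup /dev/vgy/lvol1

# if the original holds a mounted filesystem, fsck the split copy before mounting it
fsck -y /dev/vgy/rlvol1backup
mount /dev/vgy/lvol1backup /lvol1_backup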
-Sri
02-06-2004 10:45 AM
Re: lvsplit
Here is the scenario:
vgextend /dev/vgx /dev/dsk/newdisk
vgdisplay -v : now shows both /dev/dsk/newdisk and /dev/dsk/olddisk
lvextend -m 1 /dev/vgx/lvol1 /dev/dsk/newdisk
lvsplit -s backup /dev/vgx/lvol1 /dev/dsk/olddisk
Can I do that?
So now I have two copies of the logical volume on two different devices... in case something happens to the new disk I will have the copy on the old one.
02-06-2004 11:13 AM
Re: lvsplit
lvdisplay /dev/vgy/lvol1
If the Allocation says strict or PVG-strict, then the mirror is already on a different physical volume. Take a look at the lvcreate manpage (-s option) for the allocation policy definitions.
You cannot use the lvsplit command to specify a disk for your mirror. lvsplit simply splits a pre-existing mirror-pair.
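If the goal is to get the mirror copy onto a particular disk, that is done when you build the mirror with lvextend, not with lvsplit. Roughly (the disk c2t5d0 is just an example):

# place the mirror copy on a specific physical volume when creating the mirror
lvextend -m 1 /dev/vgx/lvol1 /dev/dsk/c2t5d0

# then split the existing mirror pair - no disk argument here
lvsplit -s backup /dev/vgx/lvol1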
02-06-2004 11:15 AM
Re: lvsplit
Yes and No
Yes, because that is what lvsplit will actually do. Your mirror copies should be across 2 devices anyway. When you perform the lvsplit you will have 2 copies of the data on separate disks. You would of course need to lvmerge and then split again to keep the data current.
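The refresh cycle would be something like this (assuming the split copy is named lvol1backup):

# fold the backup copy back in; its extents are resynced from the current lvol1 data
lvmerge /dev/vgx/lvol1backup /dev/vgx/lvol1

# when the resync is complete, split it off again for a fresh point-in-time copy
lvsplit -s backup /dev/vgx/lvol1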
The whole reason behind the mirroring, though, is that if you lose a disk you still have a copy of the data. Plus you won't have an outage, as the data will still be written to the 'good' disk.
When you replace your 'bad' disk with a new one, you just:
vgcfgrestore /dev/vg0# /dev/dsk/c#t#d0
vgsync /dev/vg0#
And
No, because you don't actually specify on the command line where you want to split your data to; the command only accepts logical volumes as arguments.
HTH
Steve