Community Home > Servers and Operating Systems > Operating Systems > Operating System - HP-UX > Re: LVM & SAN Migration
05-10-2004 03:45 PM
Re: LVM & SAN Migration
> am sure this process takes a huge amount of time just to copy 1TB.
Define huge?!
With a boring single Fibre Channel HBA running at a boring 100MB/sec each way, this would take 3 hours. Is that huge? Yes.
However, with a decent storage setup, multiple HBAs, a reasonably careful copy-stream layout, Secure Path, and/or careful selective presentation, you should be able to do a lot better. We do 350MB/sec per EVA.
Take two and a 1TB copy can be done in 1 hour.
The CX600, fully configured, is rated at 1.3GB/sec. If you can feed it at anything close to that rate, you can copy 1TB in less than 20 minutes. Mind you, you will probably need 10 hours of preparation to set everything up correctly and to verify the settings and base performance.
This is an extremely valuable exercise, notably for newly attached storage. This is probably your one and only chance to try and try again, making sure you have installed and configured what you paid for. I highly recommend taking the time to prove it works as expected before handing over to production. Test-copy live data, dd from /dev/zero, grab a load tool off the web, and see if you can drive it to the expected level, because it might prove tricky to get all connections going at balanced loads.
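As a sanity check on those figures, here is the arithmetic in POSIX shell; the three rates are the ones quoted above (single HBA, one EVA, CX600 peak), everything else is back-of-the-envelope:

```shell
#!/bin/sh
# Time to move 1TB at the throughput figures mentioned above.
TB_MB=1048576                    # 1 TB expressed in MB
for rate in 100 350 1300; do     # MB/s
  secs=$((TB_MB / rate))
  echo "${rate} MB/s: ${secs} sec (~$((secs / 60)) min)"
done
```

At 100MB/s that works out to roughly 3 hours, and at the CX600's rated 1.3GB/s to well under 20 minutes, matching the estimates above.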
>and we are also striping the data (64K) across the disks in that volume group.
Yeah well, if you insist on tiny I/Os then there is minimal hope for really good throughput, is there now. This may work well for a single-stream copy, though, if that is what you want.
Enjoy!
Hein.
05-10-2004 06:20 PM
Re: LVM & SAN Migration
The ideal situation is to use MirrorDisk/UX. Before you begin, make note of the physical volumes that come from the 3930. Hook up both the CLARiiON and the Symmetrix to the SAN at the same time, set up equal-size LUNs, vgextend the existing volume groups onto the new CLARiiON disks, and lvextend (mirror) the file systems onto them. After this is done, wait for the logical volumes to sync up, then lvreduce/vgreduce the logical volumes and physical volumes associated with the 3930.
This is the optimal way to do it (and the same way EMC PS will do it, since SRDF is not supported), because no downtime is required: you are simply mirroring the file systems, which can be done online.
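A minimal sketch of that sequence, assuming a single lvol in /dev/vg01; the device paths (c10t0d0 for the new CLARiiON LUN, c4t0d0 for the old 3930 PV) are hypothetical examples, not taken from this thread:

```shell
pvcreate /dev/rdsk/c10t0d0                        # prepare the new CLARiiON LUN
vgextend /dev/vg01 /dev/dsk/c10t0d0               # add it to the volume group
lvextend -m 1 /dev/vg01/lvol1 /dev/dsk/c10t0d0    # mirror the lvol onto the new disk
# ...wait until vgdisplay -v /dev/vg01 shows no stale extents...
lvreduce -m 0 /dev/vg01/lvol1 /dev/dsk/c4t0d0     # drop the 3930 side of the mirror
vgreduce /dev/vg01 /dev/dsk/c4t0d0                # remove the old PV from the VG
```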
05-10-2004 07:42 PM
Re: LVM & SAN Migration
The better way to migrate is using MirrorDisk/UX, but don't forget that the PV size in your volume group is limited by the max PE per PV parameter, which is set at VG creation time.
In some cases I was able to migrate the data fully with the applications online.
Johan
05-10-2004 09:21 PM
Re: LVM & SAN Migration
The advice I will give you is that while the RAID will be on the CLARiiON, you can still get better performance if you use both controllers for everything; otherwise you will limit the bandwidth, with all the I/O going down just one piece of cable.
I suggest you create two LUNs on the CLARiiON, each half the size of the total LUN, i.e. 34GB.
Link to each of these over a different controller, e.g. one could be c4tXdY and the other c5tMdN.
Put these two LUNs in a new volume group.
Create all your logical volumes striped over these two LUNs. That will spread the I/O evenly over both controllers.
Alternatively, you can be clever about which logical volumes go over which controller to the array, for example rollback segments down one, data down the other.
To copy the data, either use a backup/restore, or just use dd if=oldLV of=newLV bs=8192 count=NNNNNN etc.
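The layout above can be sketched as follows; the device paths, VG name, and sizes are hypothetical examples:

```shell
# Two LUNs, one seen down each controller path.
pvcreate /dev/rdsk/c4t0d1
pvcreate /dev/rdsk/c5t0d2
vgcreate /dev/vgdata /dev/dsk/c4t0d1 /dev/dsk/c5t0d2
# Stripe each lvol over both PVs: -i 2 = two stripes, -I 64 = 64KB stripe size.
lvcreate -i 2 -I 64 -L 4096 -n lvol1 /dev/vgdata
```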
05-11-2004 01:14 AM
Re: LVM & SAN Migration
Here is another option for you:
http://rsync.samba.org/
rsync works great for moving large numbers of files around. We just migrated a few 130GB filesystems in a few hours. It is much faster than rdist, and you have better control over what is happening.
05-11-2004 01:41 AM
Re: LVM & SAN Migration
> To copy the data, either use a backup/restore, or just use dd if=oldLV of=newLV bs=8192 count=NNNNNN etc.
NO. Do not use such a small block size to copy 1TB.
Use bs=1024k or bs=4096k; at the very least use a value larger than 128KB.
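To see why, compare the number of read/write round trips for the same amount of data. This sketch is safe to run anywhere with /dev/zero and /dev/null; it only illustrates the syscall count, not your SAN throughput:

```shell
#!/bin/sh
# Push the same 256MB through dd with a tiny and a large block size.
# bs=8k needs 32768 read/write pairs; bs=1024k needs only 256.
dd if=/dev/zero of=/dev/null bs=8k count=32768 2>/dev/null && small_ok=1
dd if=/dev/zero of=/dev/null bs=1024k count=256 2>/dev/null && large_ok=1
echo "8k blocks: 32768 I/Os; 1024k blocks: 256 I/Os"
```

The per-I/O overhead (syscalls, SCSI command setup) is roughly constant, so 128 times fewer I/Os for the same data is where the time goes.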
hth,
Hein.
05-11-2004 02:08 AM
Re: LVM & SAN Migration
I have a concern with using a single 68GB LUN. Unless you also have PowerPath, you will only be able to access that LUN on a single HBA at a time. You can present that LUN to the host on multiple HBAs, but only one, your primary path, will be used; the others sit there waiting for a failure.
PowerPath will load-balance between them.
Either load-balance your HBAs with PowerPath or, as someone else suggested, use smaller LUNs, stripe your lvols, and use different primary paths (HBAs) for different LUNs.
05-26-2004 02:37 AM
Re: LVM & SAN Migration
Thanks a lot for sharing your ideas and experience.
I would like to give you a clearer picture here.
We are using PowerPath and are also going to use it with the CX-600.
One of the production volume groups:
VG Name /dev/vgdb1
Max LV 255
Cur LV 16
Open LV 16
Max PV 255
Cur PV 8
Act PV 8
Max PE per PV 1078
VGDA 16
PE Size (Mbytes) 8
Total PE 8624
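The Max PE per PV and PE Size figures in the vgdisplay output above fix an upper bound on how large a PV this VG can use:

```shell
#!/bin/sh
# Upper bound on PV size for vgdb1, from the vgdisplay figures above.
MAX_PE=1078    # Max PE per PV
PE_MB=8        # PE Size (Mbytes)
cap_mb=$((MAX_PE * PE_MB))
echo "Largest usable PV in this VG: ${cap_mb} MB (about 8.4 GB)"
```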
As per some of the suggestions above, I understand the procedure as follows:
1. On the CX600 I need to create several LUNs of about 8GB each, because Max PE per PV limits each PV to roughly 8.4GB (1078 PEs at 8MB each).
2. I cannot simply add these LUNs to a logical volume, because the LVs are striped across the 8 disks; that means I cannot extend onto the new LUNs, and LV mirroring is not possible either.
How can we get around this problem?
Also, I have heard that CLARiiONs perform best when configured with a small number of large LUNs, but here we are talking about creating many small LUNs.
Any thoughts on the above 2 problems?
Thanks & regards,
Nikee
05-26-2004 12:27 PM
Solution
You obviously are confused.
The way to set up disk space is to:
a) Create your LUN(s) on the array.
b) Incorporate these LUNs into your VG.
c) Split this single VG into lvols as you wish. With lvcreate you can make an lvol that is mirrored across 2 or more LUNs (-m 1). You can stripe your data (again, at the LVM level) across all the LUNs that you incorporated. You can also mirror striped lvols.
e.g.
lvcreate -s g -D y -m 1 -L 60000 /dev/vg01
( search for lvcreate on this forum !)
To your last questions. ( in order):
1) Physical extents (PEs) are just blocks of data.
You can make the LUNs big. E.g. if you have 2 LUNs of 320GB each, you can make 4 lvols of 64GB mirrored (across these 2 LUNs). If you set the PE size to 64MB, each such lvol will use 1024 PEs from each LUN. So the PE is just the "block" size.
When creating the VG, do vgcreate -s 64 /dev/vg01 /dev/dsk/.. /dev/dsk/.. to set the PE size to 64MB.
2) Add the LUNs to the VG, then divide the VG into your LVs. These LVs can be striped, as explained above. If you have 10 LUNs in the VG, you can stripe the lvols (LVs) across the 10 LUNs. But the array is already doing striping for you!
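Checking the extent arithmetic in that example (one 64GB lvol, 64MB physical extents):

```shell
#!/bin/sh
# PE count for one 64GB lvol with 64MB physical extents.
LVOL_MB=$((64 * 1024))   # 64 GB logical volume, in MB
PE_MB=64                 # extent size chosen with: vgcreate -s 64 ...
pe_per_lun=$((LVOL_MB / PE_MB))
echo "Each mirror side uses ${pe_per_lun} PEs"
```

So a mirrored 64GB lvol consumes 1024 extents on each of the two LUNs, comfortably inside the 5120 extents a 320GB LUN provides at that PE size.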
I hope that makes it clear.
Isaac
Please assign points.