- Migrate data from EMC to N Series
04-03-2011 01:45 AM
Hello guys,
I am planning a migration between two storage systems: from an EMC CLARiiON to an IBM N Series.
I am using HP-UX servers.
Right now I have one path to each storage array.
The easiest way, I think, is to create a mirror between the volumes, then break the mirror and add the second N Series path.
The problem is that I am not familiar with HP-UX systems; for now I only have the two volumes visible. Could you please help and point me to a PDF or the commands to do it?
04-04-2011 06:20 AM
Solution
Hi Raef,
That can be the easiest way. Be aware that NetApp and CX volume size calculations differ (NetApp uses 1024 MB = 1 GB; CX uses 1000 MB = 1 GB). As long as your new LUNs are the same size or slightly larger, you shouldn't see any issues.
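A quick arithmetic check of that difference, using a nominal 100 GB LUN (the 100 GB figure is just an illustration, not from the thread):

```shell
# CLARiiON counts 1 GB as 1000 MB; NetApp counts 1 GB as 1024 MB.
cx_mb=$((100 * 1000))      # a "100 GB" CX LUN:     100000 MB
netapp_mb=$((100 * 1024))  # a "100 GB" NetApp LUN: 102400 MB
echo "CX: ${cx_mb} MB, NetApp: ${netapp_mb} MB"
# The NetApp LUN comes out 2400 MB larger, so creating the target with the
# same nominal GB count already satisfies "same size or slightly larger".
```

If in doubt, size the target LUN in MB or in blocks rather than GB, so both arrays are counting in the same units.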
If you have a MirrorDisk/UX license, go for it. If not, you can use pvmove instead. The only concern with pvmove is that if a host failure occurred at the exact moment a write commits, you could have data loss.
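If you go the pvmove route, a minimal sketch looks like this (the vg01 and ctd device paths below are hypothetical placeholders; substitute your own from the ioscan output):

```shell
# Move all extents from the old CX physical volume to the new NetApp one.
# c4t0d1 (CX) and c6t0d1 (NetApp) are hypothetical device names.
pvmove /dev/dsk/c4t0d1 /dev/dsk/c6t0d1

# Once the move completes, drop the emptied CX disk from the volume group.
vgreduce /dev/vg01 /dev/dsk/c4t0d1
```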
Identify the new LUNs with ioscan -fnC disk and find your LUNs. You can determine what they are (or are not) part of with pvdisplay -v /dev/dsk/ctd | more.
pvcreate the new LUN (pvcreate /dev/rdsk/ctd).
vgextend /dev/vgXX /dev/dsk/ctd, where XX is the VG you want to extend and ctd is the path of the new disk.
lvextend -m 1 /dev/vgXX/lvolY /dev/dsk/ctd, where XX is your VG, Y is the name of the lvol to mirror, and ctd is your NetApp disk.
You can monitor the mirroring with lvdisplay -v: at first one physical-volume column shows blocks and the other doesn't; as the mirror syncs, the second column fills up.
Once done, lvreduce -m 0 /dev/vgXX/lvolY /dev/dsk/ctd, where ctd is the CX disk.
All done.
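The whole mirror-and-break sequence above, collected into one sketch (vg01, lvol1, and the ctd device names are placeholders to replace with your own; the lvextend -m step needs the MirrorDisk/UX license):

```shell
# 1. Discover the new NetApp LUNs and confirm they belong to no VG yet.
ioscan -fnC disk
pvdisplay -v /dev/dsk/c6t0d1        # hypothetical NetApp device path

# 2. Initialize the new LUN as an LVM physical volume (raw device file).
pvcreate /dev/rdsk/c6t0d1

# 3. Add it to the volume group that holds the CX disk.
vgextend /dev/vg01 /dev/dsk/c6t0d1

# 4. Mirror the logical volume onto the NetApp disk.
lvextend -m 1 /dev/vg01/lvol1 /dev/dsk/c6t0d1

# 5. Monitor the sync; the second physical-volume column fills in
#    as extents are copied.
lvdisplay -v /dev/vg01/lvol1

# 6. Once synced, drop the CX side of the mirror, then remove the CX disk.
lvreduce -m 0 /dev/vg01/lvol1 /dev/dsk/c4t0d1   # hypothetical CX path
vgreduce /dev/vg01 /dev/dsk/c4t0d1
```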
A couple of things to note. First, you should use more than one path to your arrays/LUNs, preferably on redundant fabrics.
Second, those paths should be on different HBAs, and you should use single-initiator/single-target zoning on your switches. I mean two things here: first, you shouldn't share CX and NetApp LUNs on the same HBA; second, you should use redundant HBAs to your multiple fabrics.
Third, watch your snap reserve as you are doing the expansions.
You will also be unable to assess any performance differences until the mirror is broken.
The biggest thing about this is that it will be time-consuming, but it will stay online.
Good luck.
Don
04-04-2011 08:41 AM
Re: Migrate data from EMC to N Series
Thanks Don for the reply.
Actually, on each server we have 3 HBAs: 2 dedicated to disks and 1 to tape.
All paths are on Cisco SAN switches.
I have planned to remove one path from the EVA storage and redirect it to the N Series on IBM 2498 switches, so it will be another SAN fabric.
It is impossible to have more than one path to each storage array...
The opinions expressed above are the personal opinions of the authors, not of Hewlett Packard Enterprise. By using this site, you accept the Terms of Use and Rules of Participation.
© Copyright 2024 Hewlett Packard Enterprise Development LP