MSCP Disk Sharing
12-09-2019 12:29 PM
I am migrating a physical cluster (VMS 6.2) to a virtual cluster. All of my data currently resides on HSJ-served disks, and I would like to migrate it via MSCP. Currently, only one of my nodes is acting as an MSCP server.
All five nodes have MSCP_SERVE_ALL=1 and MSCP_LOAD=1. My HSJs have ALLOCLASS=1. One of my five nodes also has ALLOCLASS=1; the remaining nodes all have ALLOCLASS=0. The node with ALLOCLASS=1 is MSCP serving all of the disks to the NI members. None of the other nodes are MSCP serving any disks.
What is the proper setting to allow MSCP serving from all nodes? Should all of my physical nodes have ALLOCLASS=1? Is this safe? If they begin MSCP serving, will I be able to migrate data through multiple MSCP paths concurrently?
Any help is much appreciated!
Craig
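As context for the question above, the parameters in question can be inspected on each node - a sketch, with prompts and output abbreviated:

```dcl
$ ! Display the MSCP-related SYSGEN parameters on this node
$ MCR SYSGEN
SYSGEN> SHOW MSCP_LOAD
SYSGEN> SHOW MSCP_SERVE_ALL
SYSGEN> SHOW ALLOCLASS
SYSGEN> EXIT
$ ! List the disks this node is currently MSCP serving to the cluster
$ SHOW DEVICE/SERVED
```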
Solved!
12-09-2019 10:26 PM
Re: MSCP Disk Sharing
Craig,
Is it correct to assume that only the node with ALLOCLASS=1 is connected to the CI and accesses the disks directly via the HSJs, and that all the other nodes access the HSJ disks through that cluster member (the MSCP server) over the NI (LAN)?
If my assumptions are correct, your network will most likely be the bottleneck during data migration. Trying to enable MSCP-serving on the NI members won't buy you much. If those 'NI members' have no CI connection to the HSJs, they can't serve those HSJ disks.
If you happen to have OpenVMS Volume Shadowing running, you might be able to migrate the data from the HSJ disks by adding the disks from your emulated system(s) as additional members to the shadow sets of your HSJ disks. That way you can migrate the data while the system is running and save a lot of migration downtime.
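A minimal sketch of that approach, assuming Volume Shadowing is licensed; the device names are purely illustrative ($1$DUA1: an HSJ disk already in shadow set DSA1:, $1$DKA100: a disk presented by the emulated system):

```dcl
$ ! Add the emulated system's disk as a new member of the existing
$ ! shadow set; this starts a shadow copy onto the new member.
$ MOUNT/SYSTEM DSA1: /SHADOW=($1$DKA100:) DATA_LABEL
$ ! Watch the copy progress
$ SHOW DEVICE DSA1:
```

Once the copy completes, the HSJ member can be removed from the shadow set and the data lives entirely on the emulated side.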
Volker.
Disclaimer: I've done more than 100 emulations/migrations of hardware VAX and Alpha systems/clusters to emulated systems/clusters (CHARON/VAX and CHARON/AXP) during the last 15 years, so I have a bit of experience in that area.
12-10-2019 05:53 AM
Re: MSCP Disk Sharing
Physical/CI nodes: p1-p4. The physical nodes are ALL physically attached to the HSJs.
[v1] [v2] [v3] [v4]
====network=================
[p1] [p2] [p3] [p4]
===== CI ===============
[hsj-1] [hsj..N]
Currently, for no intentional reason, p1 has ALLOCLASS=1. All other physical nodes have ALLOCLASS=0. p1 is the only node acting as an MSCP server. Would setting all physical nodes to ALLOCLASS=1 be the proper thing to do? And would this allow multiple simultaneous shadow merges?
Thanks!
Craig
12-10-2019 06:16 AM - edited 12-10-2019 06:19 AM
Solution
Craig,
If p1...p4 all have connections to the CI and therefore direct access to the disks on the HSJs, then setting ALLOCLASS=1 on all of them might be a good idea - whatever the reason this wasn't done in the past.
Watch out for local disks attached to p1..p4 with the SAME names: e.g. a p2$DKA400: and a p3$DKA400: are different devices now, but would get the same cluster-wide name $1$DKA400: if both nodes have ALLOCLASS=1. That's no problem if you don't mount those disks, but if you need to mount them, you'll have to change the unit IDs first to make those device names unique in the cluster. AFAIK there is no port allocation class mechanism in V6.2.
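A sketch of setting ALLOCLASS on each physical node, assuming the change can wait for a reboot (adding the parameter to MODPARAMS.DAT and running AUTOGEN is the cleaner long-term route):

```dcl
$ ! ALLOCLASS is not dynamic; the new value takes effect at the next boot
$ MCR SYSGEN
SYSGEN> USE CURRENT
SYSGEN> SET ALLOCLASS 1
SYSGEN> WRITE CURRENT
SYSGEN> EXIT
```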
Also read about and build the PREFER tool (see SYS$EXAMPLES:PREFER.MAR). I'm not sure if there is any kind of load balancing between the MSCP clients (your nodes v1...v4) and the MSCP servers (p1...p4) in V6.2. You could use the PREFER tool to point the remote disk unit on the MSCP clients to the desired MSCP server BEFORE mounting the shadow set members on the emulated systems v1...v4 into the shadow sets. And note that you'll get shadow COPIES, not shadow MERGES, to the members on v1...v4.
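A hedged sketch of building and using PREFER - the authoritative build instructions are in the comments at the top of PREFER.MAR, and the unit and host names here are illustrative:

```dcl
$ ! Assemble and link the example tool
$ MACRO SYS$EXAMPLES:PREFER.MAR /OBJECT=SYS$LOGIN:PREFER.OBJ
$ LINK SYS$LOGIN:PREFER.OBJ /EXECUTABLE=SYS$LOGIN:PREFER.EXE
$ PREFER :== $SYS$LOGIN:PREFER.EXE
$ ! Point the remote unit at MSCP server P2 BEFORE mounting it
$ PREFER $1$DUA10: /HOST=P2
```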
Assuming your network connections on the p1...p4 systems are 100 Mbit/sec, you might get a throughput of up to 10 GB/hr from each pn MSCP server during shadow copies. Is your network going to handle this? If you can do the disk migration with shadowing, just take your time (start with 1 or 2 shadow copies, don't run 10 shadow copies at the same time - and beware of SHADOW_MAX_COPY!). Use MONITOR SCS to view the Kbytes Map Rate between the systems.
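The 10 GB/hr figure is roughly consistent with the wire speed: 100 Mbit/sec is 12.5 MB/sec theoretical, and a sustained MSCP rate of about 3 MB/sec works out to roughly 10 GB/hr. Commands to watch the traffic (a sketch):

```dcl
$ ! Kbytes Map Rate per remote node (run on any cluster member)
$ MONITOR SCS
$ ! MSCP server statistics (run on the serving px nodes)
$ MONITOR MSCP
```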
Good luck,
Volker.
12-10-2019 07:17 AM
Re: MSCP Disk Sharing
p1..p4 have NO local storage, so I plan to set their ALLOCLASS=1 today. I also currently have SHADOW_MAX_COPY=1.
I will investigate the PREFER tool; that sounds perfect. I was going to try to force the path by dismounting the disks from the other nodes.
My network is 10/100 half duplex. I copy about 1.25 GB/hr. Most of this will be done during a user holiday.
MONITOR SCS and MONITOR MSCP seem useful, but I have no idea what numbers would indicate a problem, or what my limits are.
Thanks so much for your expertise!
Craig
12-10-2019 07:40 AM
Re: MSCP Disk Sharing
Craig,
Dismounting disks on the MSCP servers won't 'force' the clients to use particular MSCP servers. Use $ PREFER $1$DUAx:/HOST=pn on the MSCP client before mounting a local disk into the shadow set, where $1$DUAx: is a member served by pn. This should cause the MSCP client to use MSCP server pn when mounting that disk. You might also consider increasing SHADOW_MAX_COPY on your vn nodes to increase the chances that the actual shadow copies run in the SHADOW_SERVER processes on those nodes.
You can use MONITOR SCS to view the KB Mapped Rate from/to the various nodes. If any path in your network between the px and vx systems is 10 Mbit half duplex, you won't get more than about 1,250 KB/sec between any pair of those nodes.
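If you do raise SHADOW_MAX_COPY on the vn nodes, a sketch of the change, assuming SHADOW_MAX_COPY behaves as a dynamic parameter on this version:

```dcl
$ ! Raise the number of concurrent shadow copies this node may perform.
$ ! Also add the value to MODPARAMS.DAT so AUTOGEN preserves it.
$ MCR SYSGEN
SYSGEN> USE ACTIVE
SYSGEN> SET SHADOW_MAX_COPY 4
SYSGEN> WRITE ACTIVE
SYSGEN> EXIT
```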
Volker.