Shadowing with IP Cluster
Operating System - OpenVMS
05-10-2011 10:15 AM
Has anyone tried enabling disk shadowing with Clustering over IP under VMS 8.4?
So far our experience is that it is way too slow to be functional.
3 REPLIES
05-10-2011 11:04 AM
Re: Shadowing with IP Cluster
You might generate more useful discussion if you mention the relevant hardware involved and other useful information (are you using a separate network for IPCI? jumbo frames enabled? MSCP-serving disks? etc.).
Are your complaints with the speed of steady-state operation, or with copies/merges?
In general, for direct-attached storage on every cluster member, there will be little cluster traffic generated by HBVS for steady-state use, other than lock manager stuff.
As I've read somewhere before, terse questions beget terse answers.
-- Rob
05-10-2011 11:17 AM
Re: Shadowing with IP Cluster
Rob:
Actually, I was just looking for someone who has tried it and wanted a general response.
05-10-2011 10:23 PM
Solution
Think about what's happening here.
HBVS does synchronous writes, i.e. all writes must complete before the IO operation returns completion to the user.
The limit to maximum achievable IO rate is IO latency.
Take a local EVA on a direct path from an rx6600. You can typically achieve a raw disc write latency of just under 1 millisecond - that's around 1000-1500 IOs/second, maximum, to that disc.
Add HBVS to that with multiple local direct-path EVAs (say 3-way) and, given how HBVS works, you'll probably achieve around 1.5 milliseconds at best, say 2 milliseconds. That's around 500 IOs/second, maximum, to that DSA device.
Now pull the EVAs apart to, say, 100 miles of separation and add the distance latency - it's easy to drop to around 100 IOs/sec maximum, even with direct-path fibre.
MSCP serving over IPCI is not direct path. You have the extra overhead of MSCP transactions, the extra overhead of the IP stack, and the fact that an IP/Ethernet network behaves completely differently to a Fibre Channel direct path. It's easy to double or triple the IO latency.
So it's really easy to end up with quite a small maximum achievable IO write rate. Sometimes that's too small for your applications to cope with.
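To put rough numbers on that, here is a minimal back-of-envelope sketch (Python, using the illustrative latencies above rather than measured figures; the exact millisecond values are assumptions) of how per-write latency caps the single-stream synchronous write rate:

```python
# Back-of-envelope sketch only: synchronous HBVS writes mean one write must
# complete before the next is issued, so the single-stream ceiling is simply
# 1 / (total per-write latency). The latency values below are illustrative.

def max_write_iops(latency_ms: float) -> float:
    """Upper bound on synchronous write IOs per second for a given per-write latency."""
    return 1000.0 / latency_ms

print(round(max_write_iops(1.0)))   # local EVA, direct path: ~1 ms  -> ~1000 IO/s
print(round(max_write_iops(2.0)))   # 3-way HBVS, local direct paths: ~2 ms -> ~500 IO/s
print(round(max_write_iops(10.0)))  # ~100 miles of separation: the ~100 IO/s figure
                                    # implies roughly 10 ms once round trips are included
print(round(max_write_iops(25.0)))  # MSCP over IPCI can double or triple that again
```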
MSCP serving over IPCI will work, but, as you've found, it will have worse IO write latency than direct-path Fibre Channel.
More overhead = worse latency = what you think of as poor performance under your specific circumstances relative to your expectations.
Now think how that affects normal operations (which are generally small-block-count individual IO writes) and shadow copy/merge operations (which sensibly try to group up IOs into large-block-count IO writes), and the effect that has on both latency and throughput. Latency governs IO response and thus IO rate; bandwidth (and things like transmit window size in the network) governs IO throughput.
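A second rough sketch of that grouping effect (the block counts below are assumptions for illustration, not anything the shadowing software guarantees): at a fixed, latency-bound IO rate, throughput scales with how many blocks each IO carries, which is why copy/merge deliberately issues large IOs.

```python
# Illustrative only: at a latency-bound IO rate, data throughput depends on
# how many blocks each IO carries. Block counts below are assumed examples.

BLOCK_BYTES = 512  # OpenVMS disk block size

def throughput_mb_per_s(latency_ms: float, blocks_per_io: int) -> float:
    """Single-stream MB/s when each IO must complete before the next is issued."""
    ios_per_sec = 1000.0 / latency_ms
    return ios_per_sec * blocks_per_io * BLOCK_BYTES / 1e6

# Typical application write: a few blocks per IO.
print(throughput_mb_per_s(latency_ms=10.0, blocks_per_io=8))    # ~0.4 MB/s

# Shadow copy/merge grouping many blocks into each IO.
print(throughput_mb_per_s(latency_ms=12.0, blocks_per_io=127))  # ~5.4 MB/s
```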
This stuff is complex under the hood with all the different effects at the different levels of the assortment of protocol stacks and drivers - a lot more complex than most people consider.
The laws of physics are inconvenient, but real. That's why testing in your specific circumstances matters and it's why good design matters.
Cheers, Colin (http://www.xdelta.co.uk).
Entia non sunt multiplicanda praeter necessitatem: entities should not be multiplied beyond necessity (Occam's razor).