07-18-2005 01:15 AM
MSA1000 with Marathon Cluster
We have a ProLiant Marathon 6200 cluster:
DL580 CEs and ML530 IOPs. We wish to put the database on MSA1000 SAN-connected external storage.
Is this possible? If so, please advise on guidelines for setting up the MSA LUNs.
Also, will the Marathon 6200 support dual FC paths? Do you require SecurePath on the IOP servers to do this?
Regards
Brian
3 REPLIES
07-18-2005 03:02 AM
Re: MSA1000 with Marathon Cluster
I would contact Marathon to find out if it is officially supported, but I believe that it will work. You do not run SecurePath in this configuration; you don't need it because you already have dual paths by virtue of the two IOPs.
You set up the LUNs on the MSA the same way you would for standard servers. In other words, just make sure you have duplicate LUNs (one for each IOP). Also, you want to use Selective Storage Presentation (SSP) to make sure that none of the LUNs are shared between the IOPs.
So for example, say you wanted two functional LUNs: LUN 1 = 100 GB and LUN 2 = 200 GB. On the MSA you would make 2 x 100 GB LUNs and 2 x 200 GB LUNs. Set SSP so that one of the 100 GB LUNs is accessible only by IOP 1 and the other 100 GB LUN is accessible only by IOP 2, then do the same for the 200 GB pair.
You do not want to have a single LUN that is shared by both IOPs.
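(For illustration only, not from the original thread: roughly what that layout could look like from the MSA1000's serial CLI. The disk lists, connection names and WWPNs below are made-up placeholders, and exact command syntax varies by firmware revision, so check the MSA1000 CLI reference or just do it from the Array Configuration Utility rather than copying this verbatim.)

add unit 1 data="disk101-disk103" raid_level=5
add unit 2 data="disk104-disk106" raid_level=5
add unit 3 data="disk107-disk110" raid_level=5
add unit 4 data="disk111-disk114" raid_level=5
add connection IOP1 wwpn=500508b200000001
add connection IOP2 wwpn=500508b200000002
add acl connection=IOP1 unit=1
add acl connection=IOP1 unit=3
add acl connection=IOP2 unit=2
add acl connection=IOP2 unit=4

Units 1 and 2 would be the two 100 GB copies and units 3 and 4 the two 200 GB copies; the ACL entries give IOP 1 units 1 and 3, and IOP 2 units 2 and 4, so no unit is visible to both IOPs.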
You may be able to get some more info here:
http://h71019.www7.hp.com/ActiveAnswers/cache/150470-0-0-225-121.html
or just do a search for "Marathon Technologies" on the main page of the hp.com web site.
Thanks,
Doug
I am an HPE employee
07-18-2005 03:55 PM
Re: MSA1000 with Marathon Cluster
As I remember, the IOPs in a 6200 need to be connected to independent storage. It may be OK for that independent storage to be separate LUNs, but the key thing is that IOP1 is connected to one set of LUNs and IOP2 is connected to another set. Obviously the LUN sets for each IOP need to match.
If you're using an MSA1000 and a 6200, then you'd HAVE to have dual paths if you want to keep an FT system.
07-19-2005 12:13 AM
Re: MSA1000 with Marathon Cluster
Since you have two IOPs, each connected to storage, you already have "dual" paths to the MSA: one from each IOP. If you ran SecurePath you would actually have four paths instead of two (two from each IOP). Certainly this would provide an additional level of FT, but my concern would be the interaction of SecurePath and the Marathon software. I can't recall if SecurePath is certified with the MSA1000 and Marathon.
Doug
I am an HPE employee