08-05-2003 06:57 AM
4-port shared storage module with 4 nodes
We are connecting 4 DL380 G3 servers with SA5i+ controllers to a SACS with the 4-port shared storage module.
We have upgraded to the latest firmware: 1.70 for the SACS and 2.38 for the SA5i+.
We are using SSP with only one logical drive, and it is visible only to one ProLiant DL380 G3.
First: we boot the SACS and wait for startup to complete.
Second: we boot the two DL380s connected to the A1 and B1 ports. They work properly.
Third: we start the DL380 connected to the B2 port. It also works well.
Fourth: when we try to start the fourth DL380, connected to A2, it hangs at the 5i BIOS messages. It shows a lot of -\|/ spinner symbols and then hangs. The DL380 connected to A1 also stops working with the SACS.
We have run a lot of tests and it is always the same: the A1/A2 channel does not work with two servers.
We have also tried with an SA532, but nothing new.
Any ideas?
08-05-2003 09:31 AM
Re: 4-port shared storage module with 4 nodes
If you are doing one total LD that is being accessed by all 4 servers, then that is your problem. Each server must have its own dedicated LD - so you need at least 4 LDs. Then you use SSP to assign access to only one server. You cannot have all 4 servers accessing the same LD.
Thanks,
Doug
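Doug's rule above can be sketched as a small validation check. This is a hypothetical model for illustration, not an HP tool: it just encodes "without cluster software, each logical drive's SSP mask must contain exactly one server".

```python
# Hypothetical model of the SSP rule described above: without clustering,
# each logical drive (LD) must be presented to exactly one server.

def validate_ssp(ssp_map):
    """ssp_map: dict of LD name -> set of server names allowed to see it.
    Returns a list of problems; an empty list means the layout follows the rule."""
    problems = []
    for ld, servers in ssp_map.items():
        if len(servers) == 0:
            problems.append(f"{ld}: presented to no server")
        elif len(servers) > 1:
            problems.append(f"{ld}: shared by {sorted(servers)} without clustering")
    return problems

# One dedicated LD per node attached to the 4-port module (node names invented):
dedicated = {f"LD{i}": {node} for i, node in
             enumerate(["node-A1", "node-A2", "node-B1", "node-B2"], start=1)}
print(validate_ssp(dedicated))  # []

# A single LD exposed to all four servers violates the rule:
shared = {"LD1": {"node-A1", "node-A2", "node-B1", "node-B2"}}
print(validate_ssp(shared))
```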
08-05-2003 10:43 PM
Re: 4-port shared storage module with 4 nodes
08-06-2003 06:21 AM
Re: 4-port shared storage module with 4 nodes
The only other thing I can think of, which you may have already tried, is to change the cabling from the servers to the SACS - swap in different servers than you have now. That would rule out a bad 5i. I would also recheck your SSP settings - if the A2 server has accidentally been given access to the LUN via SSP, that could cause problems as well. Another thing you could try would be to create 3 more LUNs in the SACS, and then give each of the other 3 servers exclusive access to one of them (via SSP). I don't know, but maybe the A2 server wants to see a LUN on the SACS.
Thanks,
Doug
08-11-2003 06:48 PM
Re: 4-port shared storage module with 4 nodes
08-18-2003 07:15 AM
Re: 4-port shared storage module with 4 nodes
1) 2 servers with multi-path SW (standalone or clustered)
or
2) 4 servers single path (non-clustered)
Thanks,
Doug
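The two supported layouts Doug lists can be captured in a few lines. This is an illustrative sketch of the rule as stated in this thread, not HP documentation:

```python
# Hypothetical check of the two supported 4-port module layouts listed above:
#   1) 2 servers with multi-path software (standalone or clustered)
#   2) 4 servers, single path, non-clustered

def is_supported(servers, multipath, clustered):
    """servers: number of nodes attached to the 4-port shared storage module."""
    if servers == 2 and multipath:
        return True   # layout 1 (clustered or standalone)
    if servers == 4 and not multipath and not clustered:
        return True   # layout 2
    return False

print(is_supported(2, multipath=True, clustered=True))    # True
print(is_supported(4, multipath=False, clustered=False))  # True
print(is_supported(4, multipath=False, clustered=True))   # False: 4-node cluster
```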
08-25-2003 02:21 PM
Re: 4-port shared storage module with 4 nodes
I'm curious about the 4-port limitations ...
Do you know what hardware limitations disallow configurations such as:
a) Two clusters of two nodes.
b) A two node cluster and two independent (non-clustered) nodes.
c) A four node cluster.
Best regards,
Juanjo
08-26-2003 11:59 AM
Re: 4-port shared storage module with 4 nodes
As for c), the 4-node cluster, it is "probably" going to work OK, but it is not a tested or supported HP (or Microsoft) configuration. Because all of the nodes are in the same cluster, the bus reset issue does not matter, since being part of the cluster requires them to react to the bus resets. However, the 4-port module was not intended for use with 4-node clusters, and it has not been tested in that environment. HP would recommend that you use an MSA1000 storage box for clusters with more than 2 nodes.
Thanks,
Doug
08-26-2003 01:44 PM
Re: 4-port shared storage module with 4 nodes
And hence the negotiation process (adapter ID change, ...) does not depend on the configuration, which could otherwise be inferred from the SSP settings.
Since those limitations are at the OS level, and primarily related to SCSI bus resets, I will investigate whether the clustering software I want to install (RedHat's Cluster Manager) resets the SCSI bus when it fails over. In any case, I know configurations a) and b) are not supported by HP; in fact, according to http://h18004.www1.hp.com/products/quickspecs/11050_div/11050_div.html RedHat's Cluster Manager is not supported at all.
Finally, I think the OS on a non-clustered server should also be able to react to SCSI bus resets, because even when configuring 4 independent servers on a SACS, one of them may generate a SCSI bus reset (due to a defective cable, for example) and those resets will be seen by the 3 remaining servers.
Best regards,
Juanjo
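Juanjo's point about reset propagation can be modeled in a few lines. This is an illustrative sketch (invented node names, not the actual SCSI protocol stack): a reset raised by any initiator on a shared bus is observed by every other device on that bus, so each OS must tolerate resets it did not cause.

```python
# Illustrative model of the point above: a SCSI bus reset raised by any
# initiator on a shared bus is seen by every other server on that bus,
# clustered or not.

class SharedBus:
    def __init__(self):
        self.servers = []

    def attach(self, name):
        self.servers.append(name)

    def reset(self, source):
        # Every server except the one that raised the reset observes it.
        return [s for s in self.servers if s != source]

bus = SharedBus()
for node in ["node-A1", "node-A2", "node-B1", "node-B2"]:
    bus.attach(node)

# A defective cable on node-B2 triggers a reset; the other 3 nodes see it.
print(bus.reset("node-B2"))  # ['node-A1', 'node-A2', 'node-B1']
```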