iSCSI and reliability best practices
04-08-2009 10:08 AM
I have an HP MSA 2000i SAN (iSCSI), and since I'm just getting started with SANs, I'm seeking advice. We are currently using HP ProLiant 380 G5 servers with an MSA 70 (SAS, direct-attached storage) for our production servers.
At first, I thought I'd use the SAN only for non-critical operations, like temporary additional storage for disk-based backups and snapshots, so I planned on using my existing network equipment to connect it. Now I'd like to use it as storage for some blade servers that I want to use for virtualisation, so reliability is more important.
I'd just like to evaluate the risk, to make sure I plan correctly. The SAN has dual controllers, each dual-port, with a mix of RAID 10 and RAID 50 arrays, so it is pretty solid. However, if I use only one switch to connect the initiators to this SAN, is that riskier than relying on the non-redundant RAID controllers that are in my HP servers?
If multipathing is recommended, I guess that means 2 switches, additional NICs, and also another interconnect in the blade enclosure?
If you need more information, just let me know. :)
Thanks in advance,
Ugo
04-08-2009 11:58 PM
Solution
Any single point of failure is bad, and I think you would be at greater risk with only one switch. If I understand your setup correctly: if you lose a RAID controller in one server, you lose only that server; if you lose the switch, you lose every server that accesses the SAN through it.
Best practice for HA is always to have at least two paths from server to storage; obviously there can be a considerable cost to this. It is a commercial decision, but if it is production data then normally it's the only decision.
Hope this helps,
D
04-09-2009 12:45 AM
Re: iSCSI and reliability best practices
Use 2 HBAs and have them connected to 2 switches.
04-09-2009 02:48 AM
Re: iSCSI and reliability best practices
Regarding switches, what if I buy 2 stackable switches like the ProCurve 2900 and use both for the data and storage networks (using VLANs)? If I stack them, I'll be able to do bonding (NIC teaming) on my servers so that one NIC of the team is on one switch and the other NIC on the other. But in such a stacked config, what happens if one of the switches fails?
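For the NIC-teaming side of this, Linux bonding in 802.3ad (LACP) mode is what would ride on such a cross-switch trunk. A minimal RHEL-style sketch; the interface names, addresses, and bonding options below are illustrative assumptions to adapt, not tested values:

```
# /etc/sysconfig/network-scripts/ifcfg-bond0 -- names/addresses are examples only
DEVICE=bond0
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.10.10
NETMASK=255.255.255.0
BONDING_OPTS="mode=802.3ad miimon=100"

# /etc/sysconfig/network-scripts/ifcfg-eth0  (repeat for eth1)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
ONBOOT=yes
```

Note that 802.3ad requires all member ports to terminate on what the server sees as one logical switch, which is exactly why the stacking question matters; across fully independent (non-stacked) switches, active-backup bonding (mode=1) works instead, at the cost of aggregate bandwidth.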
04-09-2009 03:11 AM
Re: iSCSI and reliability best practices
Yes, you will need two switches and two Virtual Connect modules; they give you two redundant network connections. As for the stackable switches, it is always better to use independent pieces of hardware that are fully redundant.
J.
04-09-2009 03:26 AM
Re: iSCSI and reliability best practices
Yes, you are right in the first scenario.
Regarding ProCurve switches, I don't know.
If your SAN environment is growing, you will need to move to a core-edge topology.
Check whether your infrastructure provides ROI.
04-09-2009 03:37 AM
Re: iSCSI and reliability best practices
The real access point (for the operating system) is still the on-board LAN or mezzanine LAN adapter _on_ the blade.
If you are using Windows and the Microsoft iSCSI initiator, you don't even need stackable switches:
- use the initiator's MPIO (MultiPath I/O) feature with load-balancing
-- no problem with MAC addresses floating around
"Core Edge Topology"? Isn't that a little overkill?
I thought we are talking about _one_ small workgroup array!
04-09-2009 04:12 AM
Re: iSCSI and reliability best practices
I'll be using the Linux initiator with multipath.
I know stackable switches are not needed, but I'd like to use my switches for both storage and data traffic. Stacking them lets me create an LACP trunk across ports on different switches. Maybe I should post this in the networking forum...?
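For the Linux initiator mentioned above, the dm-multipath side can be sketched as a minimal /etc/multipath.conf; this assumes the device-mapper-multipath and open-iscsi packages, and the vendor/product strings and tuning values are illustrative assumptions to verify against the array's documentation, not authoritative settings:

```
# /etc/multipath.conf -- minimal sketch; vendor/product strings are assumptions,
# check them against the output of `multipath -ll` for your MSA 2000i
defaults {
    user_friendly_names yes
}
devices {
    device {
        vendor                "HP"
        product               "MSA2*"
        path_grouping_policy  group_by_prio
        failback              immediate
        no_path_retry         18
    }
}
```

With each NIC logged in to a port on each controller (e.g. `iscsiadm -m node -L all`), `multipath -ll` should list two paths per LUN, so losing one switch degrades I/O to the surviving path instead of taking the LUN offline.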