HPE EVA Storage

iSCSI and reliability best practices

SOLVED
Ugo Bellavance (ATQ)
Frequent Advisor

iSCSI and reliability best practices

Hi,

I have an HP MSA 2000i SAN (iSCSI), and since I'm just starting with SANs, I'm seeking advice. We are currently using HP ProLiant DL380 G5 servers with MSA 70 enclosures (SAS, direct-attached storage) for our production servers.

At first, I thought I'd use the SAN only for non-critical operations, like temporary additional storage for disk-based backups and snapshots, so I planned on using my existing network equipment to connect it. Now I'd like to use it as storage for some blade servers that I want to use for virtualisation, so reliability is more important.

I'd just like to evaluate the risk, to make sure I plan correctly. The SAN has dual controllers, each dual-ported, with a mix of RAID 10 and RAID 50 arrays, so it is pretty solid. However, if I use only one switch to connect the initiators to the SAN, is that riskier than relying on the non-redundant RAID controllers in my existing HP servers?

If multipathing is recommended, I guess that means two switches and additional NICs, but also another interconnect module in the blade enclosure?

If you need more information, just let me know. :)

Thanks in advance,

Ugo
9 REPLIES
DJMC
Regular Advisor
Solution

Re: iSCSI and reliability best practices

Hi Ugo,
Any single point of failure is bad. I think you would be at greater risk with only one switch. If I understand you correctly, if you lose a RAID controller in one server, you lose only that server; if you lose the switch, you lose every server that accesses the SAN through it.
Best practice for HA is always to have at least two paths from server to storage. Obviously there can be a considerable cost to this; it is a commercial decision, but if it is production data then normally it's the only decision.
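For example, with the Linux open-iscsi initiator, two paths to the array might look like the sketch below (the portal IPs and target IQN are made-up placeholders, not your array's real values):

    # discover the target through each controller's iSCSI port
    iscsiadm -m discovery -t sendtargets -p 10.0.1.10
    iscsiadm -m discovery -t sendtargets -p 10.0.2.10

    # log in through both portals, giving the host two independent paths
    iscsiadm -m node -T iqn.1986-03.com.hp:storage.example -p 10.0.1.10 --login
    iscsiadm -m node -T iqn.1986-03.com.hp:storage.example -p 10.0.2.10 --login

With each portal cabled to a different switch, either switch can fail and one path survives.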

Hope this helps,
D
Sivakumar MJ._1
Respected Contributor

Re: iSCSI and reliability best practices

It is always recommended to use two switches for redundancy.

Use two HBAs and connect one to each switch.



Ugo Bellavance (ATQ)
Frequent Advisor

Re: iSCSI and reliability best practices

So since I'm using a c3000 blade enclosure with BL495c blades and a Virtual Connect interconnect, I guess the interconnect is the equivalent of the HBAs in this case, so I'd need two switches and two Virtual Connect modules. Does that make sense?

Regarding switches, what if I buy two stackable switches like the ProCurve 2900 and use them for both the data and the storage network (using VLANs)? If I stack them, I'll be able to do bonding (NIC teaming) on my servers so that one NIC of the team is on one switch and the other NIC is on the other. But in such a stacked configuration, what happens if one of the switches fails?
Jozef_Novak
Respected Contributor

Re: iSCSI and reliability best practices

Hello,

yes, you will need two switches and two Virtual Connect modules. They will give you two redundant network connections. As for the stackable switches, it is always better to use independent, fully redundant pieces of hardware.

J.
Sivakumar MJ._1
Respected Contributor

Re: iSCSI and reliability best practices

Hi Ugo,

Yes, you are right in the first scenario.

Regarding ProCurve switches, I don't have any experience with them.

If your SAN environment grows, you will eventually need to move to a core-edge topology.

Check whether the infrastructure investment provides a good ROI.

Uwe Zessin
Honored Contributor

Re: iSCSI and reliability best practices

No, the Virtual Connect interconnect is NOT the equivalent of an HBA - it is just a mechanism that lets you build your own LAN topology and/or do a kind of 'identity management' (i.e. virtualize the MAC addresses) for a blade.
The real access point (for the operating system) is still the on-board LAN or mezzanine LAN adapter _on_ the blade.

If you are using Windows and the Microsoft iSCSI initiator, you don't even need stackable switches:
- use the initiator's MPIO (MultiPath I/O) feature with load-balancing
-- no problem with MAC addresses floating around
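If the MPIO feature's mpclaim command-line tool is available on your release, claiming the array's disks for the Microsoft DSM might look like this (a rough sketch; verify the exact syntax against your Windows Server documentation):

    rem claim all MPIO-capable devices for the Microsoft DSM (this reboots the host)
    mpclaim -r -i -a ""

    rem after the reboot, list the claimed multipath disks
    mpclaim -s -d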



"Core Edge Topology"? Isn't that a little overkill?
I thought we are talking about _one_ small workgroup array!
.
Ugo Bellavance (ATQ)
Frequent Advisor

Re: iSCSI and reliability best practices

The blades that will be using the SAN are BL495c G5s, which have dual Flex-10 NICs. So I guess that to make sure there is no single point of failure, I should have two Virtual Connect modules, one connected to each of the Flex-10 NICs?

I'll be using the Linux initiator with multipath.
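For what it's worth, a minimal dm-multipath setup might look like the sketch below (illustrative settings only; check HP's MSA documentation for the recommended device-specific parameters):

    # /etc/multipath.conf - minimal illustrative example
    defaults {
        user_friendly_names yes
        path_grouping_policy failover
    }

    # after logging the target in through both portals:
    multipath -ll    # should show one LUN with two paths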

I know stackable switches are not needed, but I'd like to use my switches for both storage and data traffic. Stacking the switches would let me create an LACP trunk across ports on different switches. Maybe I should post this in the networking forum...?
Ugo Bellavance (ATQ)
Frequent Advisor

Re: iSCSI and reliability best practices

Answering myself: you can't make an LACP trunk spanning two ProCurve 2900s, since they're only stackable management-wise (what Cisco calls a cluster). The backplane is not stacked, so I set up an NFT team (failover only, no load balancing) instead, which leaves me with only 1 Gb/s available.
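For reference, the Linux equivalent of an NFT team is the bonding driver in active-backup mode, which survives a switch failure but only ever uses one link at a time. A rough sketch in the RHEL-era config style (interface names are assumptions):

    # /etc/modprobe.conf - one bond in active-backup (mode 1), link checked every 100 ms
    alias bond0 bonding
    options bond0 mode=1 miimon=100

    # enslave the two NICs, one cabled to each switch
    ifenslave bond0 eth0 eth1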
Ugo Bellavance (ATQ)
Frequent Advisor

Re: iSCSI and reliability best practices

Another thing I had to do: order a c7000 and ditch the c3000, as the c3000 is not redundant. More than 10K down the drain :(. I thought I could place the order without going through HP pre-sales, but it looks like that is not the case.