
single initiator -- single target zoning (EVA)

Hi,

I'm a little confused about "single initiator -- single target zoning": what exactly is the target?

As far as I know, a (SCSI) target of a storage system is the volume (LUN), not the controller port presenting the LUN. (That's the reason the Command Console LUN is needed: Command View needs a target to talk to.)
I've talked to an HP service technician, who stated that "single initiator -- single target zoning" means zoning with only two members in each zone, e.g. (HBA1, EVA-ctrlA-FP1).

So you would need 8 zones per server accessing an EVA6400, for example.

In my opinion, I would put all EVA host ports of one fabric plus the server HBA in one zone, resulting in two zones per server accessing an EVA6400 (for redundant SANs):
(HBA1, EVA-ctrlA-FP1, EVA-ctrlB-FP1, EVA-ctrlA-FP3, EVA-ctrlB-FP3), i.e. 5-member zones.
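
For illustration, here is how the two flavors compare for one server with HBA1 in fabric 1 and HBA2 in fabric 2 (a sketch only; the port names follow my example above and assume FP2/FP4 are cabled to the second fabric):

Single initiator -- single target (8 zones):
Zone01: HBA1; EVA-ctrlA-FP1
Zone02: HBA1; EVA-ctrlB-FP1
Zone03: HBA1; EVA-ctrlA-FP3
Zone04: HBA1; EVA-ctrlB-FP3
Zone05: HBA2; EVA-ctrlA-FP2
Zone06: HBA2; EVA-ctrlB-FP2
Zone07: HBA2; EVA-ctrlA-FP4
Zone08: HBA2; EVA-ctrlB-FP4

Single initiator -- multiple targets (2 zones):
ZoneA: HBA1; EVA-ctrlA-FP1; EVA-ctrlB-FP1; EVA-ctrlA-FP3; EVA-ctrlB-FP3
ZoneB: HBA2; EVA-ctrlA-FP2; EVA-ctrlB-FP2; EVA-ctrlA-FP4; EVA-ctrlB-FP4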

What is the HP-recommended zoning flavor, and what would be the disadvantage of the simpler "5-member zoning"?

Thank you for your replies.
Uwe Zessin
Honored Contributor

Re: single initiator -- single target zoning (EVA)

The individual EVA port is the SCSI target and an individual virtual disk is mapped to that target's LUN address space, e.g. LUN address 1.
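
To make that concrete (hypothetical values, just a sketch): the controller port is the SCSI target, and a Vdisk1 might be presented behind it at LUN address 1, a Vdisk2 at LUN address 2, with the Command Console LUN typically sitting at LUN address 0 on that same target.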

At least on the EVA-4400, single-initiator/single-target is a *MUST*. It looks like the controller firmware was not tested for other cases... and the system can *crash* if you don't follow that rule.
Torsten.
Acclaimed Contributor

Re: single initiator -- single target zoning (EVA)

I agree, the one-zone-per-server-per-fabric approach is probably the best solution (your "5-member zoning").

Of course you can create a zone for each HBA-to-storage-port connection, but IMHO this makes no sense and results in far too many zones.

The purpose is to ensure no other OS is in the same zone, so you could even put all HP-UX hosts and all storage ports into a single zone, but it is better to separate the individual hosts.

Hope this helps!
Regards
Torsten.

Jeff_Traigle
Honored Contributor

Re: single initiator -- single target zoning (EVA)

We had multiple-initiator/multiple-target zones set up in our previous SAN. HP told us it was supported, but not recommended. They said each zone should have a single initiator; I think I was told each zone should have only a single target as well.

SNIA's recommendation is for single initiator zoning. According to their Networking Foundations course, it is perfectly acceptable to have multiple targets in the zone with that one initiator.

We debated this internally when setting up a new SAN and opted for the less bulky single-initiator/multiple-target zoning you described.
--
Jeff Traigle

Re: single initiator -- single target zoning (EVA)

@Uwe: I know the support document for the EVA-4400, but it only discusses the management (CV) server zone:

"Regarding management zoning, split the management zones making them single-initiator/single-target. To do this, create two management zones (using FC HBA port based zoning) on each fabric... "

Not the normal servers...
Alzhy
Honored Contributor

Re: single initiator -- single target zoning (EVA)

If you are NOT using Continuous Access, then the EVAs should not "see each other".

We used to have zones like these:

PLUTO_DISK_A: PLUTO_HBA_A; EVA1_A_FPS; EVA2_A_FPS; XP12K_A_CLUS

PLUTO_DISK_B: PLUTO_HBA_B; EVA1_B_FPS; EVA2_B_FPS; XP12K_B_CLUS

We've revamped it to:

PLUTO_EVA1_A: PLUTO_HBA_A; EVA1_A_FPS
PLUTO_EVA2_A: PLUTO_HBA_A; EVA2_A_FPS
PLUTO_XP12K_A: PLUTO_HBA_A; XP12K_A_CLUS

PLUTO_EVA1_B: PLUTO_HBA_B; EVA1_B_FPS
PLUTO_EVA2_B: PLUTO_HBA_B; EVA2_B_FPS
PLUTO_XP12K_B: PLUTO_HBA_B; XP12K_B_CLUS


The CommandView zones for the EVAs, however, remained single-initiator, multi-target:

CMVW_A: CMVW_HBA_A; EVA1_A_FPS; EVA2_A_FPS
CMVW_B: CMVW_HBA_B; EVA1_B_FPS; EVA2_B_FPS



HTH.

Hakuna Matata.
Bulent ILIMAN
Trusted Contributor

Re: single initiator -- single target zoning (EVA)

The point of "single initiator -- single target" is meant to be a single EVA/tape library/any target box and a single host in a single zone.

So it is a good solution to put all of the ports of your target into an alias, put all ports of the hosts that will access that target into another alias, and zone those two aliases together, as sketched below.
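
For example (hypothetical alias and zone names; one fabric shown, and the exact syntax depends on your switch vendor):

Alias EVA1_A_PORTS: EVA1_ctrlA_FP1; EVA1_ctrlB_FP1
Alias HOSTS_A:      SRV1_HBA_A; SRV2_HBA_A
Zone  EVA1_HOSTS_A: HOSTS_A; EVA1_A_PORTS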

The main purpose of zoning is to block broadcasts sent from host ports from being seen by other hosts, and the same story holds for the target side.

So you should neither put initiator ports from two different initiators in the same zone, nor ports from two different targets, say storage and tape.

Re: single initiator -- single target zoning (EVA)

@Alzhy: Your configuration of the CommandView zones for the EVAs is wrong! In this case the two EVAs see each other! You should create four separate zones:

CMVW_A__EVA1: CMVW_HBA_A; EVA1_A_FPS
CMVW_B__EVA1: CMVW_HBA_B; EVA1_B_FPS
CMVW_A__EVA2: CMVW_HBA_A; EVA2_A_FPS
CMVW_B__EVA2: CMVW_HBA_B; EVA2_B_FPS

This way CV can see both EVAs, and the EVAs don't see each other.

This was NOT my question, i.e. putting target ports of different EVAs into one zone (which, except for CA, you should never do).

My question was whether I can put multiple host ports of the same EVA into one zone.

Thank you anyway.




Don Mallory
Trusted Contributor
Solution

Re: single initiator -- single target zoning (EVA)

I think part of the confusion here is fibre channel initiator/target vs. SCSI initiator/target. They are not necessarily the same thing. The FC target is the front-end port of the storage device; the SCSI target is the LUN.

LUN masking or presentation management restricts which LUN can be seen by which individual host (or array, if replication is used, in which case an array is itself an initiator and the FC port must be configured appropriately...). Zoning limits which hosts (or arrays) can see which storage devices.
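
A minimal sketch of the two layers working together (hypothetical names):

Fabric zoning (on the switch):      SRV1_EVA1_A: SRV1_HBA_A; EVA1_ctrlA_FP1
LUN presentation (in Command View): Vdisk "DB_Vol1" -> host SRV1, LUN 1

The zone lets the HBA log in to the controller port (the FC target); the presentation decides which virtual disk answers behind that port (the SCSI LUN).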

EMC and NetApp also strongly recommend single initiator -- single target zoning. This is imperative for tape (VTL or physical) because of the broadcast of bus resets, usually on shared tape drives (multiple initiators) or as a result of an eject sequence.

In earlier revisions, the HP EVA best practices guide suggested numerous zoning methods, all of which will technically work, including:

- All initiators of the same OS with all of the storage of the same type
- All initiators of the same OS with all of the targets of a single device
- All initiators of the same host with all of the storage of the same type
...
right down to single initiator -- single target.

As a result of this lack of direction (and other technical disagreements with the doc), I stopped using this as a reference and haven't looked back unless I needed a particular HP specific detail.

The problem, as Iliman put it, is to block broadcasts from other devices. Given that fibre channel is a broadcast-based network (a la Ethernet on hubs about 15 years ago...), everyone gets to tromp on everyone else. The way to control this is to reduce the scope of each zone.

So, while yes, you can, you generally shouldn't. Strict zoning is administratively more work, but the looser alternative comes at the cost of stability.

In one iteration of an environment I worked with, each host (multiple initiators) was zoned with all of its disk storage (multiple targets). Tape was separated. This generally worked fine, except that there were numerous situations where a single disk failure (a squawky disk) caused noise on the external array bus and ended up taking the other array nearly offline for that series of hosts. Unfortunately, this was also not a redundant fabric, so half the hosts went offline.

Similarly, with the tape devices, if one host rebooted, all of the running backups or tape clone operations on the other shared hosts would receive a bus reset, ejecting the tape and failing the backup or clone, resulting in a lot of operator intervention.
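
A sketch of how separate tape zones avoid this (hypothetical names; one zone per HBA/drive pair):

SRV1_TAPE_DRV1: SRV1_HBA_A; TAPELIB_DRV1
SRV2_TAPE_DRV2: SRV2_HBA_A; TAPELIB_DRV2

With each drive in its own single-initiator/single-target zone, a bus reset caused by one rebooting host never reaches the drive another host is writing to.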

Either way, it will technically work. If you find it works in your environment and reduces administrative overhead with no negative consequences, excellent, please share your experiences. If it doesn't work out so well, please share as well. As with all things in technology, they change with time...

Best regards,
Don