Disk Enclosures

MSA2000: Difference between point-2-point and loop topology

Go to solution
Patrick Terlisten
Honored Contributor

MSA2000: Difference between point-2-point and loop topology


Can someone tell me when I should use point-to-point and when loop? I've read a lot about the MSA2000 and its path handling, active/active and active/passive. What is the best practice for a switched environment, and why is it best practice?

I've read the documentation, talked to HP, and played with the box - but got no good answer.

Best regards,
Uwe Zessin
Honored Contributor

Re: MSA2000: Difference between point-2-point and loop topology

'Loop' (the FC-AL protocol) is usually used for direct attach (server to storage), although, technically, that is a point-to-point link.

Point-to-point implies Fibre Channel (fabric) protocol, which (I am sure you know) does not work in a direct-attach (server to storage) environment - no SNS, etc.

Now, the "HP StorageWorks2000 Family Modular Smart Array reference guide" (481499-001) says on page 42:
* Setting FC Host Port Topology

"For MSA2000 Family storage systems, topology means the path that data travels between devices: either through a series of connected devices (loop) or directly from one device to another (point-to-point). In a switch-attach configuration, either topology is supported but loop is preferred[1]. In a direct-attach configuration, only loop is supported."

[1] I have not seen **why** the box prefers loop in a switch attachment and I have no idea, either.

Another puzzling sentence is one paragraph later when talking about controller failovers:
* "If one or more host ports are set to point-to-point topology, controller B presents its volumes on half of its host ports and presents controller A's volumes on the remaining host ports."

A controller has two host ports. Does B present its volumes on port 0 or port 1? Do we have to find out on our own? Is this dependent on the phase of the moon, or what?
What happens to volumes that were presented on both host ports of a single controller? From what I have seen, this is possible. From that sentence I might believe that half of the paths go away after a controller failover!

Sigh. Really, who writes such sentences, and who does the technical proofreading?
Patrick Terlisten
Honored Contributor

Re: MSA2000: Difference between point-2-point and loop topology

Hello Uwe,

thanks for your reply. Yes, I know the difference between loop and p2p. I have seen many storage arrays, and I know how an A/A array "should" work. And then I saw the port state on an FC switch with an MSA2000 attached... Loop?! WTF?! After reading the reference guide I was... hmm... confused. I tripped over the same sentences as you. It's the worst box I have ever worked with.

So, what should I do? I created two vdisks: one owned by controller A, the second by controller B. I have two paths to each volume (always through the owning controller). I will run some tests tomorrow (simulating path failures, controller failover, switch failures) and hope for a miracle... *grmpf*

But it's good to see that I'm not alone. :)

Best regards,
Patrick Terlisten
Honored Contributor

Re: MSA2000: Difference between point-2-point and loop topology


I did some tests today and it was a disaster...

The MSA2212fc is wired as the docs describe:

Switch 01: Controller A, host port 0, and Controller B, host port 1
Switch 02: Controller B, host port 0, and Controller A, host port 1

The host ports are configured as point-to-point. The interconnect is disabled.

I see two paths to each volume: one over HBA1, port 0, and one over HBA1, port 1 (a dual-port HBA is used). The server runs DataCore SANmelody, which handles the path management. The MSA is configured with two vdisks, one owned by controller A and one by controller B. Some volumes are carved out of vdisk 1 (controller A) and presented to the server.

I generated some I/O with IOmeter on the server and shut down controller A via the web management. Nothing happened... the event log filled up with failures, and the disk was no longer available.

Is there any difference between a shutdown and a removal of the controller? Will the controller not fail over if I shut it down?

Best regards,
Mohan Wickramasinghe
Occasional Contributor

Re: MSA2000: Difference between point-2-point and loop topology

Hi Patrick

I saw this thread and thought you could help me out.
I am brand new to MSA, and to SANs for that matter.

I have an MSA2000sa G2 with dual controllers (A and B), with 4 SAS ports on each controller. The controller model is MSA2324sa.

These are directly connected via external mini-SAS cables (1 m) to two ProLiant 585 G6 servers.

The ProLiant servers each have 2 HBA cards (SC08e, 6 Gb). Each card has 2 SAS ports.

So on Server 1, one port from each HBA card is connected to SAS port 1 on controllers A and B.
Server 2 is wired similarly, but to SAS port 2 on controllers A and B.

The boxes are running Linux with the OS-native multipath (device-mapper multipath) installed.

On the MSA, 3 vdisks are configured, with one volume created on each vdisk.
The vdisks are RAID 10, each with six 73 GB 15k small form factor SAS drives.
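As a sanity check on the sizing (my arithmetic, not from the post): RAID 10 mirrors drive pairs and stripes across the mirrors, so only half the raw capacity is usable.

```shell
# Back-of-envelope: usable space of one 6-drive RAID 10 vdisk.
# RAID 10 = stripe over mirrored pairs, so half the raw capacity is usable.
drives=6
drive_gb=73
echo "raw:    $(( drives * drive_gb )) GB"     # -> raw:    438 GB
echo "usable: $(( drives / 2 * drive_gb )) GB" # -> usable: 219 GB
```

That ~219 GB per vdisk is consistent with carving a ~120 GB volume out of one, as described below.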

The multipath daemon detects the 3 volumes (LUNs) as /dev/mapper/mpathX.

Each volume is mapped in multipath as follows:

/dev/sda, /dev/sdd - for /dev/mapper/mpath1
/dev/sdb, /dev/sde - for /dev/mapper/mpath2
/dev/sdc, /dev/sdf - for /dev/mapper/mpath3
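One way to double-check those pairings (a sketch, using the device names from the listing above): the two /dev/sdX nodes behind one mpath map must report the same SCSI WWID. The scsi_id invocation below is the same one used as getuid_callout in the configuration further down.

```shell
# Sketch: verify that sda and sdd really are two paths to the same LUN
# by comparing their WWIDs.
wwid_a=$(/lib/udev/scsi_id -g -u -s /block/sda)
wwid_d=$(/lib/udev/scsi_id -g -u -s /block/sdd)
if [ "$wwid_a" = "$wwid_d" ]; then
    echo "sda and sdd are the same LUN"
fi

# multipath -ll shows the same pairing, plus the state of each path
# (active/faulty) -- worth checking before any performance testing.
multipath -ll
```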

Once this was in place, I used fdisk to create a partition (for ext3) on /dev/sda, /dev/sdb, and /dev/sdc.

Once done, I ran kpartx -a /dev/mapper/mpath1 /dev/mapper/mpath2 /dev/mapper/mpath3

Then I ran multipath -F and multipath -v2, and now I can see /dev/mapper/mpath1-part1, mpath2-part1, and mpath3-part1.

I then formatted these devices as follows:
mke2fs -j /dev/mapper/mpath1-part1

and then mounted them as follows:
mount /dev/mapper/mpath1-part1 /mpath1

This partition/volume/LUN is ~120 GB.

and I ran the following dd test:

time sh -c "dd if=/dev/zero bs=8k count=15728640 of=/mpath1/ddfile && sync"

which gives the following output:

time sh -c "dd if=/dev/zero bs=8k count=15728640 of=ddfile && sync "

15728640+0 records in
15728640+0 records out
128849018880 bytes (129 GB) copied, 939.366 s, 137 MB/s

real 16m46.889s
user 0m3.010s
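For what it's worth, timing `dd && sync` also counts time the data spends in the page cache before the final flush, and 8 KB blocks are small for a streaming write. A minimal sketch of a more direct measurement (TARGET is a placeholder path; on this system it would be /mpath1/ddfile):

```shell
# Sketch: measure a streaming write with the flush included in dd itself.
# conv=fsync makes dd fsync() the file before exiting, so the reported
# time covers the data reaching stable storage, not just the cache.
# TARGET is a placeholder; substitute the file on the multipath volume.
TARGET=/tmp/ddtest.bin

time dd if=/dev/zero of="$TARGET" bs=1M count=64 conv=fsync

rm -f "$TARGET"
```

A larger block size (bs=1M instead of bs=8k) usually raises streaming throughput as well, because fewer, bigger requests reach the array.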

and the multipath configuration is as follows ...

defaults {
    udev_dir                /dev
    polling_interval        10
    path_grouping_policy    failover
    #path_grouping_policy   multibus
    # comment out round-robin 0 when multibus is used
    selector                "round-robin 0"
    getuid_callout          "/lib/udev/scsi_id -g -u -s /block/%n"
    prio_callout            "/bin/true"
    user_friendly_names     yes
    path_checker            tur
    rr_weight               uniform
    #rr_weight              priorities
    failback                immediate
    rr_min_io               100
    #no_path_retry          fail
    no_path_retry           12
}

devices {
    device {
        vendor  "HP"
        product "MSA2324sa"
    }
}

devnode_blacklist {
    devnode "^cciss!c[0-9]d[0-9]*"
    devnode "^vg*"
}

The issue is the speed we are getting; we were expecting much better I/O performance from this system. We see very heavy I/O wait during this test and while running applications - the I/O wait percentage gets close to 50%.
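One thing that may be worth trying (an assumption on my part, not something verified against an MSA2324sa): with path_grouping_policy failover, all I/O for a LUN goes down a single path while the second sits idle. If the array accepts I/O equally on both paths, grouping them into one multibus group lets multipath round-robin across both SAS links:

```
# multipath.conf fragment (hypothetical tuning, not verified on this array):
# put both paths into one load-balanced group instead of a failover pair.
path_grouping_policy    multibus
selector                "round-robin 0"
rr_min_io               100    # switch paths every 100 I/Os
```

Your config already carries multibus as a commented-out alternative; whether it helps depends on whether the MSA serves a given LUN equally well through both controllers' ports.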

Any advice is greatly appreciated.

Thank you in advance.