BladeSystem - General

is Virtual Connect stacking required?

 
Esteemed Contributor


<SHORT>

Is there any requirement other than possible
data flow that forces the need for stacking?
(e.g., required for firmware or config updates).

For example, say we have
VC1,2 strictly for the production/client network,
with uplinks going to the production backbone,
while
VC3,4 use their own uplinks for isolated iSCSI traffic.

In this scenario, there will be no data packet flow
from VC3,4 out through VC1,2.

So why would vertical stacking be necessary?


Further,
Is horizontal stacking used for anything other than
possible standby-to-active data flow?

In an active/active setup,
with separate uplink sets for "left" and "right" traffic flow,
it seems that horizontal stacking would not be necessary.

In a VC-Flex10, this would free up X7 and X8,
allowing the use of all 8 uplinks for data packets.
((VC 1/10G Enet modules are always
  horizontally stacked via internal port X0
))

</SHORT>




<LONGER>

Current setup

VC1 1/10G Enet ......... VC2 1/10G Enet
  used for the production network into the
  VMware ESX hosts.

VC3 1/10G Enet ......... VC4 1/10G Enet
  used for iSCSI for the VMware ESX hosts

VC1 - VC3 - vertically stacked via CX4
VC2 - VC4 - vertically stacked via CX4


VC1 & VC2 use their other CX4 uplink
to connect to the Cisco backbone.

******
We are replacing the VC1&VC2 with new
VC-Flex10 modules.
******

The Flex10 VCs only have ONE CX4 connection,
which would be needed for vertical stacking,
if stacking is required.
I was hoping NOT to stack vertically, thereby
freeing up the CX4 for an additional 10G uplink.
((The backbone switches have an SFP slot open, to which
   we will also be uplinking via SFP cables.
))

</LONGER>


tks
bv

"The lyf so short, the craft so long to lerne." - Chaucer
Respected Contributor

Re: is Virtual Connect stacking required?

I see no reason for vertical stacking in your configuration.  

 

I would leave X7 & X8 connected - or at least X8.  The horizontal link will keep traffic internal to the chassis for any server-to-server traffic - rather than going up to the switch and back.  By leaving both connected you gain a 20Gb pipe for side-to-side traffic.
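That tradeoff can be sketched with a toy model (the blade/switch names are illustrative, not output from any HP tool; the 10Gb-per-link figure matches the Flex-10 X7/X8 ports discussed above):

```python
# Toy model of where server-to-server traffic flows in one enclosure.
# "blade-A", "upstream-switch", etc. are hypothetical names.

def server_to_server_path(horizontal_stack_up: bool) -> list[str]:
    """Return the hops a frame takes between blades homed on VC1 and VC2."""
    if horizontal_stack_up:
        # X7/X8 stacking links keep the traffic inside the chassis.
        return ["blade-A", "VC1", "VC2", "blade-B"]
    # Without the horizontal stack, traffic must hairpin
    # through the upstream switch and back.
    return ["blade-A", "VC1", "upstream-switch", "VC2", "blade-B"]

def stack_bandwidth_gbps(links_connected: int, gbps_per_link: int = 10) -> int:
    """Aggregate side-to-side bandwidth across the stacking links."""
    return links_connected * gbps_per_link
```

With both X7 and X8 cabled, `stack_bandwidth_gbps(2)` gives the 20Gb side-to-side pipe; with only X8, half that.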

Esteemed Contributor

Re: is Virtual Connect stacking required?

Thanks, Psyco
(that is your nickname, right ?>)

*********
1)
*********
>I see no reason for vertical stacking in your configuration.

I don't either, but my question was kinda,
"Is that absolutely true?"

I.e., is the vertical stack absolutely only for
vertical module-to-module vNet data traffic,
where uplinks from VC1,2 are being shared by
VC3,4 to get to the external network.
  ((E.g., a blade might be dual-homed, with, say,
        NICs 1,2 teamed in VLANx
    & NICs 3,4 teamed in VLANy.
  ))

I hear that you are saying "yes",
but do you know for certain ;>)


*********
2)
*********
>The horizontal link will keep traffic internal ...


Ahhh, yes!!
I was imagining traffic always within one VC,
  from BladeX/NIC1
  to   BladeY/NIC1.

!!!!
And
!!!!
I just realized that I have misconfigured
a vMotion network.

I configured two separate vNets
     vMotion-left
&  vMotion-right
basically just to be consistent with all
the other vNets defined in left/right pairs
in an Active/Active design.


I was forgetting that traffic might need to go
  from BladeX/NIC1
  to   BladeY/NIC2.
E.g., in a vMotion from BladeX to BladeY,
if
  BladeX wants to send vMotion traffic out NIC1
but
  BladeY/NIC1 is down
(unlikely, but I suppose possible)
then the transfer cannot happen.

I was wondering why HP showed just ONE vMotion
vNet and put both NICs in it.


So, horizontal link will stay.
((And a redesign of the vMotion vNet!
))
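The vMotion mistake above can be made concrete with a minimal sketch that treats a vNet as just a set of (blade, NIC) members; the vNet and NIC names follow this thread, but the model itself is illustrative, not VC's actual implementation:

```python
# Minimal sketch: two ports can exchange traffic only if some vNet
# contains both of them.  Names (vMotion-left, NIC1, ...) are
# illustrative examples, not anything HP-defined.

def can_reach(vnets: dict[str, set[tuple[str, str]]],
              src: tuple[str, str], dst: tuple[str, str]) -> bool:
    """True if src and dst ports share membership in at least one vNet."""
    return any(src in members and dst in members
               for members in vnets.values())

# Misconfigured: separate left/right vMotion vNets.
split = {
    "vMotion-left":  {("BladeX", "NIC1"), ("BladeY", "NIC1")},
    "vMotion-right": {("BladeX", "NIC2"), ("BladeY", "NIC2")},
}

# HP's example: ONE vMotion vNet holding both NICs of every blade.
single = {
    "vMotion": {("BladeX", "NIC1"), ("BladeX", "NIC2"),
                ("BladeY", "NIC1"), ("BladeY", "NIC2")},
}

# If BladeY/NIC1 is down, BladeX/NIC1 -> BladeY/NIC2 only works in the
# single-vNet design (which relies on the horizontal stacking link).
assert not can_reach(split, ("BladeX", "NIC1"), ("BladeY", "NIC2"))
assert can_reach(single, ("BladeX", "NIC1"), ("BladeY", "NIC2"))
```

Under this model, the split left/right design leaves no cross-NIC path, which is exactly why the single vMotion vNet (and the horizontal link carrying it) matters.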




tks

"The lyf so short, the craft so long to lerne." - Chaucer
Respected Contributor

Re: is Virtual Connect stacking required?

Just don't call me Francis :)

 

For that vertical link, with your config I don't see a need to have it.   If you were sharing uplinks between those interconnects or if you want the two networks to talk to each other then yes.  But since those are two separate networks with their own uplinks it isn't needed.  

 

Actually, let me amend that: the only thing that comes to mind is, if that iSCSI network is entirely isolated, the link could possibly be used to tunnel through for management or monitoring.

Honored Contributor

Re: is Virtual Connect stacking required?

One thing which you may be overlooking.

 

VC Manager only runs on Module 1 or Module 2 (not 3 or 4).

 

Without stacking links, VCM knows nothing about 3 or 4, or anything connected to, or passing through 3 or 4.

 

You will not be able to configure the ports in modules 3 and 4 through VCM, and I also suspect that you will have problems with your profiles if VCM cannot talk to Modules 3 & 4.

 

The only way for VCM to talk to modules other than 1 and 2 is through stacking links.

 

If the stacking links are absent...

 

????
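That reachability argument can be sketched as a toy connectivity check (a hypothetical graph model, not VCM's actual behavior): VCM can only manage modules reachable from its own bay over stacking links.

```python
# Toy model: stacking links form a graph; VCM (running on one module)
# can configure only the modules in its connected component.
from collections import deque

def manageable(modules, stack_links, vcm_bay):
    """Return the set of bays reachable from vcm_bay via stacking links."""
    adj = {m: set() for m in modules}
    for a, b in stack_links:
        adj[a].add(b)
        adj[b].add(a)
    seen, queue = {vcm_bay}, deque([vcm_bay])
    while queue:
        for nxt in adj[queue.popleft()] - seen:
            seen.add(nxt)
            queue.append(nxt)
    return seen

# Vertical CX4 links removed; only horizontal 1<->2 and 3<->4 links left:
# VCM in bay 1 cannot see bays 3 and 4.
assert manageable([1, 2, 3, 4], [(1, 2), (3, 4)], 1) == {1, 2}

# A single extra stacking link from bay 1 to bay 3 restores full reach.
assert manageable([1, 2, 3, 4], [(1, 2), (3, 4), (1, 3)], 1) == {1, 2, 3, 4}
```

In this model even one additional stacking cable restores management reach, which fits the Setup Guide language quoted later in the thread about any combination of 1-Gb and 10-Gb cables being acceptable.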

 

Dave.

Respected Contributor

Re: is Virtual Connect stacking required?

"With VCM 3.10 and higher, the primary modules can be in bays other than 1 and 2."

 

With that in mind, unless those 4 interconnects are in the same domain, you should be able to break that connection as long as you are current on your firmware.

 

 

Honored Contributor

Re: is Virtual Connect stacking required?

You may be right about VCM being able to run on modules other than 1 or 2; however, I still contend that without vertical stacking links, VCM will not be able to communicate with or configure ports on modules which are not vertically connected (stacked).

 

Could be wrong (which is why we invested in a test environment).

 

I would be interested in the results of working with unstacked modules.

 

Dave.

Honored Contributor

Re: is Virtual Connect stacking required?

Sorry, just read your second sentence.  This implies that you need to run two separate VC domains within the same enclosure, i.e., two instances of VCM.  (I don't see any immediate objection to this, although I am a little concerned about server profiles spanning two different domains.)

 

Dave.

Esteemed Contributor

Re: is Virtual Connect stacking required?

Wow,
I have to start typing faster.

I was in the middle of a long-winded reply addressing what you guys just discussed in the last hour!

Actually, I'm MAJORLY concerned about the server profile point in the two-domain scenario.

But, hey Dave, *you're* the one with the test chassis !>).

VC 3.15 Setup and Install Guide says
  "All Virtual Connect Ethernet modules within the
   Virtual Connect domain must be interconnected.
  "
But next sentence says:
  "Any combination of 1-Gb and 10-Gb cables can be
   used to interconnect the modules.
  "
So, how much management traffic could there really be?

In the particular config I discussed, where I wanted to free up the CX4 on Flex-10 VC1,2, I guess I could just use a 1Gb link to VC3,4 ....
assuming that stacking *is* required.


Let us know what you find out with your test system, Dave ;>)


tks

"The lyf so short, the craft so long to lerne." - Chaucer
Honored Contributor

Re: is Virtual Connect stacking required?

To be honest Bob, unless you can wait a couple of months, my workload doesn't give me time to do any of the testing you are hoping for.

 

I have to say, though, the more I think about it, the more objections arise.  For example, how would two VC domains interact with your OA?  Specifically, when you first insert a VC module into the enclosure, the OA detects it and creates a link on the left-hand navigation bar.  If you create two VC domains, the OA would have to provide two links, or else how would it know which VC domain you want to connect to?

 

Having said that, it is still the profile issue that stops me.  I suppose it is feasible to split your servers so that they only use either the onboard NICs via IC Bays 1 and 2, or only the Mezzanine card in Mezz 1 to communicate with Bays 3 & 4.  You would have to disable the onboard NICs on those servers to stop them trying to link up to Bays 1 & 2.

 

The servers can only have one profile, and the profile could only exist in one VC domain.  Each domain would have to have its own set of WWNs and MACs.  (I don't think you mentioned whether your storage is internal or external, FC or iSCSI), but you would have the same issues with FC VC modules (if you are using them).

 

Bottom line is: I don't really know if this is doable; however, it is sufficiently scary to make me stay away.  I guess there would have to be an over-powering reason to put yourself out there.  You need to do a cost-benefit analysis, just to be sure that what you gain on the "swings" you don't lose on the "roundabouts".

 

Dave.