
Proliant Server and Storage forums

 
DaMiBu
Advisor

Proliant Server and Storage forums

Anyone know of any higher-end HP ProLiant communities or forums? I am looking for a place to interact and meet with other professionals who work in large-scale ProLiant shops to discuss architecture and design ideas.

Thank you

DB
6 REPLIES
Steven Clementi
Honored Contributor

Re: Proliant Server and Storage forums

Darren:


I think a lot of the participants here in ITRC are just that: professionals who work in large ProLiant (and HP hardware in general) shops.

I am a consultant, and for the past five years I have designed and implemented many different Storage Area Networks for large and small companies, including hardware from the ProLiant and StorageWorks lines as well as IBM's TotalStorage and xSeries lines.

As for knowing of any other places... this is the only one I visit regularly.


Steven
Steven Clementi
HP Master ASE, Storage, Servers, and Clustering
MCSE (NT 4.0, W2K, W2K3)
VCP (ESX2, Vi3, vSphere4, vSphere5, vSphere 6.x)
RHCE
NPP3 (Nutanix Platform Professional)
DaMiBu
Advisor

Re: Proliant Server and Storage forums

Thank you Steven

I will give this forum another try!

Andy_180
Trusted Contributor

Re: Proliant Server and Storage forums

Hey Darren - We have about 400 ProLiants stretched across the USA, coast to coast, everything from a couple of 1850Rs to DL760 G2s clustered with W2K3-E, and everything in between.
EVA 5000, MSA20s, MSA30s, MSA1500s, MSA1000s; 22U racks, 48U racks. My company is married to HP. Did you have a specific question?
thanks!
--Andy
DaMiBu
Advisor

Re: Proliant Server and Storage forums

I have lots of questions! I have about 250 Proliants (mostly DL580, DL380, BL20p, BL30/35p)

...but here is just one question re: teaming of the NICs.

We connect all systems to backbone Cisco CAT4000 switches, but these switches only have a 1Gb uplink between them. To ensure server-to-server traffic stays on the same switch, we manually set the team to NFT with Preference Order, with NIC1 as the primary and all NIC1s plugged into the same switch.

This works, HOWEVER:

1. It is a manual step that we need to apply, so it is missed a lot of the time (see the sketch after this list).
2. Sometimes during either driver or NIC management upgrades the team settings change (very annoying).
3. I have a concern that if the primary switch connection has intermittent issues, the NICs may flip-flop between the two.
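
Just to illustrate the kind of check I mean for point 1 - a minimal sketch, assuming you can dump each server's team settings into a CSV somehow. The file name and the server/primary_nic/primary_switch column names are made up for illustration; they are not anything the HP teaming software actually exports.

# audit_team_primaries.py - minimal sketch against a hypothetical CSV export
# with columns: server, primary_nic, primary_switch
import csv

EXPECTED_SWITCH = "SWITCH-A"  # hypothetical name for the switch all NIC1 primaries should be on

def audit(path):
    # Flag every server whose team primary is not on the expected switch.
    flagged = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row["primary_switch"] != EXPECTED_SWITCH:
                flagged.append(row)
    for row in flagged:
        print("%s: primary %s is on %s, expected %s"
              % (row["server"], row["primary_nic"], row["primary_switch"], EXPECTED_SWITCH))
    return flagged

if __name__ == "__main__":
    audit("team_export.csv")  # hypothetical export file

Even something this crude, run against the fleet once a week, would catch the servers where the manual step got missed.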


What I want to do is use the Automatic/Automatic setting but also try to keep traffic off the 1Gb uplink.

Not necessarily looking for a definitive answer - I'd just like to talk about the issue/setup with fellow professionals to see what others are doing.

What are your opinions on this?

Andy_180
Trusted Contributor

Re: Proliant Server and Storage forums

Hey Darren - the NICs you would have (assuming they are the NC series) support load balancing with fault tolerance, although the early ones were only 10/100. I've had trouble with GB NICs in the DL580 G1s.

We have most of our field offices and both of our data centers on a redundant switch configuration much like yours: one NIC goes into the top switch and NIC 2 goes into the bottom switch in each rack. I set our NICs to Automatic/Automatic at the advice of our networks group and haven't had any issues. I think we had a switch die about a year ago and we got a bunch of "NIC redundancy errors" from SIM, but no servers went down.

If the NICs and switches are redundant and configured correctly there should be no flip-flopping. The only time I have seen the flip-flopping is on DL360 G2s when the NICs would flake out and die on us. We have replaced motherboards in at least 15 DL360 G2s over the last 18 months.
Thanks!
--Andy
DaMiBu
Advisor

Re: Proliant Server and Storage forums

I am not worried about the NICs or how reliable everything is - all of that is OK!

However... if I leave my servers at Auto/Auto then the NICs will talk via both backbone switches and the 1Gb link between the two gets saturated. Internally each switch has a 36Gbit backplane, so it is advantageous to keep the systems talking on the same switch unless a failure occurs and they fail over to the other switch.
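
To put a rough number on why Auto/Auto hurts here - a back-of-envelope sketch only, pure arithmetic with no HP or Cisco tooling involved. It assumes uniform any-to-any traffic between servers, treats each server as if one active NIC carries its traffic, and the even 125/125 split is just an illustrative assumption:

# Rough estimate of how much server-to-server traffic crosses the inter-switch uplink.
def cross_uplink_fraction(servers_on_a, servers_on_b):
    # Fraction of server pairs whose active NICs sit on different switches.
    total = servers_on_a + servers_on_b
    total_pairs = total * (total - 1) / 2
    cross_pairs = servers_on_a * servers_on_b
    return cross_pairs / total_pairs

# Auto/Auto with ~250 servers and active NICs spread roughly evenly:
print(cross_uplink_fraction(125, 125))  # ~0.50 -> about half the flows hit the 1Gb link
# All primaries pinned to switch A (NFT with Preference Order):
print(cross_uplink_fraction(250, 0))    # 0.0 -> nothing crosses until a failover

So with the primaries spread across both switches, roughly half the server-to-server flows land on the 1Gb uplink, which is exactly the saturation I'm seeing.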


Although... maybe HP has some setting to make this happen automatically, i.e. keep traffic local at all times; maybe this is an option among the advanced options in the NIC teaming software that you need to purchase.

Do you understand what I am asking?