StoreVirtual Storage
02-22-2011 02:01 AM
Flex-10 and VMware with P4000
Hello all,
We are planning an eventual move to P4000 storage and multi-site clustering. I have some questions about how to configure Flex-10 and VMware for P4000 storage.
We currently run:
c7000 with Flex-10 (plus FC modules)
BL460c G6 running ESXi 4.1
EVA4400
Two sites with identical hardware setup
Currently we have an LACP trunk with 3 cables per Flex-10 module for general networking. The connection to the EVA4400 is, of course, via the FC modules.
For iSCSI we are planning dedicated, redundant switches, 2 per site, with 1 GbE ports on both the switches and the P4000 interfaces.
How would you recommend configuring Flex-10 and connecting it to the iSCSI switches?
I've seen setups with 1 connection per Flex-10 module to each switch, but I don't know whether those used 10 GbE switches.
How would you recommend making the connections?
What about the current LACP trunks? My plan is to leave those as they are.
Also, I plan to create 6 or 8 FlexNICs per server and configure those as vSwitch uplinks in VMware, per the best practices guide; a rough sketch of what I have in mind is below.
Any tips, suggestions regarding this?
Thanks in advance!
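For context, here is a minimal sketch of the ESXi 4.1 software-iSCSI setup I have in mind. The vmnic/vmk numbers, portgroup names, IP addresses, and the vmhba number are placeholders, not our actual values:

# Dedicated iSCSI vSwitch with two FlexNIC uplinks (placeholder names)
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -L vmnic3 vSwitch1

# One VMkernel portgroup per uplink, so each vmk can be tied to one NIC
esxcfg-vswitch -A iSCSI1 vSwitch1
esxcfg-vswitch -A iSCSI2 vSwitch1
esxcfg-vmknic -a -i 10.10.10.11 -n 255.255.255.0 iSCSI1
esxcfg-vmknic -a -i 10.10.10.12 -n 255.255.255.0 iSCSI2

# After setting each portgroup's NIC teaming override in the vSphere Client
# (one active uplink each), bind both vmk ports to the software initiator
esxcli swiscsi nic add -n vmk1 -d vmhba33
esxcli swiscsi nic add -n vmk2 -d vmhba33

# Dynamic discovery pointed at the P4000 cluster VIP (placeholder address)
vmkiscsi-tool -D -a 10.10.10.100 vmhba33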
1 REPLY
02-22-2011 09:22 AM
Re: Flex-10 and VMware with P4000
It's actually a design best practice to configure only 1 vSwitch where/when possible.
You can still have the 6 or 8 NICs... and even dumb down the speed on 1 or 2 of them for management. A vSwitch is VERY flexible when it comes to assigning NICs to different things.
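For example, a single vSwitch carrying several traffic types might look roughly like this from the console. The uplink names, portgroup names, and VLAN IDs here are illustrative only, not a recommendation for your exact setup:

# One vSwitch, multiple uplinks
esxcfg-vswitch -a vSwitch0
esxcfg-vswitch -L vmnic0 vSwitch0
esxcfg-vswitch -L vmnic1 vSwitch0

# Separate VLAN-tagged portgroups per traffic type
esxcfg-vswitch -A "Management" vSwitch0
esxcfg-vswitch -v 10 -p "Management" vSwitch0
esxcfg-vswitch -A "vMotion" vSwitch0
esxcfg-vswitch -v 20 -p "vMotion" vSwitch0
esxcfg-vswitch -A "VM Network" vSwitch0
esxcfg-vswitch -v 30 -p "VM Network" vSwitch0

# Per-portgroup active/standby NIC ordering is then set under the
# portgroup's NIC Teaming tab in the vSphere Client.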
I would probably have at least 1 connection to each VC module from each of your redundant switches (READ: 4 ports needed minimum).
You will never achieve anything better than 1 Gb, since you cannot aggregate links across modules. You might be better off having redundant switch 1 connect to VC module 1 with 2 links... and switch 2 to VC module 2 with 2 links... so at least you have a 2 Gb redundant link.
If the LACP trunks are working for you now, then I'd probably let them be. Without knowing exactly how your NICs are configured now, I can't offer much more.
Steven
Steven Clementi
HP Master ASE, Storage, Servers, and Clustering
MCSE (NT 4.0, W2K, W2K3)
VCP (ESX2, Vi3, vSphere4, vSphere5, vSphere 6.x)
RHCE
NPP3 (Nutanix Platform Professional)