Community Home > Servers and Operating Systems > HPE BladeSystem > BladeSystem - General
04-06-2011 07:37 AM
Virtual Connect configuration with a single c7000 for both vSphere and a non-virtual environment
We have a single c7000 chassis with 5 BL460c G6 blades:
- 2x Virtual Connect Flex-10 Ethernet modules
- 2x MDS9124 SAN switches

How should we configure Virtual Connect, keeping future scalability in mind? Is the configuration below correct for both the ESXi 4.1 and non-ESX environments?

There are 2 x 10 Gb cards in each blade; we are using 3 blades as ESX servers and 2 as non-ESX physical blades.
Each 10 Gb port on the server presents four logical ports (FlexNICs), and each server has two 10 Gb ports, so the two 10 Gb ports can be divided into eight logical ports.
There are four VLANs for the ESX servers:
- Service console VLAN
- Production VLAN
- vMotion VLAN
- Backup VLAN
Proposed FlexNIC mapping:
- LOM1a and LOM2a for the service console only
- LOM1b and LOM2b for production
- LOM1c and LOM2c for vMotion
- LOM1d and LOM2d for backup

Suggested bandwidth per FlexNIC:
- Production: minimum 4 Gb
- Service console: 200 Mb
- vMotion: 1 Gb
- Backup: 2 Gb
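As a sanity check, the per-FlexNIC speeds above must fit within the 10 Gb available on each physical LOM port. A minimal sketch in plain Python, using the figures from this post (the variable names are illustrative, not part of any VC tooling):

```python
# Bandwidth planned per FlexNIC on each 10 Gb LOM port, in Gb/s.
# Flex-10 carves one physical 10 Gb port into up to four FlexNICs,
# and the sum of their configured speeds cannot exceed the port.
PORT_CAPACITY_GB = 10.0

flexnic_allocation = {
    "service_console": 0.2,  # 200 Mb
    "production": 4.0,       # minimum 4 Gb
    "vmotion": 1.0,
    "backup": 2.0,
}

total = sum(flexnic_allocation.values())
headroom = PORT_CAPACITY_GB - total

print(f"allocated: {total:.1f} Gb, headroom: {headroom:.1f} Gb")
assert total <= PORT_CAPACITY_GB, "FlexNIC speeds exceed the 10 Gb port"
```

With these values, 7.2 Gb of the 10 Gb port is committed, leaving 2.8 Gb of headroom for future growth (e.g. raising the production minimum later).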
From the core switch, connect ports from each switch to the uplink ports of the VC modules in Bay 1 and Bay 2. LACP needs to be configured on the switch-side ports. For an Active-Active VC configuration with mapped VLANs, we need to configure the LOMs as follows:
- Select "Map VLAN Tags" under Ethernet settings.
- Create a shared uplink set Vlan_trunk1 for the VC module in Bay 1.
- Add uplink ports X1 and X2 from the VC module in Bay 1.
- Add the Production-1 network; specify VLAN ID 2.
- Add the Vmotion-1 network; specify VLAN ID 3.
- Create a shared uplink set Vlan_trunk2 for the VC module in Bay 2.
- Add uplink ports X1 and X2 from the VC module in Bay 2.
- Add the Production-2 network; specify VLAN ID 2.
- Add the Vmotion-2 network; specify VLAN ID 3.
- Create a server profile for Bay 1.
- Add 6 more network connections, as only two appear by default.
- Apply the profile to Bay 1.
- Edit the profile; the LOM descriptions will now appear next to the connected ports.
- Assign LOM1a to the Production-1 network and LOM2a to Production-2.
- Assign LOM1b to the Vmotion-1 network and LOM2b to Vmotion-2.
- Add the Data Backup network; specify VLAN ID 4.
- Add the Management network; specify VLAN ID 5.
- Assign LOM1c and LOM2c for backup.
- Assign LOM1d and LOM2d for management.
- We can configure the non-ESX servers with the same uplinks, or use separate uplinks for them.
- If we also need to configure a Fault Tolerance VLAN and a Deployment VLAN, what would be the best practice for the Virtual Connect configuration?
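The shared uplink set steps above can also be scripted through the Virtual Connect Manager CLI instead of the GUI. The sketch below is illustrative only: the `add uplinkset` / `add uplinkport` / `add network` commands exist in the VC CLI, but exact parameter names and the enclosure:bay:port notation vary by firmware release, so verify against the VC CLI guide for your version ("Map VLAN Tags" is assumed to already be enabled under Ethernet settings):

```
# Shared uplink set for the VC module in bay 1
add uplinkset Vlan_trunk1
add uplinkport enc0:1:X1 UplinkSet=Vlan_trunk1
add uplinkport enc0:1:X2 UplinkSet=Vlan_trunk1
add network Production-1 UplinkSet=Vlan_trunk1 VLanID=2
add network Vmotion-1 UplinkSet=Vlan_trunk1 VLanID=3

# Shared uplink set for the VC module in bay 2
add uplinkset Vlan_trunk2
add uplinkport enc0:2:X1 UplinkSet=Vlan_trunk2
add uplinkport enc0:2:X2 UplinkSet=Vlan_trunk2
add network Production-2 UplinkSet=Vlan_trunk2 VLanID=2
add network Vmotion-2 UplinkSet=Vlan_trunk2 VLanID=3
```

Scripting the uplink sets this way also makes it easier to replay the configuration identically when more enclosures are added later.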