P4500 bandwidth utilization
05-19-2011 02:08 AM
I'm evaluating a P4500 (2-node) cluster in Network RAID 10. I assume (but haven't found any evidence) that all 4 NICs (1 Gbps each) will be used, i.e. that my VMware farm will talk to both storage nodes at the same time, so I would get a total bandwidth of 4 Gbps. My doubt is that the VMware farm might only ever talk to one P4500 node while that node synchronizes with the other one. That would mean my VMware farm talks to the P4000 cluster at a maximum bandwidth of 2 Gbps.
Some evidence that my initial guess is correct?
Thanks!
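The arithmetic behind the question can be sketched as a tiny model. The NIC counts come from the 2-node setup described above; the assumption that Network RAID 10 mirrors every write between the two nodes is mine, so treat the write figure as a back-of-the-envelope bound, not a measured number:

```python
# Back-of-the-envelope bandwidth model for a 2-node P4500 cluster.
# Assumption (mine): Network RAID 10 mirrors every write to both nodes,
# so replication traffic shares the same 1 Gbps links as host I/O.

NODES = 2
NICS_PER_NODE = 2
NIC_GBPS = 1

def aggregate_read_gbps():
    # Reads can be served from either mirror copy, so with multipathing
    # the initiators can in principle drive every link in the cluster.
    return NODES * NICS_PER_NODE * NIC_GBPS

def effective_write_gbps():
    # Each write must also cross to the partner node, roughly halving
    # the front-end bandwidth available to the hosts.
    return aggregate_read_gbps() / 2

print(aggregate_read_gbps())   # -> 4
print(effective_write_gbps())  # -> 2.0
```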
05-19-2011 04:06 AM
Solution
I don't think you'll ever see 100% utilization of all links as it stands. But to even get VMware to talk to all nodes at the same time, you need to ensure that VMware is using multipathing. It takes a little effort to configure correctly.
This is a good walkthrough of the setup:
http://virtualy-anything.blogspot.com/2009/12/how-to-configure-vsphere-mpio-for-iscsi.html
If you set up the round-robin policy, VMware uses all paths, sending a set amount of data down each one and cycling through them all. It never really uses them all at once, but it does make use of each of them. HP may someday release a DSM for VMware that changes that behavior, but as of now there is nothing.
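From memory of the vSphere 4.x-era esxcli syntax the walkthrough covers, the setup boils down to commands roughly like the ones below. Verify the exact flags against your build; `vmhba33`, the `vmk` numbers, and `naa.xxxx` are placeholders for your own adapter, VMkernel ports, and volume:

```shell
# Bind each iSCSI VMkernel port to the software iSCSI adapter
# (vmhba33, vmk1, vmk2 are examples; check yours with esxcfg-vmknic -l).
esxcli swiscsi nic add -n vmk1 -d vmhba33
esxcli swiscsi nic add -n vmk2 -d vmhba33

# Make round robin the default path policy for active/active arrays...
esxcli nmp satp setdefaultpsp --satp VMW_SATP_DEFAULT_AA --psp VMW_PSP_RR

# ...and optionally tune a device to switch paths every I/O instead of
# the default 1000 IOPS, so traffic rotates across links more evenly.
esxcli nmp roundrobin setconfig --device naa.xxxx --type iops --iops 1
```

A rescan of the software iSCSI adapter is needed afterwards for the extra paths to show up.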
05-20-2011 01:45 AM
Re: P4500 bandwidth utilization
I have implemented this procedure several times already, and I can confirm that it really works well: load gets spread across all available paths.
However, I am now struggling with the fact that I have moved to distributed switches inside ESX.
What, then, is the ideal configuration of the NICs in the ESX host? I suppose the two separate VMkernel ports are no longer necessary if I select load balancing based on physical NIC load (new in ESX 4.1)?
In that case, one VMkernel port should be enough, and ESXi will spread the load across the two NICs anyway?
I can't find any documentation on this one...
--------------------------------------------------------------------------------
If my post was useful, click on my KUDOS! "White Star"!
The opinions expressed above are the personal opinions of the authors, not of Hewlett Packard Enterprise. By using this site, you accept the Terms of Use and Rules of Participation.
© Copyright 2024 Hewlett Packard Enterprise Development LP