NIC bond
06-28-2012 07:43 AM
I have 2 new P4500 G2 14.4 nodes with 1 GbE ports that I will be using for backup purposes. Can I bond 4 ports together and connect them to one switch to achieve 4 Gb/s? Or can I only get 2 Gb/s maximum, since a port bond is per node?
Tags: NIC
06-28-2012 10:57 AM
Re: NIC bond
Bonding is per node, but as long as your backup computer is using MPIO with the HP DSM, it will automatically use both nodes, so you can get the full 4 Gb/s of bandwidth... this assumes your backup computer also has at least 4 Gb/s of bandwidth to the switch.
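A back-of-the-envelope sketch of the bandwidth math above (assuming, as in the question, two nodes each with two 1 GbE ports bonded, and MPIO spreading sessions across both nodes):

```python
# Rough bandwidth sketch for a 2-node P4500 G2 cluster.
# Assumptions (illustrative, not from HP documentation): a bond
# aggregates ports within one node only, while MPIO with the HP DSM
# spreads I/O across both nodes.

PORT_GBPS = 1        # each NIC port is 1 Gb/s
PORTS_PER_NODE = 2   # two ports bonded per node
NODES = 2

per_node_bond = PORT_GBPS * PORTS_PER_NODE   # bond is per node: 2 Gb/s
cluster_aggregate = per_node_bond * NODES    # MPIO across nodes: 4 Gb/s

print(per_node_bond)      # 2
print(cluster_aggregate)  # 4
```

This is why bonding alone tops out at 2 Gb/s to one node, but MPIO across the cluster can still reach 4 Gb/s in aggregate.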
06-28-2012 05:08 PM
Re: NIC bond
The backup server is a VM running on a blade server with 10 Gb connectivity. Will it see all 4 Gb/s, though?
06-29-2012 10:55 AM
Need more info... is the backup VM running a Windows-based OS? Does it access the LUNs directly, or does the data go through the host that is being backed up?
I know that for M$'s DPM server, the Hyper-V hosts with the VMs being backed up use the application-aware snapshot feature to take a snapshot on the VSA system; the Hyper-V host then reads the data over the iSCSI network and sends it to the DPM server over a different network (typically after compressing it).
I'm an M$ shop, so I don't really know the ESX details, but I believe there are issues with bonding there, and the best you might be able to do in that situation is 2 Gb/s.
As a side note, I'm not sure the DSM solution, which technically opens up 4 Gb/s to the host, will actually give you 4 Gb/s, since all data has to be replicated to each node. That means if you are saturating 1 Gb/s host->target, the VSAs need 1 Gb/s of VSA->VSA bandwidth to mirror that data. It might not be that bad; I don't have the network setup to test the idea. I think a 2-node cluster performs about the same with the HP DSM on M$ as with MPIO on ESX, but once you get to 3+ nodes the HP DSM really starts to shine. There is a lot of good reading on the subject in the DSM documents.
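The replication caveat above can be sketched the same way. This is a hypothetical model (assuming every host write is mirrored once to a partner node over the same NICs that carry host traffic, as the post describes):

```python
# Hypothetical sketch of mirrored-write overhead in a 2-node cluster.
# Assumption (from the reasoning above, not measured): each Gb/s of
# host->target write traffic requires a matching Gb/s of VSA->VSA
# replication over the same per-node NICs.

per_node_nic_gbps = 2.0   # 2x1GbE bond per node
nodes = 2

# At saturation, each node's NICs carry host writes plus a copy of the
# partner node's writes, so only half the bandwidth serves the host.
max_host_write_gbps = nodes * per_node_nic_gbps / 2

print(max_host_write_gbps)  # 2.0
```

Under these assumptions, sustained writes top out around half the nominal 4 Gb/s, which is the concern raised above; reads are not mirrored, so they would not be limited this way.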