NIC bond
06-28-2012 07:43 AM
I have 2 new P4500 G2 nodes (14.4) with 1 Gb ports that I will be using for backup purposes. Can I bond 4 ports together and connect them to one switch to achieve 4 Gb/s? Or can I only get 2 Gb/s maximum, since port bonding is per node?
Solved! Go to Solution.
- Tags:
- NIC
06-28-2012 10:57 AM
Re: NIC bond
Bonding is per node, but as long as your backup computer is using MPIO and the HP DSM, it will automatically use both nodes, so you can get the full 4 Gb of bandwidth. This assumes your backup computer also has at least 4 Gb of bandwidth to the switch.
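The arithmetic behind that answer can be sketched as a quick back-of-the-envelope calculation. The figures below (2 nodes, 2 bonded 1 GbE ports each) are assumptions matching the setup described in the question, not measured values:

```python
# Per-node bonding vs. MPIO aggregate bandwidth (illustrative figures).
NODES = 2             # two P4500 G2 nodes in the cluster (assumed)
PORTS_PER_NODE = 2    # 1 GbE ports bonded on each node (assumed)
PORT_SPEED_GBPS = 1   # each port is 1 Gb/s

# A bond only aggregates the ports on a single node.
per_node_bond_gbps = PORTS_PER_NODE * PORT_SPEED_GBPS

# MPIO with the HP DSM can spread I/O across both nodes' bonds.
mpio_aggregate_gbps = NODES * per_node_bond_gbps

print(per_node_bond_gbps)   # 2: ceiling for any single bond
print(mpio_aggregate_gbps)  # 4: only reachable via MPIO across nodes
```

So the 4 Gb figure is not one big bond; it is two 2 Gb bonds used in parallel by the multipath initiator.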
06-28-2012 05:08 PM
Re: NIC bond
The backup server is a VM running on a blade server and utilizing 10 Gb traffic. Will it see all 4 Gb, though?
06-29-2012 10:55 AM
Solution
Need more info... Is the backup VM a Windows-based OS? Does it access the LUNs directly, or does the data go through the host getting backed up?
I know that for Microsoft's DPM server, the Hyper-V hosts with the VMs being backed up use the application-aware snapshot feature to take a snapshot on the VSA system; the Hyper-V host then reads the data over the iSCSI network and sends it to the DPM server over a different network (typically after compressing it).
I'm a Microsoft shop, so I don't really know the ESX details, but I think there are issues with bonding there, and the best you might be able to do is 2 Gb in that situation.
As a side note, I'm not sure whether the DSM solution, which technically opens up 4 Gb to the host, will actually give you 4 Gb, since all data has to be replicated to each node. That means if you are saturating 1 Gb host-to-target, the VSAs also need 1 Gb of VSA-to-VSA bandwidth to mirror that data. It might not be that bad; I don't have the network setup to test the idea. I think a 2-node cluster performs about the same with the HP DSM on Microsoft as with MPIO on ESX, but once you get to 3+ nodes the HP DSM really starts to shine. There is a lot of good reading on the subject in the DSM documents.
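The replication caveat above can be sketched as a simple min() over the two paths involved. This is a hypothetical model, not a measurement: it only illustrates the point that mirrored writes are capped by inter-node bandwidth when both ride the same links:

```python
# Sketch of the replication-overhead concern: with mirrored (Network RAID-10
# style) storage, every write landing on one node must also be forwarded to
# its partner node, so sustained write throughput cannot exceed the
# inter-node link capacity. Figures are assumptions for illustration.
def effective_write_gbps(host_to_cluster_gbps: float,
                         inter_node_gbps: float) -> float:
    """Sustained writes are limited by the slower of the two paths."""
    return min(host_to_cluster_gbps, inter_node_gbps)

# Host can push 4 Gb/s via MPIO, but each node's 2 Gb bond also carries
# the mirror traffic, so the mirror path becomes the ceiling for writes.
print(effective_write_gbps(4.0, 2.0))  # 2.0
```

Reads don't pay this penalty, which is consistent with the observation that MPIO can still deliver the full aggregate bandwidth for read-heavy workloads like backups.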
© Copyright 2019 Hewlett Packard Enterprise Development LP