Community Home > Storage > Midrange and Enterprise Storage > StoreVirtual Storage > Re: Performance - 1gbps vs. 10gbps NICs?
09-14-2010 11:35 AM
Performance - 1gbps vs. 10gbps NICs?
I'm looking at the HP StorageWorks P4500 G2 Virtualization SAN Solution (BQ888A), as it seems to offer a good mix of speed and capacity; if I add an AX701A I get the capacity I need at my primary site, plus a bundle of VSA licenses so I can build a "cheap" DR site.
My question concerns iSCSI performance. Rightly or wrongly, I'm wary, as I get the impression that it's much easier to mess up iSCSI than, say, Fibre Channel.
Specifically: would getting the 10gbps upgrade kit for the SAN units be beneficial simply because there are fewer connections to balance across?
I'm already thinking that with my vSphere hosts it would make sense to have a single 10gbps pNIC as primary in my vSwitches, with a 1gbps NIC purely for backup.
09-15-2010 12:14 PM
Re: Performance - 1gbps vs. 10gbps NICs?
My setup is six P4500s, each with twelve 145GB 15k rpm SAS drives, connected to five vSphere 4 servers running about 80 server VMs.
I'd suggest going with the 1gbps NICs and testing them out in your environment before taking the 10gbps jump.
Hope that helps.
09-16-2010 03:06 AM
Re: Performance - 1gbps vs. 10gbps NICs?
As more and more 10Gbps kit comes out and the price drops, it'll become the standard. Until then, we've not seen any performance problems on our P4500.
09-16-2010 06:16 AM
Re: Performance - 1gbps vs. 10gbps NICs?
I have a three-node P4500 that hosts about 70 VMs. On average I see it running between 50-100 Mbps. When we run backups of our Exchange environment, it jumps higher, up towards 500 Mbps, without any degradation in performance for the rest of our VMs.
That being said, in test environments with ALB, using Robocopy as an example, I've been able to push a client's P4300 ML SATA SAN up close to 950 Mbps, which is very impressive. I wasn't able to get beyond that because of the host limitation.
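As a rough sanity check (a sketch, not a benchmark), the figures quoted above can be compared against the line rate of a single 1 GbE link; the throughput numbers come from the post, while treating 1000 Mbps as the ceiling of one link is a simplifying assumption that ignores iSCSI/TCP framing overhead:

```python
# Compare the observed throughput figures against a single 1 GbE link's
# line rate. Assumption: 1000 Mbps ceiling per link, ignoring protocol
# overhead, which is why ~950 Mbps is about as high as one host can push.

LINE_RATE_MBPS = 1000  # one 1 GbE link

def utilization(observed_mbps: float, line_rate: float = LINE_RATE_MBPS) -> float:
    """Fraction of a single link's line rate used by the observed throughput."""
    return observed_mbps / line_rate

for label, mbps in [("average load", 100),
                    ("Exchange backup", 500),
                    ("Robocopy/ALB peak", 950)]:
    print(f"{label}: {mbps} Mbps = {utilization(mbps):.0%} of one 1 GbE link")
```

The 950 Mbps peak sitting at 95% of a single link is consistent with the "host limitation" mentioned: with ALB, one client conversation still rides one physical link.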
09-16-2010 09:46 AM
Re: Performance - 1gbps vs. 10gbps NICs?
Just to clarify: when I said my concern was performance, it's not that I doubt multiple 1gbps links are sufficient. It's more that with, say, three nodes of 15k SAS P4500 and two ESX boxes, that's a lot of NICs flying about and a lot of things to get right: switch config, ESX multipathing, and multipathing from within the guest if I wanted to do application-aware snapshots of things like Exchange and SQL.
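The "lot of NICs flying about" point can be made concrete with some back-of-the-envelope counting. This is illustrative arithmetic only; the per-device NIC counts (two 1 GbE ports per P4500 node and per ESX host for iSCSI, versus one 10 GbE port each after the upgrade kit) are assumptions about a typical layout, not figures from the thread:

```python
# Count the iSCSI-carrying switch ports that need correct configuration
# and load balancing. Assumed layout: 2 x 1 GbE per device in the 1 GbE
# design, 1 x 10 GbE per device after the upgrade kit.

def iscsi_ports(nodes: int, nics_per_node: int,
                hosts: int, nics_per_host: int) -> int:
    """Total SAN-node plus ESX-host ports carrying iSCSI traffic."""
    return nodes * nics_per_node + hosts * nics_per_host

gige = iscsi_ports(nodes=3, nics_per_node=2, hosts=2, nics_per_host=2)
tengig = iscsi_ports(nodes=3, nics_per_node=1, hosts=2, nics_per_host=1)
print(f"1 GbE design: {gige} iSCSI ports; 10 GbE design: {tengig} iSCSI ports")
```

Halving the port count is the real appeal of the 10 GbE kit for a small deployment: fewer links to bond, balance, and troubleshoot, independent of raw bandwidth.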