VSA design and performance
05-22-2013 10:02 AM
Re: VSA design and performance
Regarding splitting the VM traffic from the iSCSI traffic: are you using both eth0 and eth1 on the VSA for this? I feel like I'm missing something here. I've disabled eth1 and have the VSA and the vmkernel ports on the same vSwitch. Are you suggesting I change this? If so, I assume they'd be on the same subnet? Here is a screen cap of my current vSwitch. Any pointers greatly appreciated!
Chris
08-15-2013 09:14 AM
Re: VSA design and performance
In my experience I would go with an active/standby uplink config for the VSA; it's more stable.
Also, in almost every case you won't get more than 1 Gb. That is by design (one working interface on the VSA, the ESXi iSCSI initiator going through one link, VMware's load-balancing algorithm, etc.).
Also remember that you will have replication traffic between the VSAs (though 1 Gb full duplex helps).
But this is a virtual appliance. I think 1 Gb is quite OK for its normal tasks; even 100 Mb seems to work.
I also doubt you would get much more from a 10 Gb uplink; squeezing 10 Gb of bandwidth out of a VM without a root password and tuning is quite complex.
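The single-link cap described above can be sketched in a few lines: with ESXi's default hash-based NIC teaming, each iSCSI session is pinned to exactly one physical uplink, so a single session can never exceed one link's bandwidth no matter how many NICs are in the team. A minimal illustrative model (hypothetical names, not VMware code):

```python
# Illustrative model of per-session uplink pinning (hypothetical, not VMware code).
# With hash-based NIC teaming, each iSCSI session maps to exactly one uplink,
# so a single session is capped at that one link's bandwidth.

UPLINK_GBPS = [1.0, 1.0]  # two 1 Gb NICs in a team

def uplink_for_session(session_id: str, num_uplinks: int) -> int:
    """Pin a session to one uplink by hashing its identifier."""
    return hash(session_id) % num_uplinks

def session_max_gbps(session_id: str) -> float:
    """A session can never use more than its single pinned uplink."""
    return UPLINK_GBPS[uplink_for_session(session_id, len(UPLINK_GBPS))]

# One iSCSI session tops out at 1 Gb even though the team totals 2 Gb:
assert session_max_gbps("iqn.esx1-to-vsa") == 1.0
assert sum(UPLINK_GBPS) == 2.0
```

Multiple sessions can land on different uplinks, which is why aggregate throughput can exceed 1 Gb even though any one session cannot.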
I've written some notes on the VSA here; they will probably be useful.
08-23-2013 05:42 AM
Re: VSA design and performance
My question is: if you have two nodes plus a third for quorum, does ESXi ever send requests to the third node, which then relays them? If so, that would be a serious choke point, since it holds no data.
If ESXi only accesses the two storage nodes, then VMs running on the same host as a VSA would have a 100% local read hit rate, but would have to write to both nodes?
08-23-2013 08:00 AM
Re: VSA design and performance
Assuming your third node is a FOM: no, it will not receive any data requests from your initiators. Only nodes inside the cluster handle iSCSI requests, and only for the LUNs inside that cluster.
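The FOM's role can be modeled very simply: it contributes a quorum vote but holds no data, so initiator I/O only ever touches the storage nodes. A hypothetical sketch of that split (illustrative only, not LeftHand OS logic):

```python
# Hypothetical sketch of a 2-node cluster plus a Failover Manager (FOM).
# The FOM votes for quorum but never serves iSCSI I/O.

class Node:
    def __init__(self, name: str, is_fom: bool = False):
        self.name = name
        self.is_fom = is_fom

    def serves_iscsi(self) -> bool:
        # Only storage nodes inside the cluster handle initiator requests.
        return not self.is_fom

cluster = [Node("vsa1"), Node("vsa2"), Node("fom", is_fom=True)]

def has_quorum(alive: list) -> bool:
    # A majority of all three voters is required to stay online.
    return len(alive) > len(cluster) / 2

io_targets = [n.name for n in cluster if n.serves_iscsi()]
assert io_targets == ["vsa1", "vsa2"]        # FOM is never in the I/O path
assert has_quorum([cluster[0], cluster[2]])  # one VSA + FOM keeps quorum
assert not has_quorum([cluster[2]])          # FOM alone cannot
```

This is also why losing one VSA doesn't take the cluster down: the surviving VSA plus the FOM still form a majority, while the FOM itself never becomes a data choke point.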
08-29-2013 04:06 AM
Re: VSA design and performance
So two nodes would be faster with a VM on a VSA node, since you have a 100% chance of reading all data from localhost and a 100% chance of writing all data to localhost (though you have to wait for the second node to ACK the write).
I'd be curious how this works out compared to using more nodes, where the remote-data percentage goes way up.
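That locality trade-off is easy to quantify: with 2-way replication (Network RAID-10) across N storage nodes, each block lives on 2 of the N nodes, so a VM pinned to one VSA host finds roughly 2/N of its data locally. A back-of-the-envelope sketch (assumes data is spread evenly across nodes):

```python
# Back-of-the-envelope locality estimate for 2-way replication across N nodes.
# Assumes data is spread evenly; each block has replicas on 2 of the N nodes.

def local_read_fraction(num_nodes: int, replicas: int = 2) -> float:
    """Probability that the local node holds a replica of a random block."""
    return min(replicas / num_nodes, 1.0)

# 2 nodes: every block is on both nodes, so reads are 100% local.
assert local_read_fraction(2) == 1.0
# 4 nodes: only half the data has a local copy; the rest crosses the wire.
assert local_read_fraction(4) == 0.5
# Writes always touch `replicas` nodes regardless of cluster size,
# so the write path cost stays constant while read locality falls as 2/N.
```

So under these assumptions, adding nodes buys capacity and aggregate throughput but steadily erodes the local-read advantage of co-locating VMs with a VSA.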