Queue depth during a vmware migrate
09-14-2012 08:53 AM
Hello,
Not sure if anyone can tell me whether this is something I need to fix on the SAN or on my ESX hosts. When we migrate a guest VM onto the SAN, the start of the migration causes a really high queue depth (QDT) on the SAN. The migration starts off really slowly, stalling at a small percentage, say 3% or 6%, and the QDT stays really high for a while; then after 10 minutes or so the QDT drops back to normal levels, the migration continues, and all is fine.
This is on a 2-node P4500 G2. Our normal QDT is anywhere between 2 and 10, but when I try migrating the VM it goes up to 148!
ESX version 4.x and SAN/iQ version 9.0.
Thanks,
Dan.
Solved!
09-14-2012 11:59 AM
Re: Queue depth during a vmware migrate
What is the total background load (from all machines, VMs, etc.) in IOPS on the cluster? Are the disks 7.2K or 15K RPM?
Gediminas
09-14-2012 12:42 PM - edited 09-14-2012 03:43 PM
Re: Queue depth during a vmware migrate
Current normal usage: 200-400 IOPS on 15K 600GB SAS drives.
When I start a migration to the SAN, it jumps to 14,000 IOPS with a queue depth of about 120, sometimes dropping to 2.
09-14-2012 06:33 PM
Solution
You may want to check several things:
1. Are the hosts created on the LHN with load balancing enabled? If not, enable it.
2. Where are all the iSCSI sessions running? Are they load-balanced between the 2 nodes? You can check this by clicking on the cluster; the right-hand pane should have an iSCSI Sessions tab.
3. If using Active/Passive NIC bonding, are the active NICs on the same switch? It's preferred to have them on the same switch so that volume mirroring doesn't lag by crossing the ISL between the switches (assuming you have 2 switches with an ISL).
4. Do you have flow control enabled on all the ports (ESX and Lefthand NICs)?
5. Make sure you don't have jumbo frames enabled on only part of the setup. They either have to be completely switched off or enabled everywhere: ESX, the network switches, and the Lefthand nodes. (Check the ESX vmnic frame size with the esxcfg-vswitch -l command.)
6. Run the diagnostic tests on both Lefthand nodes; all should pass, especially the BBU test. A failed BBU can hurt performance badly.
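For points 4 and 5, a quick sketch of how to check from the ESX 4.x service console (the vmnic name below is an example, not taken from this thread; substitute your own iSCSI-facing uplinks):

```shell
# List vSwitches with their MTU; jumbo frames show as MTU 9000,
# the default is 1500. A mix of values across hosts and switches
# indicates a partial jumbo-frame setup.
esxcfg-vswitch -l

# Show flow-control (pause frame) settings for a physical NIC.
# Repeat for each vmnic used for iSCSI.
ethtool -a vmnic0
```

If autonegotiated pause shows as off on the NIC or the switch port, flow control is not actually in effect end to end.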
Apart from this, you may try:
1. vMotion to a different datastore volume with a different block size (1MB, 2MB, 4MB and 8MB block sizes are available for ESX 4.x datastores, allowing different maximum VMDK sizes). See if that makes a difference.
2. Upgrade to SAN/iQ 9.5. I have personally seen performance improvements with it (not very drastic, though), so it's worth a shot.
3. Reset the counters on the Lefthand network ports before starting the vMotion and check them afterwards; NIC port errors can slow performance while data is mirrored across the nodes. (Not a very likely cause, but it doesn't hurt to check.)
Please let us know the results.
09-14-2012 06:52 PM
Re: Queue depth during a vmware migrate
1. Yes.
2. Yes, they are balanced.
3. Not using Active/Passive.
4. No; it looks like flow control is not enabled on the switch ports connected to the ESX hosts.
5. We don't use jumbo frames.
6. Will have to try that.
Looks like I will have to enable flow control.
Thanks for the detailed post.
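As a rough sketch of the flow-control step, on an HP ProCurve switch (the switch model and port range here are assumptions, not stated anywhere in this thread; adapt to whichever switch and ports face the ESX hosts and P4500 nodes):

```
configure
interface 1-24 flow-control
write memory
```

On other vendors' switches the command differs (e.g. per-port `flowcontrol receive on` / `flowcontrol send on` on some platforms), so check your switch's CLI reference before applying.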
09-15-2012 10:58 PM
Re: Queue depth during a vmware migrate
Sometimes I really think HP should include a manual in the box with an FAQ for these units...
Flow control makes a huge difference to their performance for whatever reason, yet it is not clearly documented (in BOLD), and no one normally reads the manuals anyway :)
David Tocker