10-23-2009 02:26 AM
EMC Invista EVA6000 & EVA6100 Queue-Full
Hi guys!
In our environment we run a storage virtualization solution from EMC called "Invista" with two EVAs, a 6000 and a 6100.
The EMC solution is based on four Cisco FC switches with two intelligent modules.
We have configured seven virtual volumes on the Invista. These are mirrored to both EVAs and presented to the front end (a VMware cluster).
Take a look at the attachment.
Everything is fine until the EVAs get a high load from the front end. When that happens, the Invista receives a "Queue-Full (28h)" status, and sometimes the configured mirrors go down! In that case the Invista starts a re-sync.
That operation does nothing to improve the performance of the virtual disk array and takes approximately 24 hours.
The Invista round-robins I/O to the back-end EVAs, so under high I/O there must be heavy load on the EVA controllers. In my opinion, the EVA controller has to route many I/Os through the "mirror port" to the "preferred controller" of the target volume; if there are too many I/Os, the EVA returns Queue-Full (28h). We could only capture this trigger with a Fibre Channel analyzer.
The trace shows the FC reply from the EVA:
<1> <000:23:00.872_861_230> <1,1,3 / 1,1,4> <14>
Is there a way to fix this issue? For example:
- Adjust the "Host Properties" on the EVA, such as "Operating System"?
- Configure the controllers differently?
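To illustrate what Queue-Full (28h) means at the SCSI level, here is a minimal Python sketch. It is a hypothetical model of a target port with a bounded command queue, not EVA firmware; the queue depth and command counts are made-up numbers:

```python
from collections import deque

TASK_SET_FULL = 0x28  # SCSI status "Queue Full" / TASK SET FULL, as seen in the trace
GOOD = 0x00

class ScsiTargetPort:
    """A target port with a bounded command queue (think: one EVA host port)."""

    def __init__(self, queue_depth):
        self.queue_depth = queue_depth
        self.queue = deque()

    def submit(self, cmd):
        # Once the queue is saturated, new commands are rejected with 28h
        # and the initiator (here: the Invista) must back off and retry.
        if len(self.queue) >= self.queue_depth:
            return TASK_SET_FULL
        self.queue.append(cmd)
        return GOOD

    def complete_one(self):
        if self.queue:
            self.queue.popleft()

# A burst of writes arriving faster than completions can drain the queue:
port = ScsiTargetPort(queue_depth=8)
rejected = sum(port.submit(f"write-{i}") == TASK_SET_FULL for i in range(20))
print(f"{rejected} of 20 commands rejected with Queue-Full (28h)")
```

If, as described above, writes to the non-owning controller are also forwarded over the mirror port, the preferred controller's queue fills even faster than the front-end load alone would suggest.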
1 REPLY
10-30-2009 11:02 AM
Re: EMC Invista EVA6000 & EVA6100 Queue-Full
Just my 2 cents, but I would look at the configuration options on the Invista. I doubt the mirror port is the root of the problem, but I would think the Invista could be configured to use only the ports on the preferred controller unless that controller goes down.
If the write load is high and the primary EVA is faster than the secondary EVA (e.g., more spindles in the disk group), it is easy to see that the secondary EVA may not be able to keep up. At that point the Invista has three options (I don't know whether it supports them or not; a rough sketch of option 2 follows the list):
1. Stop writing to the mirror and re-sync later (best performance for the application).
2. Throttle the writes from the application to match the speed of the slowest mirror.
3. Buffer the writes to the mirror and send them later (async mirroring; of course this would require storage on the Invista, which it may not have).
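Here is a rough Python sketch of option 2 (the per-write latencies are assumed numbers, purely for illustration): a synchronous mirror acknowledges a write only when every leg has acknowledged it, so the effective latency is that of the slowest leg and the application can never outrun the slower EVA.

```python
def mirrored_write(block, legs):
    """Issue one write to every mirror leg; the ack arrives with the slowest leg."""
    return max(leg(block) for leg in legs)

# Modeled per-write service times (made-up numbers for illustration):
fast_eva = lambda block: 0.002  # 2 ms per write: more spindles in the disk group
slow_eva = lambda block: 0.008  # 8 ms per write: fewer spindles

avg = sum(mirrored_write(b, [fast_eva, slow_eva]) for b in range(1000)) / 1000
print(f"throttled (synchronous) write latency: {avg * 1000:.0f} ms per write")
```

Option 1 avoids that latency penalty but trades it for the mirror drop and the 24-hour re-sync you are seeing; option 2 keeps the mirror consistent at the cost of slowing every write to the slower EVA's pace.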