Performance issue with Continuous Access
01-13-2009 08:25 AM
Hi,
We have two EVA8100s, each with 96 x 450GB/15k FC drives set up as a single Disk Group, so each EVA8100 has one Disk Group of 96 disks.
We have a 4Gb/s ISL between the local and remote site switches. The distance from the local to the remote site is about 5 km.
I've set up CA synchronous replication for a 500GB Vdisk between the two EVA8100s.
As soon as the full copy started, we noticed a major I/O performance issue on other Vdisks (which are not replicated with CA) on the source EVA.
Why does this happen, and how can we prevent it in the future?
Thanks
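As a quick sanity check on the link itself, assuming a signal speed of roughly 200,000 km/s in fibre and ignoring switch and controller latency:

    round trip ≈ (2 × 5 km) / 200,000 km/s ≈ 50 µs

So each synchronous write waits only on the order of 50 µs extra for the remote acknowledgment; the 5 km distance alone is unlikely to explain a major slowdown on the non-replicated Vdisks.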
3 REPLIES
01-16-2009 07:12 AM
Re: Performance issue with Continuous Access
Your 500GB vdisk spans the physical spindles of all the drives in the group, as do ALL of your remaining vdisks, so the I/O is split between the vdisks. If you want to combat this, put some disks in another group for the CA volume (leaving a good overhead) and run it on its own. For performance and resilience I would also create a minimum disk group (with overhead) for the system's LUN0 to live on too.
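As a rough illustration of why the shared group hurts, here is a minimal back-of-envelope sketch in Python. The figures (about 180 random IOPS per 15k spindle, and the copy consuming half the disk time) are assumptions for illustration, not measured EVA numbers:

# Back-of-envelope model of shared-spindle contention during a CA full copy.
# All figures are illustrative assumptions, not measured EVA values.

SPINDLES = 96             # one disk group spanning all 96 disks
IOPS_PER_15K_DRIVE = 180  # assumed random IOPS for a 450GB/15k FC drive
COPY_SHARE = 0.5          # assumed fraction of head time taken by the copy's reads

total_iops = SPINDLES * IOPS_PER_15K_DRIVE
during_copy = total_iops * (1 - COPY_SHARE)

print(f"Random IOPS budget for all vdisks, no copy running: {total_iops}")
print(f"Random IOPS budget left during the full copy:       {during_copy:.0f}")
# Every vdisk is striped across the same 96 spindles, so the copy's
# sequential reads steal head time from every other vdisk at once.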
_________________________________________________
How to assign points on this new forum? Click the Kudos Star!
01-16-2009 07:20 AM
Re: Performance issue with Continuous Access
There could be two reasons: either the disk spindles are the bottleneck or the controller is busy.
1. If you just created this 500GB Vdisk and are running the full sync, there is a possibility of disk group releveling, and the read-intensive sync will also compete with other hosts (especially if they are also read intensive, and worse if they have a random I/O pattern).
2. CA replication may overload the HSV controllers, so you need to check the HSV load and also whether the controllers are balanced.
Use EVAPerf to see which resources are overloaded; you may hit bingo on the disks or the controller.
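If you dump the EVAPerf counters to a CSV file, a few lines of Python are enough to flag hot resources. The column names below ("Object", "Busy %") are hypothetical placeholders; match them to the header row of your actual export:

# Hedged sketch: scan an EVAPerf CSV export for overloaded resources.
# The column names "Object" and "Busy %" are hypothetical placeholders;
# adjust them to the header row of your actual export.
import csv

THRESHOLD = 80.0  # percent-busy level worth a closer look

with open("evaperf_export.csv", newline="") as f:
    for row in csv.DictReader(f):
        busy = float(row.get("Busy %") or 0)
        if busy >= THRESHOLD:
            print(f"{row.get('Object', '?')}: {busy:.1f}% busy")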
01-16-2009 11:41 AM
Re: Performance issue with Continuous Access
> for the system's LUN0 to live on too.
The EVA's LUN_Z is a controller [type 12(10)] LUN. It does not use a virtual disk. Even if it did, creating a separate disk group would waste (er, require) a minimum of 8(!) disk drives.
On the other hand, there is at least one disk drive per disk group (called the 'quorum disk') which holds the EVA's metadata. This is fully automatic, with no control by the user.