04-02-2004 12:54 AM
LUN Creation on VA7410
Is there a recommended LUN size for the VA7410? To be more specific, is a 100 GB LUN more efficient than a 50 GB one?
1 REPLY
04-02-2004 02:42 AM
Re: LUN Creation on VA7410
In general, the size of a LUN has no effect on performance for the VA series -- that is the beauty of virtualization. The VA distributes the capacity of every LUN across all the disks in the RAID Group (RG). The VA74x0 has two RGs, the VA71x0 has one RG. The maximum performance of an RG can be demonstrated with just a single LUN.
That being said, the OSs are a different matter. Predominantly sequential workloads are typically not an issue; however, small-transfer, random workloads (which are typical in all systems except those dedicated to data warehouse applications) need special attention.
To achieve the best performance, the array must be presented with a workload that has sufficient concurrency -- i.e., lots of simultaneous IOs to keep the disks busy. Keeping the disks busy is the key; hence the level of concurrency required is relative to the number of disks in the array.
The factors that control concurrency are the application and the OS device queuing.
First, the queuing. Windows does a good job of managing the device queues, so it does not need additional attention. However, HP-UX is a different story. Unfortunately, the default queue depth for a LUN on HP-UX is 8, which is typically only enough to keep 4 to 6 disks busy. So, if you have more than 6 disks in your array you need to do something: either use more LUNs (and stripe them with LVM -- that is important) or manually increase the queue depth of each LUN (use the scsictl command).
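For illustration, the commands look roughly like this on HP-UX (the device path, volume group, and sizes below are only examples -- substitute your own):

# Check the current attributes, including queue_depth, of a LUN
scsictl -a /dev/rdsk/c4t0d1

# Raise the queue depth for that LUN (not persistent across a reboot)
scsictl -m queue_depth=16 /dev/rdsk/c4t0d1

# Or use more LUNs and stripe them: a logical volume striped across
# 4 physical volumes with a 64 KB stripe size, 4096 MB in size
lvcreate -i 4 -I 64 -L 4096 -n lvol_data /dev/vg01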
Now, the application. I can't speak to every application, but I'll give an example. For Windows, a "drag and drop" (dragging a folder from one disk to another), and for HP-UX a dd (and others like it) disk-to-disk copy, are single threaded. The IOs are sequential; the first IO must complete before the next IO is launched. These applications do not create sufficient concurrency. What to do? Run multiple copies simultaneously -- multiple folders or multiple dds at one time, as in the sketch below.
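Something along these lines (the file names are just placeholders):

# One dd stream: each IO waits for the previous one, so most disks sit idle
dd if=/data/file1 of=/backup/file1 bs=64k

# Several dd streams launched in the background at once keep more disks busy
dd if=/data/file1 of=/backup/file1 bs=64k &
dd if=/data/file2 of=/backup/file2 bs=64k &
dd if=/data/file3 of=/backup/file3 bs=64k &
wait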
Get it? One LUN is enough for the VA -- but you must manage the application and the OS device queuing so the disks stay busy.