AdvFS Performance Question
Operating System - Tru64 Unix
02-23-2005 04:37 AM
I've got an embedded database running on Tru64 5.1B / PK4. The storage is an EVA3000 array. Our main data volume comprises five 100 GB VRAID1 virtual disks, and we have bumped the tag queue depth for the HSV disks up to 100. The five virtual disks make up a single AdvFS domain. Given that our load is roughly 90% read / 10% write, is there anything to be gained by making five separate AdvFS domains? Are there any AdvFS-level data structures that would give a performance improvement from being in parallel?
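In case it helps, the domain was built more or less along these lines (device names and the domain/fileset names here are illustrative, not our actual EVA virtual disk names):

# create the domain on the first virtual disk, then add the other four
mkfdmn /dev/disk/dsk10c data_dmn
addvol /dev/disk/dsk11c data_dmn
addvol /dev/disk/dsk12c data_dmn
addvol /dev/disk/dsk13c data_dmn
addvol /dev/disk/dsk14c data_dmn

# one fileset in the domain, mounted the usual way
mkfset data_dmn data_fs
mount data_dmn#data_fs /data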
Solved!
2 REPLIES
04-01-2005 08:36 AM
Solution
Hi Mark,
Given the current use of the domain, using 5 separate domains will not gain you any performance. However, if your load were reversed (90% write / 10% read), you would get a performance boost from having 5 separate transaction logs, one on each volume. The log is not used nearly as much with 90% reads. There are no other domain-wide AdvFS data structures that would give you a performance boost from using multiple domains.
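As an aside, if you ever want to see where the single log in this domain lives, or move it to a quieter volume, something like the following should work (domain name is made up; double-check the switchlog(8) reference page before using it):

showfdmn data_dmn      # the volume flagged with an "L" holds the transaction log
switchlog data_dmn 3   # move the log to volume index 3, e.g. the least busy disk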
Using multiple volumes, as you are, is your best bet for getting the most performance out of this domain; it allows reads to proceed in parallel across volumes.
One thing you could look into is how your "hot files" are balanced across the 5 volumes. If a few large files are in use all the time (your "hot files"), you want to spread them across volumes. Alternatively, you can manually spread the extents of a large file across all the volumes in the domain using the "migrate" command; in effect, this manually stripes the file.
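For example, something along these lines spreads a busy file by hand (file name, page offsets, and volume indexes below are made up; see migrate(8) for the exact options):

# move pages 0-24999 of the file to volume 2, the next chunk to volume 3, etc.
migrate -p 0     -n 25000 -d 2 /data/hot_file.dat
migrate -p 25000 -n 25000 -d 3 /data/hot_file.dat
migrate -p 50000 -n 25000 -d 4 /data/hot_file.dat

For a file you can recreate from scratch, the "stripe" command does the same job up front; it must be run on a zero-length file before any data is written:

stripe -n 5 /data/hot_file.dat   # stripe the new file across 5 volumes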
To find your hot files, use the "vfast -L hotfiles" command. You can also check I/O activity on your devices using the "iostat" and "advfsstat" commands. There are many options for these utilities that are explained in the manpages.
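For instance (domain name and intervals are illustrative):

vfast -L hotfiles data_dmn     # list the most active files in the domain
iostat 5                       # device-level I/O, repeated every 5 seconds
advfsstat -i 5 -v 2 data_dmn   # per-volume AdvFS I/O statistics every 5 seconds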
Lisa Smith
Tru64 AdvFS Engineering
04-01-2005 08:58 AM
Re: AdvFS Performance Question
Thank you Lisa!
We are pursuing striping our hot files across all the volumes, but our application vendor doesn't like the idea of defragmenting or migrating on the fly. (I know, I know.) It's an embedded database called Caché from InterSystems.