11-04-2004 01:58 AM
HP-UX Kernel Configuration on Record-Breaking SPEC TPC Benchmarks
dbc_min_pct 0
dbc_max_pct 0
swapmem_on 0
Why would this be? Does it mean that all TPC storage is RAW?
Still reading the full disclosures...
Also, the PA-8800 based systems do not seem to be listed on the TPC website. Does this mean the Itanium servers are orders of magnitude faster than the PA-RISC dualies?
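For context on what those tunables control: dbc_min_pct and dbc_max_pct bound the HP-UX dynamic buffer cache as percentages of physical memory, and swapmem_on enables pseudo-swap. A minimal sketch of the arithmetic, using a hypothetical 128 GB server and the 3% figure reported later in the thread (both numbers are illustrative, not from the disclosure):

```python
# Hedged sketch: how much RAM the HP-UX dynamic buffer cache may claim.
# The 128 GB RAM size is a hypothetical; dbc_min_pct/dbc_max_pct are
# percentages of physical memory.

GB = 1024 ** 3

def buffer_cache_bounds(ram_bytes, dbc_min_pct, dbc_max_pct):
    """Return (min, max) bytes the dynamic buffer cache may occupy."""
    return (ram_bytes * dbc_min_pct // 100,
            ram_bytes * dbc_max_pct // 100)

ram = 128 * GB
# A 3% cap leaves almost all RAM for the database, which makes sense
# if the benchmark does raw-device I/O that bypasses the buffer cache.
lo, hi = buffer_cache_bounds(ram, 3, 3)
print(hi // GB)   # prints 3 (a ~3 GB ceiling out of 128 GB)
```

The point of pinning the cache so low is that raw I/O never passes through it, so any memory it holds is simply wasted.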
11-04-2004 02:06 AM
Re: HP-UX Kernel Configuration on Record-Breaking SPEC TPC Benchmarks
I was reading the client configuration section. The clients were HP-UX boxen as well.
The server indeed has kernel tunable changes for caching, and both are set at 3%.
11-04-2004 02:35 AM
Re: HP-UX Kernel Configuration on Record-Breaking SPEC TPC Benchmarks
"We use 70 data arrays and 4 log arrays. Every array is attached to the system via a Fibre Channel link.
Each of the 70 data arrays contains 30 36.4GB disk drives, allocated to 7 RAID1 LUNs. After formatting and mirroring, the available capacity of the arrays is 491.149GB.
All LUNs on the data arrays numbered 0, 1, 2, 3, 4, 5 and 6 are used as raw files. Each is 15.627GB. All LUNs numbered 7 are grouped together in the volume group vgdata1, with a size of 90GB per array. All the Oracle files on LUN 7 are logical volumes that are striped 70-way across all the data arrays."
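The quoted capacity figures can be sanity-checked with a little arithmetic. A sketch (the interpretation that the ~10% gap is formatting overhead, and the under-half utilization inference, are mine, not stated in the disclosure):

```python
# Sanity-check the per-array capacity figures from the disclosure.
DISKS_PER_ARRAY = 30
DISK_GB = 36.4

raw = DISKS_PER_ARRAY * DISK_GB      # 1092.0 GB raw per array
mirrored = raw / 2                   # 546.0 GB after RAID1 mirroring
available = 491.149                  # GB quoted after formatting

overhead = 1 - available / mirrored  # roughly 10%, presumably formatting

# Space actually allocated per array: seven 15.627 GB raw LUNs plus
# the 90 GB vgdata1 slice -- well under half the available capacity.
used = 7 * 15.627 + 90               # about 199.4 GB
print(raw, mirrored, round(overhead, 3), round(used, 3))
```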
This system is using LVM (and all raw Oracle storage). I am particularly intrigued by:
"All the Oracle files on LUN 7 are logical volumes that are striped 70-way across all the data arrays."
Does this mean the logical volumes in this case have 70 columns (70-way)? That would mean that for reads or writes, all 70 arrays' LUN 7s are engaged at once. I wonder what stripe size (or stripe width) is used? Normally I never exceed 8-way stripes, with each member LUN on a different HBA and array.
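On the 70-column question: with round-robin striping, a logical offset maps to a column as sketched below, so any I/O spanning 70 or more stripe units touches every LUN 7 at once. The 64 KB stripe unit here is an assumption for illustration; the disclosure does not state it.

```python
# Sketch of round-robin striping: which column (array) serves each chunk.
# 70 columns matches the disclosure; the 64 KB stripe unit is assumed.
COLUMNS = 70
STRIPE_UNIT = 64 * 1024  # bytes

def column_for_offset(logical_offset):
    """Array (column) that holds the byte at this logical offset."""
    return (logical_offset // STRIPE_UNIT) % COLUMNS

def columns_touched(offset, length):
    """Number of columns engaged by an I/O of `length` bytes at `offset`."""
    first = offset // STRIPE_UNIT
    last = (offset + length - 1) // STRIPE_UNIT
    # Cap at COLUMNS: past one full rotation every column is hit.
    return min(last - first + 1, COLUMNS)

# A single 8 MB read spans 128 stripe units -> all 70 arrays engaged.
print(columns_touched(0, 8 * 1024 * 1024))   # prints 70
```

With a smaller stripe unit the crossover to "all arrays engaged" happens at an even smaller I/O size, which is presumably the point of going 70-wide for a TPC workload.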
11-08-2004 02:06 PM
Re: HP-UX Kernel Configuration on Record-Breaking SPEC TPC Benchmarks
11-08-2004 04:25 PM
Re: HP-UX Kernel Configuration on Record-Breaking SPEC TPC Benchmarks
The servers are awesome, don't get me wrong, and I'd encourage anyone to use the systems, just not the configurations they use for benchmarking.
Because HP is moving to Itanium, they are doing their performance testing on the Itanium platform. Again, if PA-RISC meets your needs, there is no reason not to use such systems.
It's not about finding your system on some chart in a magazine; it's about your system doing what it's intended to do as efficiently as it can.
As to your original question about PA-RISC being slower or faster than Itanium, I would say no: the latest PA-RISC architecture machines are not an order of magnitude slower.
Now, when someone reads this post a few years from now, with HP keeping its pledge to build no more PA-RISC systems after the next series (the 8900 series), that statement might be true. But that's the future, and now is now.
SEP
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
11-08-2004 09:13 PM
Re: HP-UX Kernel Configuration on Record-Breaking SPEC TPC Benchmarks
As for tpmC, it was just the same with Informix. They once attached 1,400 disks to a server (somehow) and only used the outer 25% of each one. Not real-world.
11-09-2004 01:02 AM
Re: HP-UX Kernel Configuration on Record-Breaking SPEC TPC Benchmarks
11-09-2004 01:20 AM
Re: HP-UX Kernel Configuration on Record-Breaking SPEC TPC Benchmarks
With VxVM, I can build layered volumes, which are essentially stripes of stripes (I've never gone beyond 8-way, with each LUN on a different HBA and each LUN coming from a different array) or RAIDs of stripes, so I can engage all the LUNs on an array.
You ask: is there ever an environment with such an I/O requirement, and are these reference benchmarks really reflective of "real world" scenarios? I think so. Are there really environments out there that need that many arrays and I/O connections? Absolutely yes; that is why the battle these days is no longer over CPU prowess but over bus and throughput design. One such environment is the biotech industry (e.g. gene-sequencing operations) and simulations that deal with very large datasets and high I/O throughput requirements.
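A layered stripe-of-stripes can be modeled as two levels of the same round-robin mapping: the top level picks a sub-volume, and each sub-volume stripes across its own LUNs. A sketch, where all counts and stripe units are illustrative assumptions rather than any particular VxVM configuration:

```python
# Sketch of a two-level (layered) stripe: a top-level stripe whose
# columns are themselves 8-way stripes. All sizes here are assumptions.
TOP_COLUMNS = 4          # sub-volumes at the top level
SUB_COLUMNS = 8          # LUNs inside each sub-volume (8-way, per the post)
TOP_UNIT = 1024 * 1024   # 1 MB top-level stripe unit
SUB_UNIT = 64 * 1024     # 64 KB sub-level stripe unit

def lun_for_offset(offset):
    """Return (sub_volume, lun_within_sub) serving this logical byte."""
    sub_vol = (offset // TOP_UNIT) % TOP_COLUMNS
    # Translate the logical offset into the chosen sub-volume's space:
    stripe_row = offset // (TOP_UNIT * TOP_COLUMNS)
    sub_offset = stripe_row * TOP_UNIT + offset % TOP_UNIT
    lun = (sub_offset // SUB_UNIT) % SUB_COLUMNS
    return sub_vol, lun

print(lun_for_offset(0))         # prints (0, 0)
print(lun_for_offset(TOP_UNIT))  # prints (1, 0): next MB, next sub-volume
```

With 4 sub-volumes of 8 LUNs each, a large sequential I/O fans out across all 32 spindle groups, which is the same effect the 70-way TPC configuration achieves in one flat level.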
11-09-2004 01:31 AM
Re: HP-UX Kernel Configuration on Record-Breaking SPEC TPC Benchmarks
Very interesting thread, Nelson, a real mind expander.
SEP
11-09-2004 01:40 AM
Re: HP-UX Kernel Configuration on Record-Breaking SPEC TPC Benchmarks
With respect to the PA-8800 and mx2, a Superdome with 128 processors of either type can easily beat the current Itanium 2 based Integrity Superdome numbers, which are based on 64 processors. These benchmarks are obviously very expensive to run and very time-consuming to optimize. HP just published a new TPC-H result, and there are other important benchmarks that need to be run; it doesn't make sense to put all the benchmarking resources into a single test. I think TPC-H might be a closer-to-reality test for today's use of these large systems than TPC-C.