Operating System - HP-UX

HP-UX Kernel Configuration on Record-Breaking SPEC TPC Benchmarks

 
Alzhy
Honored Contributor

HP-UX Kernel Configuration on Record-Breaking SPEC TPC Benchmarks

I took a peek at the full disclosure document for the HP Integrity Superdome's million-tpmC performance on www.spec.org, and I noticed that the environment did not have any caching enabled at all!

dbc_min_pct 0
dbc_max_pct 0
swapmem_on 0

Why would this be? Does it mean that all TPC storage is RAW?
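For anyone who wants to compare against their own box, these buffer cache tunables can be queried without touching the kernel. A rough sketch from memory (kctune on 11i v2 and later, kmtune on 11i v1):

kctune dbc_min_pct dbc_max_pct swapmem_on    # 11i v2 and later
kmtune -l | grep -e dbc -e swapmem           # 11i v1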

Still reading the full disclosures...

Also, the PA-8800 based systems do not seem to be listed on the TPC website. Does this mean the Itanium servers are orders of magnitude faster than the PA-RISC dualies?
Hakuna Matata.
9 REPLIES 9
Alzhy
Honored Contributor

Re: HP-UX Kernel Configuration on Record-Breaking SPEC TPC Benchmarks

Scratch the caching question..

I had been reading the client configuration section; the clients were HP-UX boxen as well.

The server does indeed have kernel tunable changes for caching, and both dbc_min_pct and dbc_max_pct are set at 3%.
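If anyone wants to try the same 3% setting, something along these lines should do it. This is only a sketch; whether you can apply it on the fly or need a kernel rebuild/reboot depends on whether the tunables are dynamic on your release:

kctune dbc_min_pct=3 dbc_max_pct=3            # 11i v2 and later
kmtune -s dbc_min_pct=3 -s dbc_max_pct=3      # 11i v1, takes effect with the next kernel build/reboot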
Hakuna Matata.
Alzhy
Honored Contributor

Re: HP-UX Kernel Configuration on Record-Breaking SPEC TPC Benchmarks

I was interested in the storage configuration as well (I've done VERY LARGE storage configs with VxVM, Solaris and multiple HDS arrays before), and I found the following statement:

"We use 70 data arrays and 4 log arrays. Every array is attached to the
system via a Fibre Channel link.
Each of the 70 data array contains 30 36.4GB disk drives, allocated to 7
RAID1 LUNs. After formatting and mirroring, the available capacity of the
arrays is 491.149GB.
All LUNs on the data arrays numbered 0, 1, 2, 3, 4, 5 and 6 are used as raw
files. Each is 15.627GB. All LUNs numbered 7 are grouped together in the
volume group vgdata1, with a size of 90GB per array. All the Oracle files
on LUN 7 are logical volumes that are striped 70-way across all the data
arrays."

So this system is using LVM (and all raw Oracle storage). I am particularly intrigued by:

"All the Oracle files
on LUN 7 are logical volumes that are striped 70-way across all the data
arrays."

Does this mean the logical volumes in this case have 70 columns (70-way)? That would mean that for reads or writes, LUN 7 on all 70 arrays is engaged at once. I wonder what stripe size (stripe width) was used? Normally I never exceed 8-way stripes, with each member LUN on a different HBA and array.
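If it really is plain LVM, the 70-way stripe would just be lvcreate with -i 70. The FDR excerpt above doesn't give the stripe size, so the 64 KB below is purely my guess, and the device and volume names are made up:

# vgdata1 built from LUN 7 of each of the 70 data arrays
# (after pvcreate and the usual group file setup; device names hypothetical)
vgcreate /dev/vgdata1 /dev/dsk/c10t0d7 /dev/dsk/c11t0d7 ...
# 70-column striped logical volume, 64 KB stripe size (my assumption, not from the FDR)
lvcreate -i 70 -I 64 -L 20480 -n lv_oradata /dev/vgdata1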

Hakuna Matata.
Ted Buis
Honored Contributor

Re: HP-UX Kernel Configuration on Record-Breaking SPEC TPC Benchmarks

Look at the number of users. Over a million, right? Is anyone doing OLTP with a million users? I am not sure how realistic these configurations are anymore, so I'm not sure how much you would really want to take away from what they did to get these large numbers. I once heard that for every 100,000 tpmC you need to do about 35,000 IOPS, so for over a million you would need 350,000+ IOPS. You also have to work hard to spread out that many I/Os to avoid hot spots. I think they only use a fraction of the total storage that exists in these configurations. TPC-H might be a more realistic scenario these days, or some of the smaller TPC-C results like the one on the rx2600.
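Just as a back-of-envelope check against the configuration quoted above: 70 data arrays times 30 drives is 2,100 spindles, and if you assume (my guess, not from the disclosure) something like 150-170 random IOPS per drive, that is roughly 315,000-350,000 IOPS, which lines up with the 35,000 IOPS per 100,000 tpmC rule of thumb.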
Mom 6
Steven E. Protter
Exalted Contributor

Re: HP-UX Kernel Configuration on Record-Breaking SPEC TPC Benchmarks

The nail has been hit on the head right here. These performance tests are not necessarily real world. Nobody runs their systems the way they are run in these tests, which are intended to generate marketing information.

The servers are awesome, don't get me wrong, and I'd encourage anyone to use the systems, just not the configurations they use for benchmarking.

Because HP is moving to Itanium, they are doing their performance testing on the Itanium platform. Again, if PA-RISC meets your needs, there is no reason not to use such systems.

It's not about finding your system on some chart in a magazine; it's about your system doing what it's intended to do as efficiently as it can.

As to your original question, about PA-RISC being slower or faster than Itanium, I would say no: the latest PA-RISC machines are not an order of magnitude slower.

If someone reads this post a few years from now, after HP has kept its pledge not to build more PA-RISC systems beyond the next series (the PA-8900), then that statement might be true. But that's the future, and now is now.

SEP
Steven E Protter
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
Steve Lewis
Honored Contributor

Re: HP-UX Kernel Configuration on Record-Breaking SPEC TPC Benchmarks

I am surprised they could find 70 disk arrays. I am also surprised that they could attach them all at once. It makes a mockery of my statements the other day about bandwidth limits.
As for the tpmC numbers, it was just the same with Informix. They once attached 1,400 disks to a server (somehow) and only used the outer 25% of each one. Not real-world.


Ted Buis
Honored Contributor

Re: HP-UX Kernel Configuration on Record-Breaking SPEC TPC Benchmarks

Since Integrity or PA-RISC Superdomes can support up to 192 I/O cards (16 cells with 12 I/O slots per cell), there is plenty of physical room to connect disk arrays; the 70 data plus 4 log arrays in that disclosure need only 74 Fibre Channel links. Now with the dual-port FC cards, there is even more capability.
Mom 6
Alzhy
Honored Contributor

Re: HP-UX Kernel Configuration on Record-Breaking SPEC TPC Benchmarks

The reason for the post was actually curiosity as to how the TPC tests were set up on the Integrity Superdomes. I was even more intrigued by how they built certain volumes: 70-way host-based stripes of the VA LUNs? And simply with LVM?

With VxVM I can build layered volumes, which are essentially stripes of stripes, or RAIDs of stripes, so I can engage all the LUNs on an array. I've never gone beyond 8-way, with each member LUN on a different HBA and coming from a different array.
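For comparison, this is roughly what one of my usual 8-way VxVM volumes looks like; the disk group, volume names, size and 64k stripe unit are placeholders, not from any FDR:

# 8-column stripe in disk group oradg, 64k stripe unit (all values hypothetical)
vxassist -g oradg make oravol01 100g layout=stripe ncol=8 stripeunit=64k
# or the layered striped-mirror form, where each column is a mirrored subvolume
vxassist -g oradg make oravol02 100g layout=stripe-mirror ncol=8 stripeunit=64k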

You ask: is there ever an environment that needs that much I/O, and are these reference benchmarks really reflective of "real world" scenarios? I think so. And are there really environments out there that need that many arrays and I/O connections? Absolutely yes; that is why the battle these days is no longer about CPU prowess but about bus and throughput design. One such environment is the biotech industry (e.g. gene sequencing operations) and simulations that deal with very large datasets and high I/O throughput requirements.

Hakuna Matata.
Steven E. Protter
Exalted Contributor

Re: HP-UX Kernel Configuration on Record-Breaking SPEC TPC Benchmarks

Disk is always the bottleneck. Any way you can improve that throughput will provide more benefit than esoteric kernel configurations that your applications would never tolerate anyway.

Very interesting thread, Nelson; a real mind expander.

SEP
Steven E Protter
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
Ted Buis
Honored Contributor

Re: HP-UX Kernel Configuration on Record-Breaking SPEC TPC Benchmarks

I agree that there are environments with very high I/O requirements, but TPC-C is quite different from TPC-H, which in turn is quite different from some of the high-performance technical computing areas where HP has done work with the "Lustre" solution on Linux clusters. I applaud your looking at these reports for ideas, but I think it could be very hard to extract useful information without a lot of testing for your particular environment.

With respect to the PA-8800 and mx2, a Superdome with 128 processors of either type can easily beat the current Itanium 2-based Integrity Superdome numbers, which are based on 64 processors. These benchmarks are obviously very expensive to run and very time-consuming to optimize. HP just published a new TPC-H result, and there are other important benchmarks that need to be run; it doesn't make sense to put all the benchmarking resources into a single test. I think TPC-H might be a closer-to-reality test than TPC-C for how these large systems are used today.
Mom 6