<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Perf T'shooting - Large Linux DB Farm in Operating System - Linux</title>
    <link>https://community.hpe.com/t5/operating-system-linux/perf-t-shooting-large-linux-db-farm/m-p/4780223#M44221</link>
    <description>attached is a document about oracle tuning, check it out.</description>
    <pubDate>Fri, 22 Apr 2011 07:18:28 GMT</pubDate>
    <dc:creator>dirk dierickx</dc:creator>
    <dc:date>2011-04-22T07:18:28Z</dc:date>
    <item>
      <title>Perf T'shooting - Large Linux DB Farm</title>
      <link>https://community.hpe.com/t5/operating-system-linux/perf-t-shooting-large-linux-db-farm/m-p/4780221#M44219</link>
      <description>Large Oracle DB Farm (Support/DSS Environments). 256GB of RAM, 48-way x86_64, RHEL 5.6, hosting about 25 DB instances (various SGA sizes), ASMLib/ASM storage layout on a high-end array, 10 FC channels (effective separation of I/O channels assumed). Whenever our clients start their test activities with only half of the DBs engaged, the system crawls - not much of an I/O issue perceived, ample memory, ample swap -- but the system practically crawls.&lt;BR /&gt;&lt;BR /&gt;Meminfo:&lt;BR /&gt;&lt;BR /&gt; # cat /proc/meminfo&lt;BR /&gt;MemTotal:     263637940 kB&lt;BR /&gt;MemFree:        588616 kB&lt;BR /&gt;Buffers:          3544 kB&lt;BR /&gt;Cached:       136354500 kB&lt;BR /&gt;SwapCached:     244740 kB&lt;BR /&gt;Active:       151189624 kB&lt;BR /&gt;Inactive:       148124 kB&lt;BR /&gt;HighTotal:           0 kB&lt;BR /&gt;HighFree:            0 kB&lt;BR /&gt;LowTotal:     263637940 kB&lt;BR /&gt;LowFree:        588616 kB&lt;BR /&gt;SwapTotal:    176125268 kB&lt;BR /&gt;SwapFree:     22138520 kB&lt;BR /&gt;Dirty:              24 kB&lt;BR /&gt;Writeback:           0 kB&lt;BR /&gt;AnonPages:    14876964 kB&lt;BR /&gt;Mapped:       136125920 kB&lt;BR /&gt;Slab:          2229836 kB&lt;BR /&gt;PageTables:   23221264 kB&lt;BR /&gt;NFS_Unstable:        0 kB&lt;BR /&gt;Bounce:              0 kB&lt;BR /&gt;CommitLimit:  265448236 kB&lt;BR /&gt;Committed_AS: 407476480 kB&lt;BR /&gt;VmallocTotal: 34359738367 kB&lt;BR /&gt;VmallocUsed:    552160 kB&lt;BR /&gt;VmallocChunk: 34359185507 kB&lt;BR /&gt;HugePages_Total: 41500&lt;BR /&gt;HugePages_Free:   6840&lt;BR /&gt;HugePages_Rsvd:   6758&lt;BR /&gt;Hugepagesize:     2048 kB&lt;BR /&gt;&lt;BR /&gt;VMSTAT:&lt;BR /&gt;&lt;BR /&gt;# vmstat 5 10&lt;BR /&gt;procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------&lt;BR /&gt; r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st&lt;BR /&gt;111  9 154000768 577728   4520 136483232    9    8  8063   201    1    0 10 12 65 
13  0&lt;BR /&gt;134  8 153996016 575964   4740 136496080 5480  600 10630  1730 3646 47206  8 92  0  0  0&lt;BR /&gt;140  9 153996224 578332   4408 136497104 1826  388  4972  1207 1240 14632  7 93  0  0  0&lt;BR /&gt;159 11 153998688 583756   4200 136492864 2198  370  8512  1293 1180 16805  6 94  0  0  0&lt;BR /&gt;129  7 153998752 580988   4344 136496192 2880  173  8716  1186 1243 15501  5 95  0  0  0&lt;BR /&gt;135 15 153999168 583804   4260 136492928 1406  107  4415  1728 1096 15180  5 95  0  0  0&lt;BR /&gt;177  6 154002592 583172   4424 136493712 3333  229 16154  2335 2827 28490  5 94  0  0  0&lt;BR /&gt;209 11 154005152 578976   4380 136494016 1894   94  7613  6044 1569 16472  7 93  0  0  0&lt;BR /&gt;154  6 154008976 577332   4376 136491328 2873  327 14338  8224 1741 18215  6 94  0  0  0&lt;BR /&gt;212  7 154010400 579120   4240 136493280 2266  215  4201   960 1208 16502  6 94  0  0  0&lt;BR /&gt;&lt;BR /&gt;Our single-instance mega servers are doing just fine. Are there any recipes for scaling large Linux servers to better handle these kinds of workloads?&lt;BR /&gt;&lt;BR /&gt;Or, with this vast a number of DBs, is it just not possible?&lt;BR /&gt;&lt;BR /&gt;I am thinking of partitioning this server into perhaps 4 or 5 KVM or ESXi virtual machines to address this issue, as it is likely RHEL is not meant to handle such a large volume of interrupts and multipathed sessions (there are over 1,400 multipathed devices on this server).&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Thu, 21 Apr 2011 13:06:19 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/perf-t-shooting-large-linux-db-farm/m-p/4780221#M44219</guid>
      <dc:creator>Alzhy</dc:creator>
      <dc:date>2011-04-21T13:06:19Z</dc:date>
    </item>
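The figures in the post above can be cross-checked with a little arithmetic: the follow-up post reports ~413 GB of total SGA, so the 41500-page (~82 GB) HugePages pool can only back a fraction of it, and the remainder lands on ordinary 4 kB pages (consistent with PageTables at ~22 GB and the nonzero si/so columns in vmstat). A back-of-the-envelope sketch, using the numbers copied from the thread:

```shell
#!/bin/sh
# Hugepage coverage check for the figures posted in this thread.
sga_total_kb=$((413 * 1024 * 1024))   # ~413 GB of SGA across 25 instances (later post)
hugepagesize_kb=2048                  # Hugepagesize from /proc/meminfo
hugepages_total=41500                 # HugePages_Total from /proc/meminfo

# 2 MB pages needed to back the whole SGA, rounded up:
pages_needed=$(( (sga_total_kb + hugepagesize_kb - 1) / hugepagesize_kb ))
shortfall_gb=$(( (pages_needed - hugepages_total) * hugepagesize_kb / 1024 / 1024 ))

echo "pages needed:    $pages_needed"      # 211456
echo "pages allocated: $hugepages_total"   # 41500
echo "uncovered SGA:   ${shortfall_gb} GB" # 331
```

The ~331 GB of uncovered SGA exceeds not just the pool but physical RAM itself, so it has to live partly in swap, which matches the sustained swap-in/swap-out in the vmstat output.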
    <item>
      <title>Re: Perf T'shooting - Large Linux DB Farm</title>
      <link>https://community.hpe.com/t5/operating-system-linux/perf-t-shooting-large-linux-db-farm/m-p/4780222#M44220</link>
      <description>Some more observations:&lt;BR /&gt;&lt;BR /&gt;Swap: around 180 GB allocated, 80% used.&lt;BR /&gt;kswapd0/1 are both CPU-active.&lt;BR /&gt;&lt;BR /&gt;Total SGA SHM segments allocated for the 25 Oracle DB instances: ~413 GB&lt;BR /&gt;&lt;BR /&gt;HugeMem is enabled per the above.&lt;BR /&gt;HugeMem alloc is 41500 x 2 MB ~ 82 GB&lt;BR /&gt;&lt;BR /&gt;Could it be we have massive memory thrashing?&lt;BR /&gt;&lt;BR /&gt;How does the SGA fit in memory if the HugePage alloc is only at 80 GB? Is this the reason swap is active? Should I increase my HugePages again, to perhaps 90% of my 256 GB of RAM?&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Thu, 21 Apr 2011 13:34:32 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/perf-t-shooting-large-linux-db-farm/m-p/4780222#M44220</guid>
      <dc:creator>Alzhy</dc:creator>
      <dc:date>2011-04-21T13:34:32Z</dc:date>
    </item>
    <item>
      <title>Re: Perf T'shooting - Large Linux DB Farm</title>
      <link>https://community.hpe.com/t5/operating-system-linux/perf-t-shooting-large-linux-db-farm/m-p/4780223#M44221</link>
      <description>attached is a document about oracle tuning, check it out.</description>
      <pubDate>Fri, 22 Apr 2011 07:18:28 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/perf-t-shooting-large-linux-db-farm/m-p/4780223#M44221</guid>
      <dc:creator>dirk dierickx</dc:creator>
      <dc:date>2011-04-22T07:18:28Z</dc:date>
    </item>
    <item>
      <title>Re: Perf T'shooting - Large Linux DB Farm</title>
      <link>https://community.hpe.com/t5/operating-system-linux/perf-t-shooting-large-linux-db-farm/m-p/4780224#M44222</link>
      <description>1) When you look at "ipcs -am", how many segments is Oracle using per instance? Keep in mind that you want this number below 7; 1 is better.&lt;BR /&gt;&lt;BR /&gt;2) What is statspack telling you that you are spending time on during this period?&lt;BR /&gt; &lt;BR /&gt;3) In the above output (vmstat), notice you're spending most of your CPU time executing kernel code (92-94%). This tells you you're pretty busy: swapping maybe, memory accessing maybe, or both.&lt;BR /&gt;&lt;BR /&gt;4) Look at the vmstat system segment: your number of context switches is pretty high; in one case it was 47k. High context-switch values mean you're just thrashing your CPU and not getting much real work done. My guess is that if you watch this value when it climbs, it will tell you when your system is about to slow down.&lt;BR /&gt;&lt;BR /&gt;5) Did you notice that when your max number of context switches was up, you also had a high number of blocks being read in? Oracle I/O requests.&lt;BR /&gt;&lt;BR /&gt;Suggestion: Pay attention to the swappiness value as suggested by another posted answer, along with all of the touchpoints in the tuning-Oracle-on-Red-Hat PDF document. It's a good start.&lt;BR /&gt;&lt;BR /&gt;Suggestion: Set shmmax, shmseg, etc. such that you haven't cut up your Oracle space across too many segments, that is, fewer than 7.&lt;BR /&gt;&lt;BR /&gt;Suggestion: Precache these data areas as much as possible before the bulk of users (maybe virtual users) get on. If the test isn't relatable to the real world, then the easiest thing to do is just run a small number of test scenarios before running the actual test; this will load up your cache areas. Also, consider putting cache hints on the tables being pulled in and disposed of often.  
Ditto for your code that's being dumped in and out: you should "pin" the Oracle code that keeps getting purged from the SGA and reloaded often.&lt;BR /&gt;&lt;BR /&gt;Suggestion: Look at the size of your redo_buffer_cache. This being small can very much affect your ability to be concurrent. Test by doubling its size and see if it helps.&lt;BR /&gt;&lt;BR /&gt;Suggestion: Check your SCSI queue-depth average (sar -d 1 1). If the average SCSI queue depth is high, you'll need to increase this parameter in the kernel (max_scsi_queue_depth or a similar name).&lt;BR /&gt;&lt;BR /&gt;Suggestion: Tables being called: are these writes? If so, you need to review the "initrans" value. If you've got a lot of writes going to these tables at the same time, you need to increase the value to the maximum number of concurrent writes needed to the table at the same time. If this is the beginning of the test scenario, it could easily be just the sign-in logging table in your application. Don't forget to include in your analysis the indexes used by these key concurrent queries! If all of these writes by user_id and a primary key happen at the same time, then the index that supports this function needs a high initrans value as well. The default for initrans in your database is usually 2... this not being higher can cause huge delays in your system.&lt;BR /&gt;</description>
      <pubDate>Fri, 22 Apr 2011 15:50:29 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/perf-t-shooting-large-linux-db-farm/m-p/4780224#M44222</guid>
      <dc:creator>TwoProc</dc:creator>
      <dc:date>2011-04-22T15:50:29Z</dc:date>
    </item>
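Point 1) above can be checked without touching the databases. A sketch, assuming each instance runs under its own OS user (otherwise you would group by segment key instead):

```shell
# Count SysV shared-memory segments per owner; with one OS user per
# instance this approximates segments per database. Several segments
# for one SGA usually means shmmax is set too low to hold it in one.
ipcs -m | awk '$1 ~ /^0x/ { seg[$3]++ }
               END { for (u in seg) printf "%-12s %d segment(s)\n", u, seg[u] }'
```

The `$1 ~ /^0x/` filter keeps only data rows (segment keys are printed in hex), skipping the ipcs header lines.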
    <item>
      <title>Re: Perf T'shooting - Large Linux DB Farm</title>
      <link>https://community.hpe.com/t5/operating-system-linux/perf-t-shooting-large-linux-db-farm/m-p/4780225#M44223</link>
      <description>Thanks.&lt;BR /&gt;&lt;BR /&gt;I've actually gotten hold of the Oracle 10g guide for Red Hat Linux, but the Summit presentation is concise and direct.&lt;BR /&gt;&lt;BR /&gt;The server has 256 GB RAM... the HugePage pool is 80 GB. Total SGA for all 25 DBs is ~280 GB. And our DBAs are wondering why, via HP GlancePlus, we have ~130 GB free memory (wasted in file cache).&lt;BR /&gt;&lt;BR /&gt;So I recommended bumping up HugePages to 200 GB and advised the DBAs to trim down the SGAs. The server was restarted, and:&lt;BR /&gt;&lt;BR /&gt;From /proc/meminfo:&lt;BR /&gt;HugePages_Total: 102400&lt;BR /&gt;HugePages_Free:  66187&lt;BR /&gt;HugePages_Rsvd:  34271&lt;BR /&gt;Hugepagesize:     2048 kB&lt;BR /&gt;&lt;BR /&gt;Total shared-mem usage (SGAs, etc.) from ipcs -m: ~162 GB&lt;BR /&gt;&lt;BR /&gt;But meminfo does not seem to jibe. HugePages_Free should be at ~38 GB free, or ~19000 pages free!&lt;BR /&gt;&lt;BR /&gt;And I still feel some strangeness with the system. Sluggish, it seems. The DBs have not been hit by our apps folks since the changes/reboot, but I am pretty sure they will still complain of severe degradation in performance.&lt;BR /&gt;&lt;BR /&gt;Reading through the docs again, it seems we missed the following:&lt;BR /&gt;&lt;BR /&gt;/etc/security/limits.conf:&lt;BR /&gt;oracle soft memlock &amp;lt;VAL of HugePages&amp;gt;&lt;BR /&gt;oracle hard memlock &amp;lt;VAL of HugePages&amp;gt;&lt;BR /&gt;&lt;BR /&gt;/etc/sysctl.conf:&lt;BR /&gt;vm.hugetlb_shm_group=`id -g oracle`&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Would the above matter if missed, with HugeMem pages enabled?&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Fri, 22 Apr 2011 16:38:57 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/perf-t-shooting-large-linux-db-farm/m-p/4780225#M44223</guid>
      <dc:creator>Alzhy</dc:creator>
      <dc:date>2011-04-22T16:38:57Z</dc:date>
    </item>
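Regarding the question above: yes, these settings generally do matter. Without a sufficient memlock limit (and, where hugetlb_shm_group is restrictive, membership in that group), the oracle user's SHM_HUGETLB shared-memory allocation fails at instance startup and the instance quietly falls back to ordinary 4 kB pages, leaving the HugePages pool unused. A sketch of the two settings, sized for the 102400-page pool above (the gid 501 is a placeholder):

```shell
# /etc/security/limits.conf -- memlock is in kB and should cover the
# HugePages pool, here 102400 pages x 2048 kB:
#
#   oracle soft memlock 209715200
#   oracle hard memlock 209715200
#
# /etc/sysctl.conf -- note that sysctl.conf does NOT expand backticks,
# so `id -g oracle` must be resolved to the numeric gid by hand:
#
#   vm.hugetlb_shm_group = 501

# memlock value derivation (kB):
echo "memlock kB: $(( 102400 * 2048 ))"
```

After editing, re-login as oracle (so the new limits apply) and check with `ulimit -l` before restarting the instances.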
    <item>
      <title>Re: Perf T'shooting - Large Linux DB Farm</title>
      <link>https://community.hpe.com/t5/operating-system-linux/perf-t-shooting-large-linux-db-farm/m-p/4780226#M44224</link>
      <description>TP, danke.&lt;BR /&gt;&lt;BR /&gt;Yeah, CPU context switches were high and they're still high... averaging in the 50K range.&lt;BR /&gt;&lt;BR /&gt;There were 31 Oracle SHM segments across 25 DB instances. SHMMAX was set at 27 GB from an old config, but we doubled it to ~54 GB (I know we should really be at 75 to 80% of RAM).&lt;BR /&gt;&lt;BR /&gt;I'm still puzzled by how SHM is gobbled up by the SGA. Are you saying that even with multiple DB instances, the ideal is to have all SGAs using just 1 SHM segment? I thought it was 1 segment per DB instance? Need enlightenment here, sir.&lt;BR /&gt;&lt;BR /&gt;Also just posted something on what we have vis-à-vis the Red Hat Oracle doc. What do you think?&lt;BR /&gt;</description>
      <pubDate>Fri, 22 Apr 2011 16:43:30 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/perf-t-shooting-large-linux-db-farm/m-p/4780226#M44224</guid>
      <dc:creator>Alzhy</dc:creator>
      <dc:date>2011-04-22T16:43:30Z</dc:date>
    </item>
    <item>
      <title>Re: Perf T'shooting - Large Linux DB Farm</title>
      <link>https://community.hpe.com/t5/operating-system-linux/perf-t-shooting-large-linux-db-farm/m-p/4780227#M44225</link>
      <description>We had very similar problems on a 32-way x86_64 RHEL 5.5 server with 128 GB of RAM.  The database was an Oracle 11g database but we found these things to work with 10g also.&lt;BR /&gt;&lt;BR /&gt;This is what we did to turn performance from poor to better than ever.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Set SGA_Target and SGA_Max to the same value for each database&lt;BR /&gt;&lt;BR /&gt;We started with the total of the SGA at 50% of total memory on the server but found that it could be increased past that&lt;BR /&gt;- with 256 GB of RAM, you can probably go up to 75% but you might want to start with 50%&lt;BR /&gt;&lt;BR /&gt;Set HugePages to 50% of total memory on the server to start&lt;BR /&gt;&lt;BR /&gt;Monitor HugePages Free&lt;BR /&gt;- if it goes to zero, increase it&lt;BR /&gt;- if there is always a large number of HugePages Free, you can reduce it&lt;BR /&gt;&lt;BR /&gt;Do anything you can to prevent swapping. That totally destroyed our database performance.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;I also suggest opening a ticket with Oracle Support.  Oracle has learned a lot about running databases on Linux servers with a lot of memory over the last couple years.&lt;BR /&gt;</description>
      <pubDate>Sat, 23 Apr 2011 12:51:16 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/perf-t-shooting-large-linux-db-farm/m-p/4780227#M44225</guid>
      <dc:creator>Michael Ehrman</dc:creator>
      <dc:date>2011-04-23T12:51:16Z</dc:date>
    </item>
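The sizing rules of thumb above reduce to a couple of lines of arithmetic. A sketch for the 256 GB server in this thread (the 50% starting point is the advice above, not a hard rule):

```shell
#!/bin/sh
ram_gb=256
sga_budget_gb=$(( ram_gb / 2 ))            # total of all SGA_MAX_SIZE values, to start
hugepages=$(( sga_budget_gb * 1024 / 2 ))  # 2 MB pages covering the SGA budget
echo "SGA budget: ${sga_budget_gb} GB -> vm.nr_hugepages = ${hugepages}"
```

Then watch HugePages_Free as the instances come up and adjust up or down, per the monitoring advice above.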
    <item>
      <title>Re: Perf T'shooting - Large Linux DB Farm</title>
      <link>https://community.hpe.com/t5/operating-system-linux/perf-t-shooting-large-linux-db-farm/m-p/4780228#M44226</link>
      <description>If I do not have the following set, but all of the above are, will I have issues?&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;/etc/security/limits.conf:&lt;BR /&gt;oracle soft memlock &amp;lt;VAL of HugePages&amp;gt;&lt;BR /&gt;oracle hard memlock &amp;lt;VAL of HugePages&amp;gt;&lt;BR /&gt;&lt;BR /&gt;/etc/sysctl.conf:&lt;BR /&gt;vm.hugetlb_shm_group=`id -g oracle`&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;We seem to have missed setting these values as recommended in the Red Hat Oracle performance tuning guide.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Thanks!</description>
      <pubDate>Sat, 23 Apr 2011 17:24:24 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/perf-t-shooting-large-linux-db-farm/m-p/4780228#M44226</guid>
      <dc:creator>Alzhy</dc:creator>
      <dc:date>2011-04-23T17:24:24Z</dc:date>
    </item>
    <item>
      <title>Re: Perf T'shooting - Large Linux DB Farm</title>
      <link>https://community.hpe.com/t5/operating-system-linux/perf-t-shooting-large-linux-db-farm/m-p/4780229#M44227</link>
      <description>So, anyone on my previous post?&lt;BR /&gt;&lt;BR /&gt;This morning we had another episode on one of our large prod servers wherein it just went downhill, slowed down, and hung, so we resorted to the two-finger salute.&lt;BR /&gt;&lt;BR /&gt;Load was very high -- it reached 400 (the system is 128 GB RAM, 24-way, RHEL 5.6, 180 GB swap).&lt;BR /&gt;kjournald and kswapd went berserk, it seems, such that system CPU time was &amp;gt;90%.&lt;BR /&gt;&lt;BR /&gt;HugeMem is set on this server as well, except for the above limits.conf and sysctl.conf settings.&lt;BR /&gt;&lt;BR /&gt;And weirdest of all, while the server was still responsive... I managed to sneak in a check of HugeMem usage and got:&lt;BR /&gt;&lt;BR /&gt;# grep -i huge /proc/meminfo&lt;BR /&gt;HugePages_Total: 21570 &lt;BR /&gt;HugePages_Free:  21570 &lt;BR /&gt;HugePages_Rsvd:      0 &lt;BR /&gt;Hugepagesize:     2048 kB&lt;BR /&gt;&lt;BR /&gt;The (lone) DB instance has around 43 GB of SGA allocated.&lt;BR /&gt;&lt;BR /&gt;Wondering why, all of a sudden, Oracle or the system decided to throw the instance out of using HugeMem! ipcs showed the 43 GB of SHM...&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Pretty strange.&lt;BR /&gt;&lt;BR /&gt;Anyone care to comment? We still have RHEL support analysing this information, as well as HangWatch stats and a partial vmcore.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;TIA!&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Mon, 25 Apr 2011 15:28:16 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/perf-t-shooting-large-linux-db-farm/m-p/4780229#M44227</guid>
      <dc:creator>Alzhy</dc:creator>
      <dc:date>2011-04-25T15:28:16Z</dc:date>
    </item>
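The grep output above (HugePages_Free equal to HugePages_Total while a ~43 GB SGA is allocated) is the classic signature of an instance that failed to obtain SHM_HUGETLB memory at startup and fell back to regular pages, consistent with the missing memlock/hugetlb_shm_group settings discussed earlier. A scriptable version of that check (a sketch):

```shell
# If a running SGA is actually backed by the pool, HugePages_Free drops
# below HugePages_Total shortly after instance startup. Free == Total
# with a database up means the allocation fell back to 4 kB pages.
awk '/^HugePages_Total/ { t = $2 }
     /^HugePages_Free/  { f = $2 }
     END { if (f < t) print "hugepages in use"; else print "hugepages NOT in use" }' /proc/meminfo
```

Running this periodically (cron or a monitoring agent) catches the fallback right after a restart, before users feel the swapping.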
    <item>
      <title>Re: Perf T'shooting - Large Linux DB Farm</title>
      <link>https://community.hpe.com/t5/operating-system-linux/perf-t-shooting-large-linux-db-farm/m-p/4780230#M44228</link>
      <description>Greetings Alzhy,&lt;BR /&gt;&lt;BR /&gt;1) Re: shared mem. 1 per database instance is optimal; no need to fool with it any further, it's as good as it gets.&lt;BR /&gt;&lt;BR /&gt;2) I'll bet when your system goes down, your context switches are absolutely off the chart. If so, you're running too much at once and have become CPU bound. Ways to fix it: 2a) more CPU available, 2b) less CPU consumption.&lt;BR /&gt;Along the lines of 2b), your DBAs really, really need to run statspack and determine which large, large queries are killing your system, either in number of times executed or in total CPU consumed.&lt;BR /&gt;Really, this tuning is generally how one must proceed, and a good DBA will find out what is consuming his systems and will generate plans to tune. If not, hire another consultant DBA... if you're stuck I can give you a name, but you'll have to put an email address out there somewhere I can exchange information with regarding the company, as we are not allowed to do that in these forums. I've used him when stuck, and he is amazing.&lt;BR /&gt;&lt;BR /&gt;Re: shared-memory consumption by the SGA. The big buffer_cache, code areas, etc. are all stored in shared memory segments so that all of the running pieces of the Oracle database can get at the structure simultaneously. It's how my query, your query, DB processes, web queries, etc. can happen all at once: they all have access to the SGA components shared in the shared memory area, along with the system processes that need to handle access/locking/maintenance, etc. So, shared memory is for sharing the database... 
&lt;BR /&gt;&lt;BR /&gt;If you watch this system when performance goes down, do you see a) network traffic rates drop and b) disk I/O drop, while context switching increases and goes off the charts?&lt;BR /&gt;If so, you're CPU bound by the total number of processes you're running, and the CPUs are spending more time switching processes in and out of execution than they are actually executing code. This is a common problem when the upper echelons of performance and concurrency become an issue.&lt;BR /&gt;&lt;BR /&gt;Remember... way back when... when you and I discussed the advantages/disadvantages of big-iron CPU design vs. Intel general powerhouse CPU systems? I told you how, in general, the Itanium, IBM Power, PA-RISC, etc. systems tend to handle loads with less decay as concurrency loads increase. At the same time, Intel processors handle loads much, much faster but experience a much larger and more significant decay in throughput as concurrent loads increase. Well, this is probably part of what you're seeing. Intel systems start off 3 or 4 times faster than Itanium but experience HUGE dropoffs in performance as the number of concurrent processes increases into the numbers you're currently experiencing, while Itanium, Power, PA-RISC, etc. would not experience huge throughput degradation from concurrency, but were never as fast to begin with. From the drawings I've seen in the past, the crossover point was at about 500 concurrent processes or so; at that point, Itanium chips show their stuff by handling the larger loads with much, much less decay. Kind of like the difference between trying to pull a vehicle stuck in the mud with a tractor vs. your personal passenger truck: the truck is faster, the tractor can pull bigger loads. A useful metaphor, anyway.&lt;BR /&gt;&lt;BR /&gt;You need more CPU, or you need to consume less CPU...&lt;BR /&gt;&lt;BR /&gt;Good luck!</description>
      <pubDate>Thu, 28 Apr 2011 20:58:38 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/perf-t-shooting-large-linux-db-farm/m-p/4780230#M44228</guid>
      <dc:creator>TwoProc</dc:creator>
      <dc:date>2011-04-28T20:58:38Z</dc:date>
    </item>
  </channel>
</rss>

