03-24-2009 08:00 AM
Collapsing RX8620 nPartitions
I've been asked to investigate the impact of collapsing 3 of the 4 nPartitions we currently have running within our HP RX8620.
One of these hosts is already pushing I/O quite heavily on occasion, and my question is: will collapsing two additional hosts and merging their I/O through one server create a further bottleneck?
Currently each server is dual-attached through Fibre Channel cards to the EVA 5000, using Secure Path as the multipath software. If we were also to move the existing Fibre Channel cards into this new merged server (thus increasing the number of active paths for round-robin), would we expect to be able to handle more I/O as a result?
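For what it is worth, something like the following would show how hard the two current paths are really being pushed; the device files are only examples (a td-driver HBA), so substitute whatever ioscan reports on your box:

# sar -d 5 12          <- per-device %busy, average queue and KB/s, sampled over a minute
# ioscan -fnC fc       <- list the Fibre Channel HBAs and their device files
# fcmsutil /dev/td0    <- link state and negotiated speed of the first HBA
# fcmsutil /dev/td1

If the LUN device files behind the two active paths already sit near 100% busy in sar, pushing two more hosts' I/O down the same pair of paths can only make that worse.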
Thanks in advance (points as usual).
G.
03-24-2009 08:06 AM
Re: Collapsing RX8620 nPartitions
You will then have more memory, CPUs and I/O in one partition; this really should increase the overall performance.
Hope this helps!
Regards
Torsten.
__________________________________________________
There are only 10 types of people in the world -
those who understand binary, and those who don't.
__________________________________________________
No support by private messages. Please ask the forum!
If you feel this was helpful please click the KUDOS! thumb below!

03-24-2009 08:15 AM
Re: Collapsing RX8620 nPartitions
I just want to understand how we should handle the potential I/O bottleneck.
By simply collapsing the nPartitions, we would just push more I/O down the same two active paths (on an already I/O-contentious system).
If we were to add more active fibre paths by utilising the existing fibre ports (on the merged partitions), would we increase I/O performance? I know Secure Path round-robins across the active paths, but is this enough, and would we potentially create bottlenecks elsewhere (i.e. on the server)?
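(As a sketch of how I would check this: the Secure Path CLI shows which paths are currently active for each LUN, e.g.

# spmgr display        <- lists the EVA controllers, HBAs and per-LUN paths with their active/standby state

After moving the extra HBAs into the merged partition I would re-run ioscan and spmgr display to confirm the new paths come up as additional active paths rather than standby ones; the exact options for changing the load-balancing policy differ between Secure Path versions, so I would check the admin guide for our release before relying on it.)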
Thanks
03-24-2009 08:21 AM
Solution
"Fourteen of sixteen I/O card slots are supported by dual high-performance links. Each link is capable of providing 530 MB/s of bandwidth. This means that most HP Integrity rx8620 Server I/O slots are capable of sustained 1.06 GB/s. Aggregate I/O slot bandwidth is 15.9 GB/s. In addition, because each I/O slot has a dedicated bus, any slot can be "hot-plugged" or serviced without affecting other slots. The hot-plug operation is very easy, and can be done with minimal training and effort."
http://h18000.www1.hp.com/products/quickspecs/11849_div/11849_div.HTML
Hope this helps!
Regards
Torsten.

03-24-2009 08:25 AM
Re: Collapsing RX8620 nPartitions
Not sure whether you are moving to 1 nPar or 2, but either way you are consolidating more of your resources together.
Are the other nPars busy or under-utilised? You might have spare capacity, be it CPU, memory or I/O, that applications on the original busy nPar can now take advantage of, which would improve overall performance.
How busy are the EVA and your SAN; can they be pushed harder? You say you are I/O bound and want to add more FC HBAs to gain more bandwidth, but are you sure this is not due to the SAN or the EVA (e.g. a small disk group)?
Another consideration is your LVM configuration - maybe this is not optimal.
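(As a rough sanity check, with placeholder volume group and logical volume names:

# vgdisplay -v vg01                <- how many PVs the volume group spans and how the extents are distributed
# lvdisplay -v /dev/vg01/lvol1     <- which PVs the busy logical volume actually sits on, and whether it is striped

If the hot logical volumes all map onto one or two PVs, spreading or striping them across more LUNs may buy you as much as the extra HBAs would.)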
Obviously you will be losing electrical hardware isolation between the different nPars and their applications - is this something that will fit well with how the nPars are currently deployed?
HTH, Paul
03-24-2009 09:02 AM
Re: Collapsing RX8620 nPartitions
We would be collapsing 4 partitions (currently 4 servers) into 2 servers (a 1-and-3 partition split).
As you guessed, the driver for this is that one of the hosts is being over-utilised while the others remain quiet.
We have 2 Gb Fibre Channel cards feeding into 2 Cisco-based fabrics with an EVA5000 in the backend. Disk group numbers aren't huge, but there wouldn't be any change in the backend, so I wouldn't expect any difference (good or bad) there.
As long as adding more paths into Secure Path provides more bandwidth and increases I/O performance, I will probably consider this.
Thanks for your help.
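(For rough sizing, assuming the usual ~200 MB/s of usable payload per 2 Gb FC port: two active paths give a ceiling of roughly 400 MB/s, and four active paths roughly 800 MB/s, which is still inside the 1.06 GB/s a single rx8620 I/O slot can sustain according to the quickspecs quoted above. So the extra paths should raise the host-side ceiling, provided the EVA disk group and the SAN ISLs are not the real limit.)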