Servers: The Right Compute

HP-UX Virtual Partitions (vPar) – Partitioning a server softly


As children we were taught that sharing is a good thing. The modern-day hypervisors, conceived in the late 1990s and early 2000s, were built on the same basic assumption. The intent was to run many unmodified guest operating systems simultaneously on the same hardware; performance was not a key metric.


In that early period of hypervisor development, the cool technologies were the binary translators that would look for patterns in code that were not safe to run directly on the hardware, and patch them to “virtually safe” code. Around 2005, the hardware vendors started getting into the virtualization game, with Intel (VT-x for x86 and VT-i for Itanium) and AMD (AMD-V) coming up with hardware extensions to support virtualization. These were primarily aimed at making things easier for the hypervisor by introducing intelligence in the processor around guest vs. hypervisor execution.
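On a Linux x86 host you can see whether these extensions are present by looking for the vmx (Intel VT-x) or svm (AMD-V) feature flags in /proc/cpuinfo. A minimal sketch (the function name is my own; it parses cpuinfo-style text so it can be exercised without a real /proc filesystem):

```python
# Detect hardware virtualization support by scanning CPU feature flags.
# On Linux these flags appear in /proc/cpuinfo; the function takes the
# text as an argument so it also works on a sample string.

def virtualization_support(cpuinfo_text):
    """Return 'VT-x', 'AMD-V', or None based on CPU feature flags."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            if "vmx" in flags:   # Intel VT-x
                return "VT-x"
            if "svm" in flags:   # AMD-V
                return "AMD-V"
    return None
```

On a real system you would feed it `open("/proc/cpuinfo").read()`; an empty result means the CPU (or the BIOS setting) does not expose hardware virtualization.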


The Hardware Revolution

The hardware support evolved from this CPU virtualization phase to memory virtualization, with technologies like EPT (Extended Page Tables, from Intel) and NPT (Nested Page Tables, from AMD, later renamed RVI, or Rapid Virtualization Indexing) allowing the hypervisor to quietly slip out of the guests' memory management path after the initial setup, before the guests were launched. Around the same time came the realization that one of the major barriers to performance in virtualized environments lay in the I/O subsystems. This led to technologies such as VT-d and VMDq (Virtual Machine Device Queues) from Intel, which provide multiple independent queues on a network adapter. These hardware queues can be independently assigned to guests, and the adapter can DMA directly into guest memory without memory copies initiated by the hypervisor.
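The idea behind EPT/NPT can be illustrated as a two-level lookup: the guest maintains its own page tables (guest-virtual to guest-physical), while the hardware walks a second set of tables (guest-physical to host-physical) set up once by the hypervisor, so guest page-table updates no longer need to be intercepted. A deliberately simplified toy model (real hardware walks multi-level radix page tables and handles offsets, permissions, and faults; flat dicts here are purely illustrative):

```python
# Toy model of nested address translation as used by EPT/NPT.
# Page numbers stand in for whole pages.

GUEST_PAGE_TABLE = {0: 7, 1: 3}      # guest-virtual page -> guest-physical page
NESTED_PAGE_TABLE = {7: 42, 3: 19}   # guest-physical page -> host-physical page
                                     # (populated by the hypervisor at guest launch)

def translate(guest_virtual_page):
    """Two-level lookup: the guest's own tables, then the hypervisor's nested tables."""
    guest_physical = GUEST_PAGE_TABLE[guest_virtual_page]  # maintained by the guest OS
    host_physical = NESTED_PAGE_TABLE[guest_physical]      # walked in hardware, no hypervisor exit
    return host_physical
```

The point of the second table is that the guest can change its own mappings freely; only the nested table is the hypervisor's concern.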


Virtual Partitions – Performance at its core

With all of that behind us, we are now entering an era where research concentrates on the fact that the limiter on guest performance is the hypervisor's ability to schedule a guest with as little latency as possible, especially when an I/O packet arrives. This era also coincides with the growth of multi-core and multi-threaded processors. What this means for hypervisor technology is that instead of running 5 guests per core on a dual-core server, you could run maybe 8 guests on a single-socket, 8-core Intel Itanium (Poulson) based server, dedicating a core to each guest. Enter HP-UX Virtual Partitions.


HP-UX Virtual Partition technology is not new. The first product was released back in 2001 on HP's PA-RISC servers, and since then multiple versions have been released supporting IPF servers as well, including the HP Superdome and Superdome 2. At its core, the technology is very simple: assign dedicated cores, memory, and I/O to a partition. The granularity of resource assignment to a partition is a core for CPU resources, a granule (typically 64 MB) for memory, and an I/O device.
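Granule-based assignment means a partition's memory is always a whole number of granules, so a request is rounded up to the next granule boundary. A small illustrative helper (the 64 MB granule size comes from the text above; the functions themselves are hypothetical and not a vPars API):

```python
GRANULE_MB = 64  # typical vPars memory granule size

def granules_for(request_mb):
    """Number of whole granules needed to satisfy a memory request."""
    return -(-request_mb // GRANULE_MB)  # ceiling division

def assigned_mb(request_mb):
    """Memory actually assigned, rounded up to a whole granule."""
    return granules_for(request_mb) * GRANULE_MB
```

For example, asking for 1000 MB lands on 16 granules, i.e. 1024 MB assigned to the partition.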


The lines are blurring…

Version 6 of HP-UX vPars, released in 2011, is the latest generation of this technology, and it has some exciting features. It brings together vPars and HP Integrity VM (a hypervisor-based technology) in a single product, with the ability to mix what are known as shared guests (sub-CPU resources, memory sharing, I/O sharing) with new-generation vPars that have dedicated cores and memory assigned to them. The new version of vPars has the advantage over the earlier generation of allowing I/O devices to be shared between guests. And even though a vPar is assigned dedicated resources, it is not completely static: both CPU and memory resources can be added to or removed from a running partition.


The partitioning vs. sharing debate is by no means over, but the HP-UX VM/vPars v6 product is a step towards maybe not having that debate at all, and fitting the workload to the technology.
