HPE 9000 and HPE e3000 Servers

Ninad_1
Honored Contributor

Partitions configuration doubts

Hi,

I am not very familiar with nPars and vPars, though I have been reading some docs for a while. I have a few doubts I need help with.
Say I have an rp7420 with 2 cells:
Cell 0 - 8 CPUs (PA8900), 32 GB RAM
Cell 1 - 8 CPUs (PA8900), 32 GB RAM
The PCI-X I/O cards are as follows:
PCI I/O Chassis 0 -
Slot 1 - Core I/O in slot
Slot 2 - 2-port Ultra320, 2-port GE card
Slot 3 - 2-port Ultra320, 2-port GE card
Slot 4 - 2-port FC (HBA), 2-port GE card
Slot 5 - 2-port FC (HBA), 2-port GE card
Slot 6 - 2-port Ultra320, 2-port GE card
Slot 7 - 2-port Ultra320, 2-port GE card
Slot 8 - 2-port FC (HBA), 2-port GE card
PCI I/O Chassis 1 -
Slot 1 - 2-port FC (HBA), 2-port GE card
Slot 2 - 2-port Ultra320, 2-port GE card
Slot 3 - 2-port Ultra320, 2-port GE card
Slot 4 - 2-port FC (HBA), 2-port GE card
Slot 5 - 2-port FC (HBA), 2-port GE card
Slot 6 - 2-port Ultra320, 2-port GE card
Slot 7 - 2-port FC (HBA), 2-port GE card
Slot 8 - Free

Suppose I configure a single nPar with both cells and the above config. My question is: can I have the following vPars configuration?

vPar#1 - 8 CPUs, 32 GB RAM, using the following cards:
PCI Chassis 0 - cards in slots 2, 3, 4, 5

vPar#2 - 4 CPUs, 16 GB RAM, using the following cards:
PCI Chassis 0 - cards in slots 6, 7, 8
PCI Chassis 1 - card in slot 1

vPar#3 - 2 CPUs, 8 GB RAM, using the following cards:
PCI Chassis 1 - cards in slots 2, 3, 4, 5

vPar#4 - 2 CPUs, 8 GB RAM, using the following cards:
PCI Chassis 1 - cards in slots 6, 7
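For reference, a layout like this would be expressed with the vPars commands roughly as follows. This is only a sketch: the io: hardware paths below are hypothetical placeholders, and the real paths must be taken from ioscan/parstatus output on the actual nPar.

```shell
# Sketch only - hypothetical hardware paths; memory is specified in MB.
# vPar#1: 8 CPUs, 32 GB, four cards from chassis 0 (slot paths are examples)
vparcreate -p vpar1 -a cpu::8 -a mem::32768 \
  -a io:0/0/2 -a io:0/0/4 -a io:0/0/6 -a io:0/0/8 \
  -a io:0/0/2/0/0.6.0:boot
# vPar#2: 4 CPUs, 16 GB, three cards from chassis 0 and one from chassis 1
vparcreate -p vpar2 -a cpu::4 -a mem::16384 \
  -a io:0/0/10 -a io:0/0/12 -a io:0/0/14 -a io:1/0/0 \
  -a io:0/0/10/0/0.6.0:boot
# ... vpar3 and vpar4 follow the same pattern with chassis 1 cards
```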


My doubt is this: vPar#1 is using CPU, memory and I/O from Cell 0 and PCI chassis 0, and vPar#3 and vPar#4 are using resources from Cell 1 and PCI I/O chassis 1. But vPar#2 is using CPU and memory from Cell 1 while using I/O cards from PCI chassis 0 as well as chassis 1.
Is this a valid configuration?
Can I design such a system configuration?

Please clarify my doubts and share any tips/caveats.

Thanks a lot,
Ninad
Solution

Re: Partitions configuration doubts

Ninad,

vPars doesn't care which cell an I/O card belongs to, so this is a valid config, although I'm not sure whether you might need to make the core I/O card part of one of the vPars.

As long as a vPar has CPU, memory and access to PCI card(s) with a storage port and a network port, then you're fine.
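If it helps, the resulting assignment can be inspected afterwards; roughly like this (exact flags vary slightly between nPar/vPars tool releases):

```shell
parstatus -C    # list cells and which nPar they are assigned to
parstatus -I    # list I/O chassis and slot usage
vparstatus -v   # per-vPar CPU, memory and I/O detail
```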

HTH

Duncan

I am an HPE Employee
Ninad_1
Honored Contributor

Re: Partitions configuration doubts

Thanks for the help.
Is it possible for you to let me know whether I need to have the Core I/O in any vPar?
Also, do I need the Ultra SCSI cards for boot disks, or can I use SAN boot disks for the nPar and all vPars in it?
Also, which storage model would you suggest for booting the 4 vPars mentioned above if I wish to boot through directly attached SCSI drives? Can that storage array also provide redundant connections for all 4 vPars?
Any guidance regarding this is much appreciated.

Thanks again,
Ninad
Mridul Shrivastava
Honored Contributor

Re: Partitions configuration doubts

Core I/O is not required in a vPar, but each vPar must have a SCSI card, as each vPar requires a boot disk. Moreover, to a vPar we assign an LBA (Local Bus Adapter), not an SBA (System Bus Adapter), so one SCSI card for each vPar will serve your purpose.
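In vPars command terms, I/O is granted per LBA by naming the slot's hardware path; the paths below are hypothetical examples (take the real ones from ioscan -k on the nPar):

```shell
# Grant the whole card (an LBA) in a given slot to vpar2
vparmodify -p vpar2 -a io:0/0/6
# A boot device is designated down to the disk path
vparmodify -p vpar2 -a io:0/0/6/0/0.6.0:boot
```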
Time has a wonderful way of weeding out the trivial
Ninad_1
Honored Contributor

Re: Partitions configuration doubts

Yes, I understand that I need to assign an LBA (which will correspond to a card in a PCI slot) to a vPar.
Can't I use SAN boot for the vPars and the nPar?
Do I need to have SCSI at all?
I have seen quite a few threads discussing the SAN boot and SCSI boot options, but I am still not too sure what the best practice should be.
Also, if I can use SAN boot, what will the Core I/O card be required for? I am unable to understand.

Also, I would like to know from the gurus here: if I have some vPars as production servers and some as development, is it good practice to have them as part of the same nPar? The root user on a development server will have privileges on the nPar and the production vPars as well, so it seems a bit dangerous.
What are the standard practices followed in your environments?
But if I divide development and production into separate nPars, I won't be able to use the free PCI slots in the production nPar and they will go to waste.
Please suggest good practices and also please answer my questions above.

Thanks,
Ninad
Torsten.
Acclaimed Contributor

Re: Partitions configuration doubts

Ninad,

as long as the HBA and the array are supported as a boot device, booting from SAN is no problem.
Consider dividing the production and development systems for more stability. If you are root on a vPar, you are able to control the other vPars as well (reboot, start, stop, ...). You may use different nPars for this, but that means a maximum of 8 CPUs per nPar in your case.
I would configure the first vPar to boot from the internal disks, and any other to boot from external devices (SAN).
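Booting a vPar from a SAN LUN, once the HBA is supported for boot, would look roughly like this (a sketch; the paths and names are hypothetical):

```shell
# Give the vPar an FC HBA and mark a LUN behind it as its boot device
vparmodify -p vpar3 -a io:1/0/2 \
  -a io:1/0/2/0/0.1.0.0.0.0.0:boot
# Start it from another running vPar (or the vPar monitor)
vparboot -p vpar3
```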

Hope this helps!
Regards
Torsten.

__________________________________________________
There are only 10 types of people in the world -
those who understand binary, and those who don't.

__________________________________________________
No support by private messages. Please ask the forum!

If you feel this was helpful please click the KUDOS! thumb below!   
Ninad_1
Honored Contributor

Re: Partitions configuration doubts

Torsten,

Thanks. Yes, I understand that the root user will have administrative privileges on the production vPars as well, but then we are losing PCI I/O capacity and hence will not be able to accommodate more vPars which would be possible CPU- and memory-wise, but not possible due to inadequate PCI slots.
So I wish to understand how you have set this up in your environments.

Thanks,
Ninad
Ninad_1
Honored Contributor

Re: Partitions configuration doubts

Also, another thing I forgot - Torsten, you said I can use internal disks for one vPar, but then won't the Core I/O card be a single point of failure for that vPar?
How do you guys configure in your environments ?

Thanks,
Ninad
Torsten.
Acclaimed Contributor

Re: Partitions configuration doubts

If you build a vPar with only one multifunction card, this card will always be a SPOF, even if you have dual connectors on the card (e.g. 2 LAN interfaces - 1 card - 1 slot => the slot can fail).

Regarding the internal disks:
The upper HDDs each use a separate SCSI controller; the lower ones share one bus.

Since you have all 4 internal disks in only 1 vPar, you can use 1 of the upper and 1 of the lower as a mirroring pair (the SCSI controllers used for the left 2 HDDs are on the 2 MP cards, 1 on each card).
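For completeness, mirroring the root disk across the two controllers with MirrorDisk/UX follows the usual PA-RISC pattern (a sketch; the device files below are hypothetical examples for the second disk):

```shell
# Prepare the mirror disk as a bootable LVM disk and add it to vg00
pvcreate -B /dev/rdsk/c2t6d0
vgextend /dev/vg00 /dev/dsk/c2t6d0
mkboot /dev/rdsk/c2t6d0
# Mirror each root logical volume onto it (requires MirrorDisk/UX)
for lv in lvol1 lvol2 lvol3; do
  lvextend -m 1 /dev/vg00/$lv /dev/dsk/c2t6d0
done
# Update boot information and register the alternate boot path
lvlnboot -R /dev/vg00
setboot -a <hardware_path_of_c2t6d0>
```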

Hope this helps!
Regards
Torsten.

Ninad_1
Honored Contributor

Re: Partitions configuration doubts

Sorry for not being able to understand fully.
Do you mean that you are suggesting having 2 Core I/O cards, then using 4 internal disks for the vPar, with that vPar using both Core I/O cards? I thought I read that if a single nPar is configured, only one Core I/O card is active and the other comes into the picture only if the first card fails. So can we use the disks on the other Core I/O card as mirrored boot disks?
The other thing, on having a single multifunction card: if you look at the configuration I have thought of, it consists of 2 multifunction cards of each type, so as to mitigate any single-card failure, if that is what you mean. Only the last vPar, vPar#4, has single instances of the multifunction cards, as there are not enough slots to plug in the cards; and now, if you say I need 2 Core I/O cards, the only free slot will also be sacrificed.

Please do clarify my doubts.

Thanks a lot,
Ninad