
Problems creating virtual HBAs with NPIV

 
Toscano
Occasional Advisor

Problems creating virtual HBAs with NPIV

Hi all experts,

I have a big problem; if someone can help me it would be greatly appreciated.

I'm using HP-UX 11.31
# /usr/sbin/swlist | grep -i HPUX11i-VSE-OE
HPUX11i-VSE-OE B.11.31.1403 HP-UX Virtual Server Operating Environment

I successfully created a vPar on the HP-UX 11.31 server using the following vparcreate command:
# vparcreate -p vpar-naos -a cpu::7 -a mem::40960
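
For reference, the new vPar's resources can be double-checked from the VSP with something like the following (a minimal sketch; options per vparstatus(1M) on vPars v6):
# vparstatus -p vpar-naos -v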

I can start the vPar from the VSP:
# hpvmstart -P vpar-naos
(C) Copyright 2000 - 2015 Hewlett-Packard Development Company, L.P.
Initializing System Event Log
Initializing Forward Progress Log
Mapping vPar/VM memory: 40960MB
mapping RAM (0-a00000000, 40960MB)
....
/opt/hpvm/lbin/hpvmapp (/var/opt/hpvm/uuids/c3afd45c-5069-11eb-b50e-d8d385f879c9/vmm_config.current): Allocated 42949672960 bytes at 0x6000000100000000
locking memory(lowmem 0x6000000100000000): 0-a00000000
allocating overhead RAM (overhead_mem_mbase)6000000b00000000-6000000b0c000000, 192MB)
locking memory: 6000000b00000000-6000000b0c000000
allocating datalogger memory: FF800000-FF900000 (1024KB) ramBaseLog 6000000b0bf00000
allocating firmware RAM (fff00000-100000000, 1024KB) ramBaseFw 6000000b0be00000
locked SAL RAM: 00000000fff00000 (8KB)
locked ESI RAM: 00000000fff02000 (8KB)
locked PAL RAM: 00000000fff04000 (8KB)
locked Min Save State: 00000000fff0a000 (4KB)
locked datalogger: 00000000ff800000 (1024KB)
Creation of VM minor device 2
Device file = /var/opt/hpvm/uuids/c3afd45c-5069-11eb-b50e-d8d385f879c9/vm_dev
Overhead startHPA 28f4000000 size c000000 num_ranges 1
Loading boot image
Image initial IP=102000 GP=69E000
No NVRAM persistent variables on disk
Starting event polling thread
guestStatsStartThread: Started guestStatsCollectLoop - thread = 6
Starting thread initialization
No NVRAM persistent variables on disk
Daemonizing....
hpvmstart: Successful start initiation of vPar or VM 'vpar-naos'
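
At this point the vPar only reaches EFI, which is expected since it has no I/O devices yet; its run state can be confirmed from the VSP with something like the following (a sketch; hpvmstatus is the Integrity VM status command):
# hpvmstatus -P vpar-naos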

The VSP has 4 Fibre Channel cards installed with support for NPIV:
# /usr/sbin/ioscan -fnC fc
Class I H/W Path Driver S/W State H/W Type Description
======================================================================
fc 0 0/0/0/3/0/0/0 fclp CLAIMED INTERFACE HP AH403A 8Gb PCIe 2-port Fibre Channel Adapter
/dev/fclp0
fc 1 0/0/0/3/0/0/1 fclp CLAIMED INTERFACE HP AH403A 8Gb PCIe 2-port Fibre Channel Adapter
/dev/fclp1
fc 2 0/0/0/9/0/0/0 fclp CLAIMED INTERFACE HP AH403A 8Gb PCIe 2-port Fibre Channel Adapter
/dev/fclp2
fc 3 0/0/0/9/0/0/1 fclp CLAIMED INTERFACE HP AH403A 8Gb PCIe 2-port Fibre Channel Adapter
/dev/fclp3
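
To confirm that the individual fclp ports really are NPIV-capable before assigning them, each port can be inspected with fcmsutil (a sketch; the exact fields shown depend on the fclp driver/firmware version, and NPIV-related fields only appear on NPIV-capable combinations):
# fcmsutil /dev/fclp0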

Additionally, there is another vPar (vpar-antares) that was created two years ago and has been running without any issues:
# vparstatus
[Virtual Partition]
Num Name                       RunState     State
=== ========================== ============ =========
  1 vpar-antares               UP           Active
  3 vpar-naos                  EFI          Active

[Virtual Partition Resource Summary]
Virtual Partition              CPU     Num  Num  Total MB  Floating MB
Num Name                       Min/Max CPUs IO   Memory    Memory
=== ========================== ======= ==== ==== ========= ============
  1 vpar-antares                 1/512    7    7     81920            0
  3 vpar-naos                    1/512    7    0     40960            0


My problem is that when I try to create the virtual HBAs for vpar-naos, the process fails because it cannot connect to the host:
# vparstatus -Av | grep hba
hba:avio_stor:,,,:npiv:/dev/fclp0
hba:avio_stor:,,,:npiv:/dev/fclp1
hba:avio_stor:,,,:npiv:/dev/fclp2
hba:avio_stor:,,,:npiv:/dev/fclp3
#
# vparmodify -p vpar-naos -a hba:avio_stor::npiv:/dev/fclp0
Connect to host taurus failed.
vparmodify: ERROR (host): GUID service error: Could not connect to GUID server.
vparmodify: ERROR (vpar-naos): Could not retrieve WWNs from a GUID server.
vparmodify: Unable to create device hba:avio_stor::npiv:/dev/fclp0.
vparmodify: Unable to modify the vPar.
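
For reference, once the GUID server connection works again, the same add operation just needs to be repeated for each of the four ports; a sketch using only the syntax already shown above:
# for p in 0 1 2 3; do vparmodify -p vpar-naos -a hba:avio_stor::npiv:/dev/fclp$p; done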


Checking from the VSP, I can see a failure in the connection to the GUID Manager (GUIDMGR) server:
# /opt/guid/bin/guidconfig -l
HOST=taurus
BE_LIBS=wwn
#
# /opt/guid/bin/guidmgmt -L wwn
Connect to host taurus failed.
listGUIDrange() failed. error: Could not connect to GUID server
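
Before changing anything, it is worth confirming that the host name taurus resolves locally and that a GUID server process exists at all; a minimal sketch (the daemon's process name is not shown in this thread, so the grep is deliberately broad):
# nslookup taurus
# ps -ef | grep -i guid | grep -v grep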

 

I ran the procedure to initialize the GUID server described in the GUID Manager Administrator Guide (/opt/guid/utils/guid_server_prepare.sh); the results are in the attached file. When I check the log file, I see the following message:
# cat /var/opt/guid/logs/initdb.log
initdb: directory "/var/opt/guid/db" exists but is not empty
If you want to create a new database system, either remove or empty
the directory "/var/opt/guid/db" or run initdb
with an argument other than "/var/opt/guid/db".
The files belonging to this database system will be owned by user "guiddb".
This user must also own the server process.

The database cluster will be initialized with locale C.
The default database encoding has accordingly been set to SQL_ASCII.
The default text search configuration will be set to "english".
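
The initdb message only means the prepare script found a non-empty /var/opt/guid/db directory. Before removing or re-initialising anything, it is safer to locate the GUID Manager's own startup script and check whether its database processes (owned by user guiddb, per the log above) are running. A sketch, assuming nothing about the script name:
# swlist -l file GUIDMGR | grep -i -e init.d -e sbin
# ps -ef | grep guiddb | grep -v grep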

Please, could someone help me solve this problem and create the virtual WWNs for vpar-naos?

Thanks a lot


Additional data:
# /usr/sbin/swlist | grep GUIDMGR
GUIDMGR A.01.00.603 HP-UX GUID Manager
#
# /usr/sbin/swlist | grep vPars
BB068AA B.06.30 HP-UX vPars & Integrity VM v6


Re: Problems creating virtual HBAs with NPIV

Your symptom seems to be that you cannot *connect* to your GUID manager, not that it isn't initialised and working... do you have rpcbind running? See this support article:

https://support.hpe.com/hpesc/public/docDisplay?docId=kc0125044en_us&docLocale=en_US
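
A quick way to check from the VSP whether rpcbind is alive and what is registered with it is rpcinfo; if the GUID server registers through rpcbind (an assumption based on the dependency suggested above), it should appear in this listing:
# rpcinfo -p taurus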

 


I am an HPE Employee
Toscano
Occasional Advisor

Re: Problems creating virtual HBAs with NPIV

Thanks Duncan

Here is the verification that rpcbind is running:
# ps -eaf | grep -i -e RPC
root 61 0 0 Feb 12 ? 0:00 krpckd
root 3087 1 0 Feb 12 ? 0:00 /usr/sbin/rpcbind
daemon 3159 1 0 Feb 12 ? 0:00 /usr/sbin/rpc.statd
root 3165 1 0 Feb 12 ? 0:00 /usr/sbin/rpc.lockd
root 4033 1 0 Feb 12 ? 6:21 /opt/dce/sbin/rpcd
root 7458 7442 1 21:58:33 pts/1 0:00 grep -i -e RPC

 

Another thing that may be important: the VSP has been in operation for 699 days, and I wonder what impact restarting the rpcbind service could have on the processes that are already running.

What is the best way to restart rpcbind?
# uptime
10:03pm up 699 days, 10:50, 3 users, load average: 0.00, 0.00, 0.00
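
One way to gauge the impact beforehand is to check whether any NFS services are actually in use on the VSP, since those are the usual consumers of rpcbind here; a sketch (commands per the 11i v3 ONC+ stack):
# nfsstat -m
# share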

 

 

Re: Problems creating virtual HBAs with NPIV

As detailed in the link I sent, rpcbind is controlled by the /sbin/init.d/nfs.core script, so:

/sbin/init.d/nfs.core stop
/sbin/init.d/nfs.core start

should do what you require. I assume that, as this is a VSP, you have no NFS services (client or server) operating from this system?
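
Once nfs.core has been restarted, the earlier failing commands make a quick end-to-end check (these are simply the commands from earlier in the thread, re-run):
# /opt/guid/bin/guidmgmt -L wwn
# vparmodify -p vpar-naos -a hba:avio_stor::npiv:/dev/fclp0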


I am an HPE Employee