Ignite Error
12-14-2009 09:09 AM
I am running HP-UX 11.11 on an rp8420; the Ignite-UX revision is C.7.9.261. I can run a successful Ignite backup using the make_tape_recovery command without any options. However, I have a customized Ignite script for this server that was working fine earlier; when I run that script now, the backup fails and a core file is generated. The following is the core file output.
Hostname# file /usr/local/recovery/core
/usr/local/recovery/core: core file from 'list_expander' - received SIGSEGV
Hostname# what /usr/local/recovery/core
/usr/local/recovery/core:
HP-UX libm shared PA1.1 C Math Library 20000331 (201031) UX11.01
HP-UX libisamstub.sl 19991217 (135120) B3907DB/B3909DB B.11.01.11
HP Port of Compaq Convert RTL V0.0.00
HP Fortran of Alpha RT V0.0.00
Intel Fortran RTL V1.1-929 1+ 1-Aug-2003
fs_amod.s $Revision: 1.9.1.1 $
libcl.sl version B.11.XX.21 - Aug 4 2008
$ PATCH_11.11/PHCO_37369 Jan 11 2008 03:26:43 $
SMART_BIND
92453-07 dld dld dld.sl B.11.67 081208
Hostname#
The following is the recovery log error.
ERROR: A problem has been detected in determining which volume groups and/or
disks are to be part of the recovery archive. This could be due to a
failure of the "list_expander" command. Ensure that the
"/opt/ignite/lbin/list_expander" command returns valid output and
retry.
Please help me with this.
Thanks in advance
Vasanth
Solved!
12-14-2009 09:27 AM
Re: Ignite Error
/opt/ignite/bin/make_tape_recovery -I $* -P w -d /dev/rmt/0mn -v \
-x inc_entire=$ROOTVG \
-x inc_entire=vgnbu \
-x inc_entire=vg01 \
-x include=/ \
-x include=/stand \
-x include=/usr/openv \
-x include=/home \
-x exclude=/usr/sap/put \
-x exclude=/usr/sap/trans \
-x exclude=/usr/sap/P22/DVEBMGS03 \
-x exclude=/export \
-x exclude=/oracle \
-x exclude=/work_tmp \
-x exclude=/usr/openv/logs \
-x exclude=/usr/openv/netbackup/logs \
-x exclude=/var/log/VRTSpbx \
-x exclude=/usr/openv/volmgr/debug \
-x exclude=/var/adm/crash \
-x exclude=/var/adm/sw/save \
-x exclude=/var/tmp \
-x exclude=/tmp \
-x exclude=/tmp_mnt \
-x exclude=/cdrom \
-x exclude=/SD_CDROM \
-x exclude=core
Thanks
Vasanth
12-14-2009 09:37 AM
Re: Ignite Error
>>/opt/ignite/bin/make_tape_recovery -I $* -P w -d /dev/rmt/0mn -v \
The '$*' will insert whatever options you supply. Is it possible there are some invalid options?
Also, some of your includes are redundant. If you are including the entire root VG, there really isn't much reason to also include '/' and '/stand' separately.
Is there anything interesting in the /var/opt/ignite/recovery/latest/recovery.log file?
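The unquoted `$*` hazard is easy to demonstrate in any POSIX shell: it re-splits the supplied arguments on whitespace, so an option value containing a space would reach make_tape_recovery as two separate words. A minimal sketch (the function names are hypothetical, not from the thread):

```shell
#!/bin/sh
# Compare how many words each expansion yields for the same single argument.
count_star() {
    # Unquoted $* joins the arguments and re-splits them on whitespace.
    set -- $*
    echo $#
}
count_at() {
    # "$@" preserves each original argument as one word.
    set -- "$@"
    echo $#
}
count_star "include=/my dir"   # prints 2: the value was split
count_at   "include=/my dir"   # prints 1: the value survived intact
```

In the script above, replacing `$*` with `"$@"` on the make_tape_recovery line avoids this class of problem.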
12-14-2009 09:52 AM
Re: Ignite Error
I can't find anything in it other than the error mentioned above.
Thanks
Vasanth
12-14-2009 10:12 AM
Re: Ignite Error
Also, I am frequently getting the following error:
errors from the I/O subsystem. 10 I/O error entries were lost
Is this helpful in figuring out the issue?
Thanks
Vasanth
12-14-2009 12:31 PM
Re: Ignite Error
Please post the output of:
grep root /etc/passwd
bdf
dmesg
SEP
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
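The three commands requested above can be gathered in one pass. This is only a convenience sketch; the output path and the `df -k` fallback (so the script also runs off HP-UX) are assumptions, not from the thread:

```shell
#!/bin/sh
# Collect the requested diagnostics into one report file for posting.
OUT=/tmp/ignite_diag.txt    # arbitrary location; adjust as needed
{
    echo "== root entry in /etc/passwd =="
    grep '^root:' /etc/passwd
    echo "== filesystem usage =="
    bdf 2>/dev/null || df -k    # bdf is HP-UX-specific; df -k is a generic fallback
    echo "== kernel message buffer =="
    dmesg
} > "$OUT" 2>&1
echo "wrote $OUT"
```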
12-14-2009 12:49 PM
Re: Ignite Error
dmesg output
dmesg
Dec 14 15:42
...
wsio_claim init failed isc=0x100649400 name=fcd_vbus
1/0/2/1/1: Bus Instance number 321 exceeded the maximum allowed instance number.
wsio_claim init failed isc=0x100649400 name=fcd_vbus
1/0/2/1/1: Bus Instance number 322 exceeded the maximum allowed instance number.
wsio_claim init failed isc=0x100649400 name=fcd_vbus
1/0/2/1/1: Bus Instance number 323 exceeded the maximum allowed instance number.
wsio_claim init failed isc=0x100649400 name=fcd_vbus
1/0/2/1/1: Bus Instance number 324 exceeded the maximum allowed instance number.
wsio_claim init failed isc=0x100649400 name=fcd_vbus
1/0/2/1/1: Bus Instance number 325 exceeded the maximum allowed instance number.
wsio_claim init failed isc=0x100649400 name=fcd_vbus
1/0/2/1/1: Bus Instance number 326 exceeded the maximum allowed instance number.
wsio_claim init failed isc=0x100649400 name=fcd_vbus
1/0/2/1/1: Bus Instance number 327 exceeded the maximum allowed instance number.
wsio_claim init failed isc=0x100649400 name=fcd_vbus
1/0/2/1/1: Bus Instance number 328 exceeded the maximum allowed instance number.
wsio_claim init failed isc=0x100649400 name=fcd_vbus
1/0/2/1/1: Bus Instance number 329 exceeded the maximum allowed instance number.
wsio_claim init failed isc=0x100649400 name=fcd_vbus
1/0/6/1/0: Bus Instance number 330 exceeded the maximum allowed instance number.
wsio_claim init failed isc=0x101040400 name=fcd_vbus
1/0/6/1/0: Bus Instance number 331 exceeded the maximum allowed instance number.
wsio_claim init failed isc=0x101040400 name=fcd_vbus
1/0/6/1/0: Bus Instance number 332 exceeded the maximum allowed instance number.
wsio_claim init failed isc=0x101040400 name=fcd_vbus
1/0/6/1/0: Bus Instance number 333 exceeded the maximum allowed instance number.
wsio_claim init failed isc=0x101040400 name=fcd_vbus
1/0/6/1/0: Bus Instance number 334 exceeded the maximum allowed instance number.
wsio_claim init failed isc=0x101040400 name=fcd_vbus
1/0/6/1/0: Bus Instance number 335 exceeded the maximum allowed instance number.
wsio_claim init failed isc=0x101040400 name=fcd_vbus
1/0/6/1/0: Bus Instance number 336 exceeded the maximum allowed instance number.
wsio_claim init failed isc=0x101040400 name=fcd_vbus
1/0/6/1/0: Bus Instance number 337 exceeded the maximum allowed instance number.
wsio_claim init failed isc=0x101040400 name=fcd_vbus
1/0/6/1/0: Bus Instance number 338 exceeded the maximum allowed instance number.
wsio_claim init failed isc=0x101040400 name=fcd_vbus
1/0/6/1/0: Bus Instance number 339 exceeded the maximum allowed instance number.
wsio_claim init failed isc=0x101040400 name=fcd_vbus
1/0/6/1/0: Bus Instance number 340 exceeded the maximum allowed instance number.
wsio_claim init failed isc=0x101040400 name=fcd_vbus
1/0/6/1/0: Bus Instance number 341 exceeded the maximum allowed instance number.
wsio_claim init failed isc=0x101040400 name=fcd_vbus
1/0/6/1/0: Bus Instance number 342 exceeded the maximum allowed instance number.
wsio_claim init failed isc=0x101040400 name=fcd_vbus
1/0/6/1/0: Bus Instance number 343 exceeded the maximum allowed instance number.
wsio_claim init failed isc=0x101040400 name=fcd_vbus
1/0/6/1/0: Bus Instance number 344 exceeded the maximum allowed instance number.
wsio_claim init failed isc=0x101040400 name=fcd_vbus
1/0/6/1/0: Bus Instance number 345 exceeded the maximum allowed instance number.
wsio_claim init failed isc=0x101040400 name=fcd_vbus
1/0/6/1/0: Bus Instance number 346 exceeded the maximum allowed instance number.
wsio_claim init failed isc=0x101040400 name=fcd_vbus
1/0/10/1/1: Bus Instance number 349 exceeded the maximum allowed instance number.
wsio_claim init failed isc=0x11b90bc00 name=fcd_vbus
1/0/10/1/1: Bus Instance number 350 exceeded the maximum allowed instance number.
wsio_claim init failed isc=0x11b90bc00 name=fcd_vbus
0/0/8/1/1: Bus Instance number 290 exceeded the maximum allowed instance number.
wsio_claim init failed isc=0x100649400 name=fcd_vbus
0/0/8/1/1: Bus Instance number 291 exceeded the maximum allowed instance number.
wsio_claim init failed isc=0x100649400 name=fcd_vbus
root:*:0:3::/:/sbin/sh <<-- Passwd
Filesystem kbytes used avail %used Mounted on
/dev/vg00/root 10485760 8025188 2384378 77% /
/dev/vg00/boot 255253 42963 186764 19% /stand
/dev/vg01/lvhome 4194304 2007099 2051118 49% /home
/dev/vgnbu/lvnbu 14221312 3882888 10258024 27% /usr/openv
/dev/vgsap00/lvsapP22
8388608 575631 7324710 7% /usr/sap/P22/DVEBMGS03
/dev/vgsap00/lvsapmnt
8388608 3866618 4449608 46% /export/sapmnt/P22
/dev/vgsap00/lvsaptrans
43573248 34552980 8918380 79% /export/usr/sap/trans
/dev/vgsap00/lvdbhome
26214400 12810299 12571201 50% /oracle
/dev/vgsap02/lvdblogA
3145728 3073919 67324 98% /oracle/P22/origlogA
/dev/vgsap02/lvdblogB
3145728 3073919 67324 98% /oracle/P22/origlogB
/dev/vgsap02/lvdbmirA
3145728 3073919 67324 98% /oracle/P22/mirrlogA
/dev/vgsap02/lvdbmirB
3145728 3073919 67324 98% /oracle/P22/mirrlogB
/dev/vgsap02/lvsapdata1
284459008 278456440 5955744 98% /oracle/P22/sapdata1
/dev/vgsap02/lvsapdata2
284459008 280806440 3624152 99% /oracle/P22/sapdata2
/dev/vgsap02/lvsapdata3
284459008 280900984 3530296 99% /oracle/P22/sapdata3
/dev/vgsap02/lvsapdata4
284459008 279923592 4500064 98% /oracle/P22/sapdata4
/dev/vgsap02/lvsapdata5
284459008 282491784 1951920 99% /oracle/P22/sapdata5
/dev/vgsap02/lvsapdata6
284459008 281247008 3186984 99% /oracle/P22/sapdata6
/dev/vgsap02/lvsapdata7
284459008 274779296 9604160 97% /oracle/P22/sapdata7
/dev/vgsap02/lvsapdata8
284459008 279928808 4494824 98% /oracle/P22/sapdata8
/dev/vgsap02/lvsapdata9
213344256 212685704 653432 100% /oracle/P22/sapdata9
/dev/vgsap02/lvsapdata10
284459008 282776504 1669368 99% /oracle/P22/sapdata10
/dev/vgsap02/lvsapdata11
284459008 278834264 5580856 98% /oracle/P22/sapdata11
/dev/vgsap02/lvsapdata12
284459008 278924392 5491440 98% /oracle/P22/sapdata12
/dev/vgsap02/lvsapdata13
284459008 268870984 15466296 95% /oracle/P22/sapdata13
/dev/vgsap02/lvsapdata14
284459008 256165120 28072904 90% /oracle/P22/sapdata14
/dev/vgsap02/lvsapdata15
284459008 52491096 230155672 19% /oracle/P22/sapdata15
/dev/vgsap02/lvsapdata99
71114752 46298600 24622336 65% /oracle/P22/sapdata99
/dev/vgsap02/lvsaparch
56852480 5648696 50804368 10% /oracle/P22/saparch
/dev/vgsap00/lvsapreorg
52428800 4466240 47588288 9% /oracle/P22/sapreorg
/dev/vgsap00/lvsapput
20971520 7353329 12767057 37% /usr/sap/put
/dev/vgsap00/lvdbstage
8388608 5773639 2451590 70% /oracle/stage
/dev/vgsap02/lvworktmp
20971520 967216 19848064 5% /work_tmp
/dev/vgixos/ixosarchive
71118848 2469944 68112720 3% /export/ixos_exp/P22/archive
dcsgsap1:/export/sapmnt/P22/exe
8388608 3866624 4449608 46% /sapmnt/P22/exe
dcsgsap1:/export/sapmnt/P22/global
8388608 3866624 4449608 46% /sapmnt/P22/global
12-14-2009 04:24 PM
Re: Ignite Error
The make_tape_recovery command works without options probably because the vg00 devices are healthy, which suggests the problem lies with the non-vg00 volume groups. Check the other VGs for issues, and also check the hardware components using ioscan -fn and the cstm log tool.
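A small wrapper makes those checks easy to script. The helper below is generic (only `true`/`false` are exercised in the demo lines), while the commented-out HP-UX commands are the ones suggested above; exact list_expander arguments are not shown in the thread, so none are assumed here:

```shell
#!/bin/sh
# check_cmd: run a command, report ok/FAILED along with its exit status.
check_cmd() {
    "$@" > /dev/null 2>&1
    status=$?
    if [ "$status" -eq 0 ]; then
        echo "ok: $*"
    else
        echo "FAILED ($status): $*"
    fi
    return "$status"
}
# On the affected HP-UX host you would run, for example:
#   check_cmd vgdisplay -v                      # do all VGs activate cleanly?
#   check_cmd ioscan -fn                        # any missing/suspect hardware?
#   check_cmd /opt/ignite/lbin/list_expander    # the command the recovery log blames
# Portable demonstration:
check_cmd false   # prints: FAILED (1): false
check_cmd true    # prints: ok: true
```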
12-14-2009 08:43 PM
Re: Ignite Error
Running what(1) on a core file, as you did above, is typically useless. You should use gdb to get a stack trace:
gdb /opt/ignite/lbin/list_expander /usr/local/recovery/core
(gdb) bt
(gdb) info reg
(gdb) disas $pc-4*20 $pc+4*4
(gdb) q
>Sameer: The list_expander is failing because of receipt of SIGSEGV signal which might be from the I/O subsystem because of ongoing excessive I/O errors.
Yes, they could be related, but signal 11 (SIGSEGV) usually occurs due to sloppy programming.