StoreVirtual Storage

Is there a downloadable troubleshooting guide for HPE's StoreVirtual VSA?

 
SOLVED
Robert_Honore
Occasional Advisor

Is there a downloadable troubleshooting guide for HPE's StoreVirtual VSA?

Is there a troubleshooting guide for the HPE StoreVirtual VSA?

I'm currently trying to restore to service a 3-node StoreVirtual VSA cluster (3 management nodes) that crashed when a SmartArray storage adapter failed; the adapter has since been replaced.

At present, the situation is as follows:

* None of the VSA management nodes are online

* The CMC graphical user interface is unavailable, as the failure was sudden

* The VSA management nodes are able to boot, but they do not complete their startup process.

* I am able to open an SSH session to each of the VSA management nodes and reach the "CLIQ> " prompt

* All commands entered at that prompt fail.

Are such documents available, or is anyone able to give me a clue?

5 REPLIES
Robert_Honore
Occasional Advisor

Re: Is there a downloadable troubleshooting guide for HPE's StoreVirtual VSA?

Some additional information:

The StoreVirtual VSA provides the datastores for a 3-node vSphere 6.7 cluster. The vCenter appliance is also unavailable owing to the failure of the StoreVirtual VSA.

Rachna-K
HPE Pro
Solution

Re: Is there a downloadable troubleshooting guide for HPE's StoreVirtual VSA?

@Robert_Honore - There is no troubleshooting guide available.

We have the StoreVirtual VSA Installation and Configuration Guide:

 

https://support.hpe.com/hpesc/public/docDisplay?docId=emr_na-a00016289en_us

 

Can you please provide a few more details to help us understand the issue?

1. Did all 3 nodes running the VSA go down at the same time?

2. Once the hardware issues were fixed, did the VSA come back online?

3. Is the CMC not working, or is it just not seeing the VSAs?


Regards,
Rachna K
I am an HPE Employee





Note: While I am an HPE Employee, all of my comments (whether noted or not), are my own and are not any official representation of the company
Robert_Honore
Occasional Advisor

Re: Is there a downloadable troubleshooting guide for HPE's StoreVirtual VSA?

Can you please provide a few more details to help us understand the issue?

1. Did all 3 nodes running the VSA go down at the same time?

 

Yes, all 3 nodes running the VSA went down. Not simultaneously, but they all went down within a short time when we shut down the ESXi host we needed to repair (replacing the SmartArray host bus adapter).

 

2. Once the hardware issues were fixed, did the VSA come back online?

 

We have not yet been able to get the VSA nodes back online, even after confirming that the replacement SmartArray adapter was working and that its configuration had been restored. Although the VSA node appliances boot, their startup process apparently does not run to completion.

 

3. Is the CMC not working, or is it just not seeing the VSAs?

 

The CMC and vCenter appliances are also virtual machines in the same environment, and they are both inoperable. We tried installing a CMC on a standalone server on the same network, but that CMC was unable to detect any of the VSA nodes.

 

We are able to establish an SSH session with each VSA node, but none of the commands submitted at the "CLIQ>" prompt work. They all fail, either with a message about a credential mismatch or with some other error condition. This happens even when we log in over SSH as the user "admin".
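For reference, a quick check along the following lines can at least confirm that the standalone CMC host reaches each node over TCP. This is only a minimal sketch: the node addresses are placeholders, and ports 22 and 16022 are assumptions about the plain SSH and CLiQ SSH listeners, so adjust them for your environment.

#!/usr/bin/env python3
"""Quick TCP reachability check from the standalone CMC host to the VSA nodes.

The node addresses below are placeholders; the ports are assumptions
(22 for plain SSH, 16022 for the StoreVirtual CLiQ SSH listener) --
substitute whatever your environment actually uses.
"""
import socket

VSA_NODES = ["192.0.2.11", "192.0.2.12", "192.0.2.13"]  # placeholder management IPs
PORTS = [22, 16022]                                      # assumed SSH / CLiQ ports

def tcp_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for node in VSA_NODES:
        for port in PORTS:
            state = "open" if tcp_open(node, port) else "unreachable"
            print(f"{node}:{port} {state}")

If the ports show as unreachable from the standalone server, the discovery problem is at the network level rather than in the CMC itself.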

 

 

Robert_Honore
Occasional Advisor

Manager process panics on startup of the StoreVirtual VSA node

Subject changed from "Re: Is there a downloadable troubleshooting guide for HPE's StoreVirtual VSA?"

From one of the ESXi hosts, we were able to extract the following text from a file called "manager-error.txt".

I don't know how to resolve the condition indicated in these <crit> error messages. Any assistance would be greatly appreciated.

2023-05-09T16:14:07.836 SVVSA-CZ3742YVT8 dbd_manager[12142]: <crit>: : manager::SYS:syslog_priority_critical:setting priority to critical
2023-05-09T16:14:07.836 SVVSA-CZ3742YVT8 dbd_manager[12142]: <crit>: : manager::
2023-05-09T16:14:07.836 SVVSA-CZ3742YVT8 dbd_manager[12142]: <crit>: : ========== Task manager is going to abort -- monotime=38573410.984.380 realdate=Tue May 9 16:14:07 2023 ==========
2023-05-09T16:14:07.837 SVVSA-CZ3742YVT8 dbd_manager[12142]: <crit>: : manager::PANIC:
2023-05-09T16:14:07.837 SVVSA-CZ3742YVT8 dbd_manager[12142]: <crit>: : Corrupt unmarsh message: unmarsh_ofs=40 contained_object_length=1870004224 iovec_len=393
2023-05-09T16:14:07.837 SVVSA-CZ3742YVT8 dbd_manager[12142]: <crit>: : [Security of your SAN/iQ Inter-NSM Network should be verified]
2023-05-09T16:14:07.838 SVVSA-CZ3742YVT8 dbd_manager[12142]: <crit>: :
2023-05-09T16:14:07.838 SVVSA-CZ3742YVT8 dbd_manager[12142]: <crit>: : SYS:aborting(pid=12142,tid=12142)
2023-05-09T16:14:07.838 SVVSA-CZ3742YVT8 dbd_manager[12142]: <crit>: : U:STACKTRACE:BEGIN:ptrs=64b,pid=12142,tid=12142,pthread=0x7f7e2b179d00
2023-05-09T16:14:07.838 SVVSA-CZ3742YVT8 dbd_manager[12142]: <crit>: :
2023-05-09T16:14:08.017 SVVSA-CZ3742YVT8 dbd_manager[12142]: <crit>: : U:[00] called from [127](infr/stacktrace.c:3321 ) STACKTRACE
2023-05-09T16:14:08.028 SVVSA-CZ3742YVT8 dbd_manager[12142]: <crit>: : U:[01] called from [127](infr/sys.c:466 ) do_abort
2023-05-09T16:14:08.039 SVVSA-CZ3742YVT8 dbd_manager[12142]: <crit>: : U:[02] called from [127](infr/sys.c:579 ) sys_do_panic
2023-05-09T16:14:08.050 SVVSA-CZ3742YVT8 dbd_manager[12142]: <crit>: : U:[03] called from [127](infr/marsh.c:519 ) unmarsh_corruption_panic
2023-05-09T16:14:08.058 SVVSA-CZ3742YVT8 dbd_manager[12142]: <crit>: : U:[04] called from [127](infr/marsh.h:235 ) unmarsh_corruption_check
2023-05-09T16:14:08.058 SVVSA-CZ3742YVT8 dbd_manager[12142]: <crit>: : U:[04] inline from [127](infr/types.c:135 ) unmarsh_string
2023-05-09T16:14:08.115 SVVSA-CZ3742YVT8 dbd_manager[12142]: <crit>: : U:[05] called from [131](infr/table.c:107 ) unmarsh_table_field
2023-05-09T16:14:08.124 SVVSA-CZ3742YVT8 dbd_manager[12142]: <crit>: : U:[06] called from [131](infr/table.c:153 (discriminator 2) ) unmarsh_table_header_flat
2023-05-09T16:14:08.133 SVVSA-CZ3742YVT8 dbd_manager[12142]: <crit>: : U:[07] called from [131](infr/table.c:260 ) unmarsh_table_header
2023-05-09T16:14:08.184 SVVSA-CZ3742YVT8 dbd_manager[12142]: <crit>: : U:[08] called from [142](manager/dbd_manager_stats.c:2317 ) volumes_stats_reply
2023-05-09T16:14:08.197 SVVSA-CZ3742YVT8 dbd_manager[12142]: <crit>: : U:[09] called from [142](manager/dbd_manager.c:4378 ) rpc_reply_handler
2023-05-09T16:14:08.206 SVVSA-CZ3742YVT8 dbd_manager[12142]: <crit>: : U:[10] called from [131](infr/rpc_client.c:344 ) deliver
2023-05-09T16:14:08.215 SVVSA-CZ3742YVT8 dbd_manager[12142]: <crit>: : U:[11] called from [131](infr/rpc_client.c:1294 ) rpc_client_receive_stream_done
2023-05-09T16:14:08.227 SVVSA-CZ3742YVT8 dbd_manager[12142]: <crit>: : U:[12] called from [142](manager/dbd_manager.c:4265 ) trans_stream_done
2023-05-09T16:14:08.236 SVVSA-CZ3742YVT8 dbd_manager[12142]: <crit>: : U:[13] called from [131](dbd/dbd_trans_stream.c:554 ) stream_recv
2023-05-09T16:14:08.245 SVVSA-CZ3742YVT8 dbd_manager[12142]: <crit>: : U:[14] called from [131](dbd/dbd_trans_stream.c:593 ) stream_recv_handler
2023-05-09T16:14:08.254 SVVSA-CZ3742YVT8 dbd_manager[12142]: <crit>: : U:[15] called from [127](infr/alarm_auto.h:158 (discriminator 2) ) sock_apply
2023-05-09T16:14:08.254 SVVSA-CZ3742YVT8 dbd_manager[12142]: <crit>: : U:[15] inline from [127](trans/real.c:689 (discriminator 2) ) poll_deliver_help
2023-05-09T16:14:08.263 SVVSA-CZ3742YVT8 dbd_manager[12142]: <crit>: : U:[16] called from [127](trans/real.c:767 ) poll_deliver
2023-05-09T16:14:08.263 SVVSA-CZ3742YVT8 dbd_manager[12142]: <crit>: : U:[16] inline from [127](trans/real.c:1129 ) poll_via_epoll
2023-05-09T16:14:08.270 SVVSA-CZ3742YVT8 dbd_manager[12142]: <crit>: : U:[17] called from [127](infr/alarm.c:389 ) alarm_poll_help
2023-05-09T16:14:08.277 SVVSA-CZ3742YVT8 dbd_manager[12142]: <crit>: : U:[18] called from [127](infr/appl.c:152 ) appl_poll
2023-05-09T16:14:08.277 SVVSA-CZ3742YVT8 dbd_manager[12142]: <crit>: : U:[18] inline from [127](infr/appl.c:190 ) appl_poll_drain
2023-05-09T16:14:08.283 SVVSA-CZ3742YVT8 dbd_manager[12142]: <crit>: : U:[19] called from [127](infr/appl.c:256 ) appl_mainloop
2023-05-09T16:14:08.289 SVVSA-CZ3742YVT8 dbd_manager[12142]: <crit>: : U:[20] called from [000](manager/dbd_manager_app.c:1516 ) main
2023-05-09T16:14:08.289 SVVSA-CZ3742YVT8 dbd_manager[12142]: <crit>: :
2023-05-09T16:14:08.289 SVVSA-CZ3742YVT8 dbd_manager[12142]: <crit>: : STACKTRACE used map entry 000 [000]: 0x000000400000-0x00000040c000 r-xp /etc/lefthand/system/dbd_manager
2023-05-09T16:14:08.289 SVVSA-CZ3742YVT8 dbd_manager[12142]: <crit>: : STACKTRACE used map entry 098 [098]: 0x7f7e28331000-0x7f7e284f3000 r-xp /usr/lib64/libc-2.17.so
2023-05-09T16:14:08.290 SVVSA-CZ3742YVT8 dbd_manager[12142]: <crit>: : STACKTRACE used map entry 113 [113]: 0x7f7e28fca000-0x7f7e28fe1000 r-xp /usr/lib64/libpthread-2.17.so
2023-05-09T16:14:08.290 SVVSA-CZ3742YVT8 dbd_manager[12142]: <crit>: : STACKTRACE used map entry 127 [127]: 0x7f7e29895000-0x7f7e29988000 r-xp /etc/lefthand/system/libens.so
2023-05-09T16:14:08.290 SVVSA-CZ3742YVT8 dbd_manager[12142]: <crit>: : STACKTRACE used map entry 131 [131]: 0x7f7e29c21000-0x7f7e29cec000 r-xp /etc/lefthand/system/libnf.so
2023-05-09T16:14:08.290 SVVSA-CZ3742YVT8 dbd_manager[12142]: <crit>: : STACKTRACE used map entry 142 [142]: 0x7f7e2ac63000-0x7f7e2ad70000 r-xp /etc/lefthand/system/libnfmanager.so
2023-05-09T16:14:08.291 SVVSA-CZ3742YVT8 dbd_manager[12142]: <crit>: : STACKTRACE used map entry 152 [152]: 0x7ffc7ac70000-0x7ffc7acb3000 rwxp [stack]
2023-05-09T16:14:08.291 SVVSA-CZ3742YVT8 dbd_manager[3847]: <crit>: : U:STACKTRACE:TASK: [01]manager{}={ node=99 thread@0x7f7e2b038e60=[01manager] ABORTING proc@0x7f7e2b038080 creator=TASK[00]manager create_time=46.142.367 sys_arena=0xdbf420 mt_arena=0xdbf420 trans_local=(nil) trans_root=0xdcc480 async=0x7f7e2b03ea20 parity=0x7f7e2b0403e0}
2023-05-09T16:14:08.291 SVVSA-CZ3742YVT8 dbd_manager[3847]: <crit>: : U:STACKTRACE:THREAD: [01 ABORTING]manager={pid=12142 tid=12142 real_tid=12142 pgrp=12142 state=RUNNING service=MAINLOOP proc=0x7f7e2b038080 current_task=[01]manager creator=TASK[00]manager create_time=46.142.366 age=38573365.297 ready_time=46.142 done_time=0.000 IDLE_time=38496885.662 last_stat_update=38573410.984 nice_inc=0 cpu_mask=0x0 initial_sys_seed=0xad16c4ed}
2023-05-09T16:14:08.291 SVVSA-CZ3742YVT8 dbd_manager[3847]: <crit>: : U:STACKTRACE:END
2023-05-09T16:14:08.291 SVVSA-CZ3742YVT8 dbd_manager[3847]: <crit>: : STACKTRACE:sys_seed=0xfbcacd74
2023-05-09T16:14:08.291 SVVSA-CZ3742YVT8 dbd_manager[3847]: <crit>: : STACKTRACE:SAN/iQ Version 12.8.00.0102-64-opt SVN URL: svn://yale.lhn.com/swd-lhn-lhn/branches/stout/cluster Revision: 66751
2023-05-09T16:14:08.291 SVVSA-CZ3742YVT8 dbd_manager[3847]: <crit>: : STACKTRACE pid=12142: time since first stacktrace_init = 10714:49:26
2023-05-09T16:14:08.291 SVVSA-CZ3742YVT8 dbd_manager[3847]: <crit>: : sock_dump_all:sock_debug_dump_all(): Socket debugging compiled out
2023-05-09T16:14:08.291 SVVSA-CZ3742YVT8 dbd_manager[3847]: <crit>: : manager::SOCK:sock_dump_all:sock_debug_dump_all(): Socket debugging compiled out
2023-05-09T16:14:08.292 SVVSA-CZ3742YVT8 dbd_manager[3847]: <crit>: :
2023-05-09T16:14:08.292 SVVSA-CZ3742YVT8 dbd_manager[3847]: <crit>: : <SLAB> /PROCESS/UNSHARED/NON-MMAP ={ ltime=000 addr=0x7f7e2b13a000 size= 237568 holds=01 lock_rc= 0 dob='Thu Feb 17 05:24:42 2022' next_ltime=000 next=''} </SLAB>
2023-05-09T16:14:08.292 SVVSA-CZ3742YVT8 dbd_manager[3847]: <crit>: : STACKTRACE:Linux version 3.10.0-957.21.3.el7.x86_64 (root@stout-build.lhn.com) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-36) (GCC) ) #1 SMP Tue Aug 13 16:22:46 MDT 2019
2023-05-09T16:14:08.542 SVVSA-CZ3742YVT8 dbd_manager[3102]: <crit>: : manager::SYS:syslog_priority_critical:setting priority to critical
2023-05-09T16:14:08.543 SVVSA-CZ3742YVT8 dbd_manager[3102]: <crit>: : manager::
2023-05-09T16:14:08.543 SVVSA-CZ3742YVT8 dbd_manager[3102]: <crit>: : ========== Task manager is going to abort -- monotime=38573411.688.429 realdate=Tue May 9 16:14:08 2023 ==========
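For anyone who hits a similar crash, the lines that matter in the excerpt above are the abort banner, the PANIC reason ("Corrupt unmarsh message ...") and the numbered U:[nn] stack frames. A small helper like the one below, only a sketch that assumes a local copy of manager-error.txt in the syslog-style format shown above, pulls those lines out so they can be pasted into a support case:

#!/usr/bin/env python3
"""Summarize a StoreVirtual manager-error.txt excerpt.

Assumes the syslog-style "<crit>" lines shown above and a local copy of the
file; prints only the abort banner, the PANIC reason, and the stack frames.
"""
import re
import sys

# Lines worth keeping: the abort banner, the PANIC marker and its reason,
# and the numbered stack-trace frames ("U:[00] called from ...").
KEEP = re.compile(r"(Task manager is going to abort|PANIC|Corrupt unmarsh|U:\[\d{2}\])")

def summarize(path: str) -> None:
    with open(path, errors="replace") as fh:
        for line in fh:
            if KEEP.search(line):
                # Strip the syslog prefix up to and including "<crit>: : " if present.
                cleaned = re.sub(r"^.*<crit>:\s*:\s*", "", line.rstrip())
                print(cleaned)

if __name__ == "__main__":
    summarize(sys.argv[1] if len(sys.argv) > 1 else "manager-error.txt")

Usage would be something like: python3 summarize_manager_error.py manager-error.txt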

Rachna-K
HPE Pro

Re: Manager process panics on startup of the StoreVirtual VSA node

@Robert_Honore 

Thank you for sharing the details.

This issue would need in-depth analysis and troubleshooting.

We would request that you raise a support ticket.


Regards,
Rachna K
I am an HPE Employee





Note: While I am an HPE Employee, all of my comments (whether noted or not), are my own and are not any official representation of the company