StoreEver Tape Storage

Re: ultrium 7 chronic "failed to read partition labels"

 
pkolbe
Occasional Visitor

ultrium 7 chronic "failed to read partition labels"

I must be doing something wrong. One in every 3 tapes fails to re-mount after copying to them.

My process looks like this: 

mkltfs --device=/dev/sg0
ltfs -o devname=/dev/sg1 /exports/tape7
rsync -avhi /mnt/nfsvolume/2tb-of-stuff /exports/tape7/
ls /exports/tape7/2tb-of-stuff   # confirm it worked
sync
umount /exports/tape7
# [manually eject tape and re-insert]
ltfs -o devname=/dev/sg1 /exports/tape7
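For what it's worth, here is the unload part of my sequence written as a small function that fails loudly at each step instead of assuming success (the mount point is from my setup; adjust as needed):

```shell
# Sketch of the unload sequence with explicit error checks.
unload_tape() {
  local mountpoint=$1
  sync                 || return 1   # flush cached writes before unmounting
  umount "$mountpoint" || return 1   # LTFS writes its final index during unmount
  echo "OK to eject the cartridge from $mountpoint"
}

# e.g. unload_tape /exports/tape7
```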

 

and then the re-mount fails with this message:

root@machine: # ltfs -o devname=/dev/sg0 /exports/tape7
3c35 LTFS14000I LTFS starting, HPE StoreOpen Standalone version 3.2.0, log level 2
3c35 LTFS14058I LTFS Format Specification version 2.2.0
3c35 LTFS14104I Launched by "ltfs -o devname=/dev/sg0 /exports/tape7"
3c35 LTFS14105I This binary is built for Linux (x86_64)
3c35 LTFS14106I GCC version is 4.4.4 20100726 (Red Hat 4.4.4-13)
3c35 LTFS17087I Kernel version: Linux version 2.6.32-642.15.1.el6.x86_64 (mockbuild@c1bm.rdu2.centos.org) (gcc version 4.4.7 20120313 (Red Hat 4.4.7-17) (GCC) ) #1 SMP Fri Feb 24 14:31:22 UTC 2017 x86_64
3c35 LTFS17089I Distribution: CentOS release 6.8 (Final)
3c35 LTFS17089I Distribution: CentOS release 6.8 (Final)
3c35 LTFS17089I Distribution: CentOS release 6.8 (Final)
3c35 LTFS17089I Distribution: LSB_VERSION=base-4.0-amd64:base-4.0-noarch:core-4.0-amd64:core-4.0-noarch:graphics-4.0-amd64:graphics-4.0-noarch:printing-4.0-amd64:printing-4.0-noarch
3c35 LTFS14063I Sync type is "time", Sync time is 300 sec
3c35 LTFS17085I Plugin: Loading "ltotape" driver
3c35 LTFS17085I Plugin: Loading "unified" iosched
3c35 LTFS20013I Drive type is HP LTO7, serial number is Confidential Info Erased
3c35 LTFS17160I Maximum device block size is 524288
3c35 LTFS11005I Mounting the volume
3c35 LTFS11175E Cannot read ANSI label: expected 80 bytes, but received 4096
3c35 LTFS11170E Failed to read label (-1012) from partition 0
3c35 LTFS11009E Cannot read volume: failed to read partition labels.
3c35 LTFS14013E Cannot mount the volume
3c35 LTFS20076I Triggering drive diagnostic dump
3c35 LTFS20096I Diagnostic dump complete

Any idea? It's happening way too much.

thx,

-peter

Shaan_M
HPE Pro

Re: ultrium 7 chronic "failed to read partition labels"

Hello @pkolbe,

It seems the OS is not able to scan those drives and mount them. Remove the cartridges that fail to mount.

They may be bad; however, to confirm that we may need a support ticket collected via L&TT.

https://support.hpe.com/hpesc/public/home/driverHome?sp4ts.oid=1009019479

The link above is the L&TT download page for your OS.

Regards
Shaan

I am an HPE Employee
Register a Kudo by clicking the thumb if this helped in your issue.
Please consider marking it as an Accepted Solution if issue is resolved.

 

 


Gans_Tapes
HPE Pro

Re: ultrium 7 chronic "failed to read partition labels"

Hello @pkolbe

To add to Shaan's feedback, this is either a hardware issue (the drive being unable to read the barcode/label information, or a suspect bad tape or batch of tapes), or it could be related to an inconsistent index partition.

In normal usage, HPE StoreOpen manages the tape contents automatically, with no particular user intervention beyond configuring and using the volume. However, when unexpected problems arise, it may become necessary to check the integrity of the volume and potentially to repair a damaged volume. Some of these problematic events include:

• Unexpected loss of power to the tape drive

• Interrupting communications with the drive (for example unplugging the SAS cable)

• Operating System crash or abnormal shutdown

The scale of damage to the volume will depend on what was happening at the time the problem occurred. If the volume had not been modified since mounting, then in most circumstances there will be no damage. If the volume had been modified and a current copy of the index updated on tape, then although the volume may be inconsistent, the HPE StoreOpen software will normally be able to restore consistency and so recover the volume when it is next mounted. The worst cases occur when there is no current copy of the index on tape, or when the drive is actually writing at the time of the unexpected event.  
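If it is the index, a consistency check is the usual next step. As a rough sketch, the error IDs below are taken from your log; on the real host you would follow a positive match with ltfsck (exact option syntax is in the user guide):

```shell
# Rough helper: scan a saved LTFS mount log for the label/index errors
# seen in your output; if present, the volume likely needs a check/repair.
needs_ltfsck() {
  grep -Eq 'LTFS11175E|LTFS11170E|LTFS11009E' "$1"
}

# On the real host, a consistency check would then look something like:
#   ltfsck /dev/sg0     # see the StoreOpen user guide for exact options
```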

Could you confirm whether you are using HPE StoreOpen Automation, HPE StoreOpen Standalone, or a third-party utility to read/write the LTFS tapes?

Is the LTO drive a part of a library or a standalone drive?

To verify whether this is a hardware issue, the LTFS tape may need to be re-initialized as a non-LTFS tape. Then attempt a drive assessment and/or read/write test via Library and Tape Tools (https://support.hpe.com/hpesc/public/home/driverHome?sp4ts.oid=1009019479) to validate the drive and media health. Note that re-initializing a tape means you would lose the data on it. Additionally, log files are available under /var/log/messages and /var/log/ltfs_<date and timestamp>_<drive SN>.ltd on a Linux host. An alternative is to try a non-LTFS tape and then run the tests via the L&TT utility.
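For the support ticket, one quick way to bundle those logs is a small helper like the following (a sketch only; the ltfs_* file names above include the date and drive serial, and LOGDIR would be /var/log on the real host):

```shell
# Sketch: bundle the LTFS-related logs mentioned above into one archive
# for a support ticket. Requires GNU tar.
collect_ltfs_logs() {
  local logdir=$1 out=$2
  ( cd "$logdir" && find . -maxdepth 1 \( -name 'messages*' -o -name 'ltfs_*' \) -print0 ) \
    | tar -czf "$out" -C "$logdir" --null -T -
}

# e.g. collect_ltfs_logs /var/log /tmp/ltfs-logs.tar.gz
```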

ltfscap -m <barcode> --device=/dev/sg0    # report the capacity of the tape with that barcode

There are several other ltfsck options that can be used to verify the integrity of an LTFS volume; refer to the HPE StoreOpen Automation user guide for details.

There is another thread for a similar issue here.

Reference documents:

HPE StoreOpen LTFS Best Practices -  https://h20195.www2.hpe.com/v2/GetDocument.aspx?docname=4AA5-1230ENW&cc=us&lc=en 

HPE StoreOpen Automation user guide (ltfsck and recovery options) - http://h20628.www2.hp.com/km-ext/kmcsdirect/emr_na-c04991590-1.pdf


I work for HPE
