Operating System - HP-UX

tar cannot open /dev/rmt/0mn

 
Robert-Jan Goossens
Honored Contributor

Re: tar cannot open /dev/rmt/0mn

Hi,

could you post the output of

# file /dev/rmt/0m
# ll /dev/rmt/0m

Regards,
Robert-Jan
Rohit Nagre
Occasional Advisor

Re: tar cannot open /dev/rmt/0mn

Dear Mark,
it shows me the list of files on the tape.
Rohit Nagre
Occasional Advisor

Re: tar cannot open /dev/rmt/0mn

output as follows
# file /dev/rmt/0m
/dev/rmt/0m: character special (205/143360)
# ll /dev/rmt/0m
crwxrwxrwx 2 bin bin 205 0x023000 Apr 5 05:31 /dev/rmt/0m
MarkSyder
Honored Contributor

Re: tar cannot open /dev/rmt/0mn

So it can read from a tape but not write to it. Is the tape write protected?
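
One quick way to check from the HP-UX host is mt(1). Below is a minimal sketch of a helper that scans the drive status output for the write-protect flag; the helper name is made up, and the device path is an example from this thread — adjust it to your drive.

```shell
# Sketch: detect the Write-Protected flag in `mt ... stat` output.
is_write_protected() {
    # Reads mt status text on stdin; succeeds if the tape reports
    # itself as write-protected.
    grep -q 'Write-Protected'
}

# Usage on the server (not run here; device path is an example):
#   mt -f /dev/rmt/0mn stat | is_write_protected && echo "tape is write-protected"
```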

Mark
The triumph of evil requires only that good men do nothing
Dennis Handly
Acclaimed Contributor

Re: tar cannot open /dev/rmt/0mn

>Pete: I rarely even bother to look at attachments because they are generally posted as Word documents.

In this case it was a .txt file. I later realized it was ioscan -funC tape if that made a difference.
One advantage of .txt attachments is that you do see the proper spacing.

>Being a HP-UX bigot, I browse on hp-ux

Me too.
AL_3001
Regular Advisor

Re: tar cannot open /dev/rmt/0mn

Hi Rohit,

As you can read the content from the tape but not write to it, the first thing that comes to mind is just what Mark has said: "Is the tape write protected?" In most cases it is, so maybe your problem is solved. If it is not write protected, let us know.

Cheers,
Ashish
Akif_1
Super Advisor

Re: tar cannot open /dev/rmt/0mn

Hi Rohit,

I hope this solution solves your problem.


#ioscan -fnC tape

Go to the /dev/rmt directory and delete the special files listed there (be sure you are in the /dev/rmt directory first).

Note: confirm your tape drive's hardware path from the ioscan output before running the commands below:

#cd /dev/rmt
#rmsf -H 1/0/1/0/0/0.3.0 ..... path example

#ioscan -fnC tape

#insf -H 1/0/1/0/0/0.3.0 ....... path example
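
The steps above can be sketched as a dry-run helper that only prints the command sequence for a given hardware path, so you can review it before anything is actually removed. The helper name is illustrative, and the path shown is the example from this post.

```shell
# Sketch: print the device-file recreation sequence for review.
# Nothing here touches the system; it is a dry run only.
recreate_tape_sketch() {
    hw_path=$1
    echo "ioscan -fnC tape"
    echo "cd /dev/rmt"
    echo "rmsf -H $hw_path"
    echo "ioscan -fnC tape"
    echo "insf -H $hw_path"
}

# Example (path from this post):
#   recreate_tape_sketch 1/0/1/0/0/0.3.0
```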


All the best
T(ogether) E(very one) A(chieve) M(ore)
sparcguy
New Member

Re: tar cannot open /dev/rmt/0mn

I encountered this issue too, tried Google, and came across this thread. I tried all the suggestions above but nothing worked.

 

Putting info here in case somebody googling for it.

 

In my case I was trying to do a Solaris ufsdump from my Sun server to an HP-UX server connected to an LTO3 tape drive.

 

ssh usgs0131_bk ufsdump 0ucf - /ist | dd bs=126b of=/dev/rmt/0mn

 

On the Solaris server it kept saying cannot open /dev/rmt/0mn.

 

So by elimination I troubleshot on the HP-UX server directly, tried the tar command, and got the same error.
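
That tar write test can be packaged as a small probe like the following. This is a sketch: the helper name is made up, and note that it writes at the current tape position, so use a scratch tape.

```shell
# Sketch: quick tar write probe. Writing a tiny scratch file to the
# device fails fast if the tape (or drive/media combination) is not
# writable. WARNING: writes at the current tape position.
tar_write_probe() {
    dev=$1
    probe=/tmp/tar_probe.$$
    echo "write test" > "$probe"
    if tar cf "$dev" "$probe"; then
        echo "writable"
    else
        echo "not writable - check write protection / media generation"
    fi
    rm -f "$probe"
}

# Example (device path is the one from this thread):
#   tar_write_probe /dev/rmt/0mn
```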

 

There's a useful command you need to know, and that's mt. When I ran mt it showed the tape was write-protected. I later learnt from a colleague that my LTO3 drive cannot write to an LTO1 cartridge (LTO drives can only write to the current and previous generation of media), and that was the problem.

 

-- This is what you should see

 

root@USGS0127[/]#mt -f /dev/rmt/0mn stat
Drive:  HP Ultrium 3-SCSI
Format:
Status: [41114400] BOT online compression immediate-report-mode
File:   0
Block:  0

-- I got the write-protect message with an LTO1 cartridge even though the cartridge itself wasn't write-protected

 

root@USGS0127[/]#mt -f /dev/rmt/0mn stat
Drive:  HP Ultrium 3-SCSI
Format:
Status: [41114400] Write-Protected immediate-report-mode
File:   0
Block:  0

After I changed to the correct tape (LTO3 or LTO2), it worked:

 

root@USGS0127[/]#ssh usgs0131_bk ufsdump 0ucf - /ist | dd bs=126b of=/dev/rmt/0mn

 

  DUMP: Date of this level 0 dump: Fri Jul 11 23:18:06 2014
  DUMP: Date of last level 0 dump: the epoch
  DUMP: Dumping /dev/md/rdsk/d3 (usgs0131:/ist) to standard output.
  DUMP: Mapping (Pass I) [regular files]
  DUMP: Mapping (Pass II) [directories]
  DUMP: Writing 63 Kilobyte records
  DUMP: Estimated 5145510 blocks (2512.46MB).
  DUMP: Dumping (Pass III) [directories]
  DUMP: Dumping (Pass IV) [regular files]
  DUMP: 5145460 blocks (2512.43MB) on 1 volume at 9080 KB/sec
  DUMP: DUMP IS DONE
  DUMP: Level 0 dump on Fri Jul 11 23:18:06 2014
0+346148 records in
0+346148 records out
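
A side note on the pipeline: dd's bs=126b means 126 x 512-byte blocks per record, which is exactly the 63 KB record size ufsdump reports above ("Writing 63 Kilobyte records"). The arithmetic checks out:

```shell
# 126 blocks x 512 bytes per block = one dd record
echo $((126 * 512))          # prints 64512 (bytes per record)
echo $((126 * 512 / 1024))   # prints 63 (KB per record)
```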

 

root@USGS0127[/]#