Community Home > Servers and Operating Systems > Legacy > Operating System - Tru64 Unix > Mount Drive
09-25-2007 03:06 AM
Mount Drive
One of the drives housing the company database (a number of .dat and .idx files; the OS is on a separate drive) has failed. The failure was as follows:
AdvFS I/O error:
Volume: /dev/rz11a
Tag: 0xfffffff7.0000
Page: 191
Block: 5344
Block count: 16
Type of operation: Write
Error: 5
How do I mount the new drive and set it up as it was prior to the failure?
Within Unix, if I ran df the drive was listed as /mydrive.
I have noted that /etc/fdmns/data contains a file: rz11a
Many Thanks,
Phil.
09-25-2007 03:18 AM
Re: Mount Drive
ls /etc/fdmns/*
This returned:
/etc/fdmns/data:
rz11a
/etc/fdmns/db:
rz9a
/etc/fdmns/db3:
rz10a
/etc/fdmns/prog:
rz12c
/etc/fdmns/test:
rz13c
/etc/fdmns/root_domain:
rz8a
/etc/fdmns/usr_domain:
rz8g
The one that has failed is: rz11a
Regards,
Phil.
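The listing above reflects how AdvFS organises things on Tru64: each domain is a directory under /etc/fdmns whose entries are symlinks to the member volumes. A minimal sketch of walking that layout to find which domain a failed device belongs to (run here against a mock directory, since the real /etc/fdmns only exists on Tru64; the find_domain helper name is my own):

```shell
#!/bin/sh
# Hypothetical helper: print the AdvFS domain directory that contains
# a given volume name, mirroring the /etc/fdmns layout on Tru64.
find_domain() {    # $1 = fdmns root, $2 = volume name (e.g. rz11a)
    for dom in "$1"/*; do
        if [ -L "$dom/$2" ] || [ -e "$dom/$2" ]; then
            basename "$dom"
        fi
    done
}

# Demo against a mock /etc/fdmns built in a temp directory:
FDMNS=$(mktemp -d)
mkdir -p "$FDMNS/data" "$FDMNS/db"
ln -s /dev/rz11a "$FDMNS/data/rz11a"    # domain "data" holds rz11a
ln -s /dev/rz9a  "$FDMNS/db/rz9a"       # domain "db" holds rz9a

find_domain "$FDMNS" rz11a    # prints: data
```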
09-25-2007 10:28 AM
Re: Mount Drive
Device: SDT-9000 Bus: 0, Target: 1, Lun: 0, Type: Sequential Access
Device: RRD46 Bus: 0, Target: 5, Lun: 0, Type: Read-Only Direct Access
Device: RZ1CB-CA Bus: 1, Target: 0, Lun: 0, Type: Direct Access
Device: RZ1CB-CA Bus: 1, Target: 1, Lun: 0, Type: Direct Access
Device: RZ1CB-CA Bus: 1, Target: 2, Lun: 0, Type: Direct Access
Device: BD0096826B Bus: 1, Target: 3, Lun: 0, Type: Direct Access
Device: BB00912301 Bus: 1, Target: 4, Lun: 0, Type: Direct Access
Device: BB00912301 Bus: 1, Target: 5, Lun: 0, Type: Direct Access
The new drive is:
Device: BD0096826B Bus: 1, Target: 3, Lun: 0, Type: Direct Access
When I boot into Unix and run df, the drive is not listed.
How do I label and mount the drive so that it appears as it did prior to the failure, as below?
Filesystem 512-blocks Used Available Capacity Mounted on
root_domain#root 524288 252002 259744 50% /
/proc 0 0 0 100% /proc
usr_domain#usr 4194304 1709346 2107616 45% /usr
mtmsprog#mtms 17773520 6005498 11707696 34% /mtms
mtmsdb#mtmsdb 8380080 1074670 7290608 13% /mtmsdb
mtmsdata#db1 17773520 12337668 5384544 70% /mtmsdb1
mtmstest#db2 17773520 4354378 13371952 25% /mtmsdb2
mtmsdb3#mtmsdb3 8380080 67392 8269984 1% /mtmsmisc
The drive that has now been replaced is:
mtmsdata#db1 17773520 12337668 5384544 70% /mtmsdb1
/etc/fstab is as follows:
/dev/rz8b swap1 ufs sw 0 2
root_domain#root / advfs rw 1 0
/proc /proc procfs rw 0 0
usr_domain#usr /usr advfs rw 1 0
mtmsprog#mtms /mtms advfs rw 1 0
mtmsdb#mtmsdb /mtmsdb advfs rw 1 0
mtmsdata#db1 /mtmsdb1 advfs rw 1 0
mtmstest#db2 /mtmsdb2 advfs rw 1 0
mtmsdb3#mtmsdb3 /mtmsmisc advfs rw 1 0
If you require additional information please ask.
Help / Advice appreciated.
Regards,
Phil.
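In the fstab above, the first field of each AdvFS line is domain#fileset rather than a device path, and the second is the mount point. A small sketch of pulling that entry apart for a given mount point, using a few of the lines quoted above as sample data (the entry_for helper name is my own):

```shell
#!/bin/sh
# Sketch: extract the AdvFS domain#fileset entry for a mount point
# from an fstab-style table, using sample rows from this thread.
FSTAB=$(mktemp)
cat > "$FSTAB" <<'EOF'
/dev/rz8b swap1 ufs sw 0 2
root_domain#root / advfs rw 1 0
usr_domain#usr /usr advfs rw 1 0
mtmsdata#db1 /mtmsdb1 advfs rw 1 0
EOF

entry_for() {    # $1 = mount point; prints the matching advfs first field
    awk -v mp="$1" '$2 == mp && $3 == "advfs" { print $1 }' "$FSTAB"
}

entry_for /mtmsdb1    # prints: mtmsdata#db1
# The part before "#" is the domain, the part after it the fileset:
entry_for /mtmsdb1 | awk -F'#' '{ print "domain=" $1, "fileset=" $2 }'
```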
09-25-2007 07:48 PM
Re: Mount Drive
# ls -lR /etc/fdmns
# disklabel -r /dev/rrz12c
# disklabel -r /dev/rrz9c
# disklabel -r /dev/rrz11c
Did you remove the faulty drive and install the new one in the same slot the faulty drive occupied?
I am not sure I understand what steps you took.
Can you describe them in more detail?
09-25-2007 08:08 PM
Re: Mount Drive
Yes, I replaced the drive in the same bay. disklabel would not run; I was talked through a number of commands to resolve the issue.
09-26-2007 12:52 AM
Re: Mount Drive
Could you list the steps that resolved this?
Many thanks.
09-26-2007 01:25 AM
Re: Mount Drive
Do you have a similar problem? I will try to post back today; I am just recovering the system from tape.
Phil.
09-26-2007 01:38 AM
Re: Mount Drive
Currently I have no issues, but if I do, it would be nice to know the steps.
I'm glad you're up and running!
Stay well
09-30-2007 07:15 PM
Re: Mount Drive
This example is for the replacement of a non-bootable rz11 disk. You can simply modify the rz number for other disks.
1) Install a new disk in the same bay as the failed drive.
2) scu
3) scan edt    # list disks (informational)
4) show edt    # show disks; identify the SCSI ID of the new disk (informational)
5) disklabel -wr /dev/rrz11a    # writes a new disklabel on rz11
6) mkfdmn -o -x2048 /dev/rz11c mydomain    # create the file domain, using the previously used domain name
7) mkfset mydomain db1    # create the fileset name
8) mount /mydomaindb1    # mount the disk
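The steps above can be sketched as a dry-run script. The Tru64 commands (scu, disklabel, mkfdmn, mkfset) are only printed, never executed, so nothing here touches a real disk; the device, domain, and fileset names are taken from this thread and should be adapted to your system:

```shell
#!/bin/sh
# Dry-run of the recovery sequence: each Tru64 command is printed
# rather than run (these tools only exist on Tru64 systems).
DISK=rz11                 # replaced disk, as in this thread
DOMAIN=mydomain           # AdvFS domain name to recreate
FILESET=db1               # fileset within the domain
MOUNTPOINT=/mydomaindb1   # mount point with an existing fstab entry

plan() { echo "would run: $*"; }

plan disklabel -wr "/dev/r${DISK}a"               # write a fresh disklabel
plan mkfdmn -o -x2048 "/dev/${DISK}c" "$DOMAIN"   # recreate the file domain
plan mkfset "$DOMAIN" "$FILESET"                  # recreate the fileset
plan mount "$MOUNTPOINT"                          # mount via the fstab entry
```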
Hope this is of help to someone.
Regards,
Phil.