RAID problem (Operating System - OpenVMS forum)
03-15-2004 07:59 PM
I am trying to configure a RAID5 disk using ORCA. I have fourteen 146.8 GB disks on a
5302 controller in a DS15 server, running OpenVMS 7.3-1.
I configured it as one big RAID with all disks included and get a 1777.6 GB RAID.
When I try to initialize the RAID in OpenVMS I get a message from the system:
%INIT-F-IVADDR, invalid media address
SHOW DEVICE/FULL works well:
---------------
Disk XXXXXX$DKC0:, device type COMPAQ LOGICAL VOLUME, is online, file-oriented
device, shareable, served to cluster via MSCP Server, error logging is
enabled.
Error count 0 Operations completed 9
Owner process "" Owner UIC [SYSTEM]
Owner process ID 00000000 Dev Prot S:RWPL,O:RWPL,G:R,W
Reference count 0 Default buffer size 512
Total blocks 3727729008 Sectors per track 255
Total cylinders 57328 Tracks per cylinder 255
---------------
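As a sanity check on the listing above, the reported block count converts to the capacity ORCA advertised (assuming the standard 512-byte OpenVMS disk block):

```python
# Convert the SHOW DEVICE/FULL block count to capacity
# (assuming the standard 512-byte OpenVMS disk block).
BLOCK_SIZE = 512           # bytes per block
total_blocks = 3727729008  # "Total blocks" reported above

capacity_bytes = total_blocks * BLOCK_SIZE
capacity_gib = capacity_bytes / 2**30
print(f"{capacity_gib:.1f} GiB")  # ~1777.5 GiB, matching ORCA's 1777.6 GB
```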
Any clue?
Regards, Ola
03-15-2004 08:25 PM
Solution
AFAIK, there is a limit of 1 G blocks/volume.
That gives 0.5 TB = 512 GB max.
Seems you will have to split up into "small" enough sizes...
Maybe someone in engineering can confirm or contradict this?
Jan
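Jan's arithmetic checks out under the usual assumption of 512-byte blocks:

```python
# Sketch of the arithmetic above: a limit of 1 G (2**30) blocks per
# volume, at 512 bytes per block, caps a volume at 512 GiB (0.5 TB).
BLOCK_SIZE = 512
max_blocks = 2**30

max_bytes = max_blocks * BLOCK_SIZE
print(max_bytes // 2**30, "GiB")  # 512 GiB
```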
03-15-2004 11:58 PM
Re: RAID problem
03-16-2004 02:28 AM
Re: RAID problem
Did you ever consider backup, or worse, restore of this 1.7 TB volume you intend to create? You really might be much better off with a few slightly smaller volumes.
Greetings, Martin
03-16-2004 02:41 AM
Re: RAID problem
Thank you very much for the answers.
Regards, Ola
03-16-2004 03:22 AM
Re: RAID problem
- Build multiple smaller arrays and carve one logical disk from each array. Of course, the RAID overhead will increase, and that leaves less space for the user.
- Keep one large array and carve multiple logical disks from it. I guess you will lose a little space for additional metadata, but that is nothing in comparison to the first choice. There is one downside with this idea that I have seen on the MSA1000 storage array, which is also based on Smart Array technology. I have no direct experience with the SA5302A controller, but I believe it has that limitation, too:
If you create multiple logical disks within an array, you can only delete them in reverse order.
Anyway, please let us know what you decided and share your experience with us!
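The overhead tradeoff between the two options can be sketched for this thread's fourteen 146.8 GB disks, using a simple RAID-5 model where each array gives up one disk's worth of space to parity (controller metadata overhead ignored; the exact figures are an illustration, not from the thread):

```python
# Rough usable-capacity comparison (simple RAID-5 model: each array
# sacrifices one disk's worth of capacity to parity; any controller
# metadata overhead is ignored).
DISK_GB = 146.8

def raid5_usable(disks_per_array, num_arrays):
    """Usable capacity in GB across num_arrays RAID-5 arrays."""
    return (disks_per_array - 1) * DISK_GB * num_arrays

one_big = raid5_usable(14, 1)   # one 14-disk array
two_small = raid5_usable(7, 2)  # two 7-disk arrays

print(f"one 14-disk array : {one_big:.1f} GB usable")
print(f"two 7-disk arrays : {two_small:.1f} GB usable")
```

One large array loses a single disk to parity; splitting into two arrays loses two, which is the extra "RAID overhead" the first option pays.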
03-16-2004 02:09 PM
Re: RAID problem
The EVA will make up to a 2 TB LUN, but a VMS system will not recognize anything larger than 1023 GB. 1024 doesn't work... we tried it. :-)
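One plausible explanation for that exact boundary (an assumption on my part; the post only reports the observed limit) is a signed 32-bit block-number field:

```python
# Why 1023 GB might work while 1024 GB does not: with 512-byte blocks,
# 1 GiB is 2**21 blocks, so 1024 GiB is exactly 2**31 blocks -- one more
# than fits in a signed 32-bit block-number field. (This is a guess at
# the mechanism, not something confirmed in the thread.)
BLOCKS_PER_GIB = 2**21         # 2**30 bytes / 512 bytes per block
SIGNED_32BIT_MAX = 2**31 - 1

print(1023 * BLOCKS_PER_GIB <= SIGNED_32BIT_MAX)  # True
print(1024 * BLOCKS_PER_GIB <= SIGNED_32BIT_MAX)  # False
```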
03-17-2004 03:39 AM
Re: RAID problem
Who would have thought about such big volumes ten years ago? Until VAX/VMS V6.0, the volume size was limited to 2**24 blocks (8.5 GBytes).
If I recall correctly, that was due to a limit in the Volume Control Block: 3*8 = 24, and the upper byte of the longword was used for something else.
Several people ran into problems when they tried 9.1 GByte disks on VAX/VMS V5.5-2 and filled them up near the limit. There was a wrap-around, and data got corrupted or the system crashed.
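The wrap-around described above follows directly from the numbers (again assuming 512-byte blocks):

```python
# The old VAX/VMS limit: 2**24 blocks of 512 bytes is 2**33 bytes
# (~8.6 decimal GB, the "8.5 GBytes" cited above). A 9.1 GB disk has
# more blocks than a 24-bit block-number field can address, so block
# numbers past the limit wrapped around.
BLOCK_SIZE = 512
limit_blocks = 2**24

limit_gb = limit_blocks * BLOCK_SIZE / 10**9    # decimal GB, as disk vendors count
disk_blocks = int(9.1e9 // BLOCK_SIZE)          # blocks on a 9.1 GB disk

print(f"limit: {limit_gb:.1f} GB")
print("disk exceeds 24-bit block numbers:", disk_blocks > limit_blocks)
```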