<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: MSA1000 on VMS in Operating System - OpenVMS</title>
    <link>https://community.hpe.com/t5/operating-system-openvms/msa1000-on-vms/m-p/3531979#M68115</link>
    <description>Jan wrote . . .&lt;BR /&gt;&lt;BR /&gt;Yes, it is about time for Engineering to find some clever way to lift another never-expected limit!&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;It's not likely to happen unless a fair number of customers complain, and state *why* they must have logical volumes larger than 1TB.&lt;BR /&gt;It would be a very big project, and it's not clear to product management that it's a worthwhile thing to do -- there are a lot of interesting projects, and we don't have the staffing to do everything.  As of now, the feeling is that 1TB is sufficient.&lt;BR /&gt;&lt;BR /&gt;If you disagree, please make an official plea to your support centres!  Posting notes in public fora like here and comp.os.vms does not help; these places are not viewed with&lt;BR /&gt;any regularity by product management.&lt;BR /&gt;&lt;BR /&gt;Your voices on this matter count more than ours!&lt;BR /&gt;&lt;BR /&gt;                    -- Rob&lt;BR /&gt;&lt;BR /&gt;</description>
    <pubDate>Tue, 26 Apr 2005 21:57:12 GMT</pubDate>
    <dc:creator>Robert Brooks_1</dc:creator>
    <dc:date>2005-04-26T21:57:12Z</dc:date>
    <item>
      <title>MSA1000 on VMS</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/msa1000-on-vms/m-p/3531972#M68108</link>
      <description>Hi, After several frustrating days struggling with an MSA1000 + SAN switch 2/8 on a 2P ES47 I have finally managed to copy a bootable image of patched OpenVMS 7.3-2 onto the mirrored system disk in the MSA1000. However, there is also a 7 disk RAID 5 set in the MSA which VMS can see (with show dev dga) but is unable to init. init $1$DGA200: gets me an error message '%INIT-F-IVADDR, invalid media address'. HP tech support suggest that I haven't set up a connection for this drive, but I believe, from everything I've read in that atrociously badly written WWIDMGR Users' Manual, that I only need to set up a connection for a bootable disk. I can't fathom how to set up two connections to one single port HBA using the CLI command set. Anyone got any ideas? I could attach all my console captures if that would help.&lt;BR /&gt;&lt;BR /&gt;Cheers&lt;BR /&gt;&lt;BR /&gt;Chris Smith</description>
      <pubDate>Tue, 26 Apr 2005 06:19:56 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/msa1000-on-vms/m-p/3531972#M68108</guid>
      <dc:creator>Chris Smith_23</dc:creator>
      <dc:date>2005-04-26T06:19:56Z</dc:date>
    </item>
    <item>
      <title>Re: MSA1000 on VMS</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/msa1000-on-vms/m-p/3531973#M68109</link>
      <description>Hello Chris,&lt;BR /&gt;&lt;BR /&gt;the first thing I would check is whether the connection profile is set to OpenVMS:&lt;BR /&gt;&lt;BR /&gt;CLI&amp;gt; show connections&lt;BR /&gt;Connection Name: xyz_PGA0&lt;BR /&gt;___Host WWNN = 20000000-C9382xyz&lt;BR /&gt;___Host WWPN = 10000000-C9382xyz&lt;BR /&gt;___Profile Name = OpenVMS&lt;BR /&gt;___Unit Offset = 0&lt;BR /&gt;___Controller 1 Port 1 Status = Online&lt;BR /&gt;&lt;BR /&gt;(the following command is entered on one line)&lt;BR /&gt;CLI&amp;gt; add connection abc_PGB0 WWPN=10000000-C9380abc profile=openvms&lt;BR /&gt;Connection has been added successfully.&lt;BR /&gt;Profile openvms is set for the new connection.&lt;BR /&gt;&lt;BR /&gt;CLI&amp;gt;&lt;BR /&gt;&lt;BR /&gt;You're right that you only configure a console device with WWIDMGR for boot and dump purposes, but not for all disks (there isn't even enough space to configure more than 4 LUNs...).</description>
      <pubDate>Tue, 26 Apr 2005 07:48:51 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/msa1000-on-vms/m-p/3531973#M68109</guid>
      <dc:creator>Uwe Zessin</dc:creator>
      <dc:date>2005-04-26T07:48:51Z</dc:date>
    </item>
    <item>
      <title>Re: MSA1000 on VMS</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/msa1000-on-vms/m-p/3531974#M68110</link>
      <description>Hi Ewi, As you can see from the attachment, I have a valid connection set up. The attachment also shows the output from a 'show dev dga/full'. Both logical drives are visible but the attempt to initialise the 2nd drive fails.&lt;BR /&gt;&lt;BR /&gt;I have been told that the boot device must be dga0: otherwise the 2nd drive will not be visible. Is that the case?&lt;BR /&gt;&lt;BR /&gt;Cheers, Chris</description>
      <pubDate>Tue, 26 Apr 2005 10:40:47 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/msa1000-on-vms/m-p/3531974#M68110</guid>
      <dc:creator>Chris Smith_23</dc:creator>
      <dc:date>2005-04-26T10:40:47Z</dc:date>
    </item>
    <item>
      <title>Re: MSA1000 on VMS</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/msa1000-on-vms/m-p/3531975#M68111</link>
      <description>"Ewi"? Wow, you have a strange keyboard mapping, or is this line noise ;-)&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Is there a chance that the second logical disk is larger than 1 TeraByte&lt;BR /&gt;(&amp;gt;= 2^31 blocks)? Unfortunately, that doesn't work on OpenVMS :-(&lt;BR /&gt;&lt;BR /&gt;Otherwise, you really need a certain patch level on OpenVMS so that it works with the latest firmware revision on the MSA1000.&lt;BR /&gt;&lt;BR /&gt;And no, you have been told wrong. I have implemented several OpenVMS systems that boot from devices other than DGA0: (one is a cluster that boots from DGA101:) and they happily detect the remaining devices.&lt;BR /&gt;&lt;BR /&gt;You might have to assign identifiers to the controllers, too -- the documentation I have seen in this area so far is conflicting: some says you need it, some says you don't.&lt;BR /&gt;&lt;BR /&gt;If I recall correctly, the command is:&lt;BR /&gt;CLI&amp;gt; set this_controller_id 201&lt;BR /&gt;CLI&amp;gt; set other_controller_id 202</description>
      <pubDate>Tue, 26 Apr 2005 11:06:11 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/msa1000-on-vms/m-p/3531975#M68111</guid>
      <dc:creator>Uwe Zessin</dc:creator>
      <dc:date>2005-04-26T11:06:11Z</dc:date>
    </item>
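Uwe's 1 TB figure above follows directly from 512-byte-block arithmetic: 2^31 blocks of 512 bytes is exactly 1 TiB, and the thread's 7 x 300 GB RAID-5 set (one disk's worth of capacity going to parity) comes out well over that. A quick sketch of the numbers:

```python
# OpenVMS volume size ceiling discussed in the thread: 2**31 blocks
# of 512 bytes each, i.e. exactly 1 TiB.
BLOCK_SIZE = 512
MAX_BLOCKS = 2**31
max_volume_bytes = MAX_BLOCKS * BLOCK_SIZE
print(max_volume_bytes)  # 1099511627776 (1 TiB)

# The RAID-5 set from the thread: 7 disks of 300 GB, with one disk's
# worth of capacity consumed by parity, leaves 6 * 300 GB usable.
disks, disk_gb = 7, 300
usable_bytes = (disks - 1) * disk_gb * 10**9
print(usable_bytes > max_volume_bytes)  # True -- the ~1.8 TB set exceeds the limit
```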
    <item>
      <title>Re: MSA1000 on VMS</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/msa1000-on-vms/m-p/3531976#M68112</link>
      <description>Sorry Uwe, I was repeating your name to myself as I brought the reply page up, and by the time I came to type in my message I only had a phonetic representation of your name in my memory. Symptom of incipient old age!&lt;BR /&gt;&lt;BR /&gt;OK, so I don't have to reconfigure the boot drive to DGA0: Thanks. But, yes, the other drive is 7 * 300GB in RAID 5, so approx 6 * 300GB or 1.8TB. Someone else suggested that as the drive had not been written to by the time I issued the init command, the actual initialisation of the RAID set would take quite a long time, and that if I were to try it again now it could have completed and I wouldn't see the problem. I can't get along to the site until Thursday morning so I can't test that theory very easily.&lt;BR /&gt;&lt;BR /&gt;Likewise, thanks for your thoughts on controller IDs.&lt;BR /&gt;&lt;BR /&gt;I'll look out the patches. I've already had to apply 4 patches for the fibre HBA&lt;BR /&gt;vms732_pcsi-v0100&lt;BR /&gt;vms732_update-v0300&lt;BR /&gt;vms732_sys-v0700&lt;BR /&gt;vms732_fibre_scsi-v0400&lt;BR /&gt;(in that order) with reboots after the 2nd, 3rd &amp;amp; 4th patches.&lt;BR /&gt;&lt;BR /&gt;I will report back any progress after my site visit on Thursday.&lt;BR /&gt;&lt;BR /&gt;Cheers&lt;BR /&gt;&lt;BR /&gt;Chris</description>
      <pubDate>Tue, 26 Apr 2005 11:29:11 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/msa1000-on-vms/m-p/3531976#M68112</guid>
      <dc:creator>Chris Smith_23</dc:creator>
      <dc:date>2005-04-26T11:29:11Z</dc:date>
    </item>
    <item>
      <title>Re: MSA1000 on VMS</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/msa1000-on-vms/m-p/3531977#M68113</link>
      <description>Chris,&lt;BR /&gt;&lt;BR /&gt;as Uwe expected and you confirmed: the RAID set is simply too big. NO way to get it working!&lt;BR /&gt;You will _HAVE_ to split it up into units of &amp;lt; 1 TB.&lt;BR /&gt;&lt;BR /&gt;Yes, it is about time for Engineering to find some clever way to lift another never-expected limit!&lt;BR /&gt;&lt;BR /&gt;Proost.&lt;BR /&gt;&lt;BR /&gt;Have one on me.&lt;BR /&gt;&lt;BR /&gt;jpe &lt;BR /&gt;</description>
      <pubDate>Tue, 26 Apr 2005 11:38:29 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/msa1000-on-vms/m-p/3531977#M68113</guid>
      <dc:creator>Jan van den Ende</dc:creator>
      <dc:date>2005-04-26T11:38:29Z</dc:date>
    </item>
    <item>
      <title>Re: MSA1000 on VMS</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/msa1000-on-vms/m-p/3531978#M68114</link>
      <description>No offense taken. I understand that my name can be troublesome to spell for non-Germans.&lt;BR /&gt;&lt;BR /&gt;Yes, that's too large for OpenVMS. You can, however, keep the disk array and create _two_ logical disks on those 7 physical disks, each about 900 GigaBytes. Then you could create a multivolume set on OpenVMS. The parity initialization should not affect the ability to write to the logical disk. It is rather the opposite: the MSA1000 is waiting for a first write from the host before it starts the initialization!</description>
      <pubDate>Tue, 26 Apr 2005 11:39:06 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/msa1000-on-vms/m-p/3531978#M68114</guid>
      <dc:creator>Uwe Zessin</dc:creator>
      <dc:date>2005-04-26T11:39:06Z</dc:date>
    </item>
    <item>
      <title>Re: MSA1000 on VMS</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/msa1000-on-vms/m-p/3531979#M68115</link>
      <description>Jan wrote . . .&lt;BR /&gt;&lt;BR /&gt;Yes, it is about time for Engineering to find some clever way to lift another never-expected limit!&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;It's not likely to happen unless a fair number of customers complain, and state *why* they must have logical volumes larger than 1TB.&lt;BR /&gt;It would be a very big project, and it's not clear to product management that it's a worthwhile thing to do -- there are a lot of interesting projects, and we don't have the staffing to do everything.  As of now, the feeling is that 1TB is sufficient.&lt;BR /&gt;&lt;BR /&gt;If you disagree, please make an official plea to your support centres!  Posting notes in public fora like here and comp.os.vms does not help; these places are not viewed with&lt;BR /&gt;any regularity by product management.&lt;BR /&gt;&lt;BR /&gt;Your voices on this matter count more than ours!&lt;BR /&gt;&lt;BR /&gt;                    -- Rob&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Tue, 26 Apr 2005 21:57:12 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/msa1000-on-vms/m-p/3531979#M68115</guid>
      <dc:creator>Robert Brooks_1</dc:creator>
      <dc:date>2005-04-26T21:57:12Z</dc:date>
    </item>
    <item>
      <title>Re: MSA1000 on VMS</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/msa1000-on-vms/m-p/3531980#M68116</link>
      <description>Robert,&lt;BR /&gt;&lt;BR /&gt;thanks for your explanation.&lt;BR /&gt;&lt;BR /&gt;From my (our) perspective, the 1 TB limit is not a problem right now, since we do not, and do not intend to, bind disks into big RAID sets.&lt;BR /&gt;&lt;BR /&gt;But looking at the real world around us, with some extrapolation into the future, is something Engineering should also do sometimes.&lt;BR /&gt;&lt;BR /&gt;Look at the figures for the biggest available _SINGLE_ disks in recent years.  And place your bet on the date when THAT number will pass 1 TB...&lt;BR /&gt;By that time, it would be very desirable for the whole VMS community if those devices were not ruled out! &lt;BR /&gt;&lt;BR /&gt;Proost.&lt;BR /&gt;&lt;BR /&gt;Have one on me.&lt;BR /&gt;&lt;BR /&gt;jpe</description>
      <pubDate>Wed, 27 Apr 2005 04:26:56 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/msa1000-on-vms/m-p/3531980#M68116</guid>
      <dc:creator>Jan van den Ende</dc:creator>
      <dc:date>2005-04-27T04:26:56Z</dc:date>
    </item>
    <item>
      <title>Re: MSA1000 on VMS</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/msa1000-on-vms/m-p/3531981#M68117</link>
      <description>Hi Uwe &amp;amp; Jan, Thanks for your input. I split the 1.8TB RAID set into 2 x 0.9TB (approx) sets and VMS is quite happy with that. show dev dg/full gets me the actual number of blocks, while show dev dg displays a series of asterisks for the size. Presumably the field specifier for the size has overflowed. That doesn't seem to matter, as I can now initialise and mount the two drives. I will leave it to the users whether they want the two drives combined into one logical volume. I'm just happy to have the system running to a point where we can load all the users' files and accounts.&lt;BR /&gt;Thanks again &amp;amp; cheers,&lt;BR /&gt;Chris</description>
      <pubDate>Thu, 28 Apr 2005 06:12:37 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/msa1000-on-vms/m-p/3531981#M68117</guid>
      <dc:creator>Chris Smith_23</dc:creator>
      <dc:date>2005-04-28T06:12:37Z</dc:date>
    </item>
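Chris's asterisks are a display-width overflow, not a device problem: each ~0.9 TB volume holds roughly 1.8 billion 512-byte blocks, a 10-digit count that no longer fits the fixed column SHOW DEVICE uses. A sketch of the effect (the 9-column field width here is an illustrative assumption, not the documented DCL width):

```python
# Each ~0.9 TB logical disk, counted in 512-byte blocks -- a 10-digit number.
blocks = 900 * 10**9 // 512
print(blocks)            # 1757812500

# Fixed-width formatters commonly print asterisks when a value is too
# wide for its column; hypothetical 9-column field for illustration.
def fixed_width(value, width=9):
    s = str(value)
    return s if len(s) <= width else "*" * width

print(fixed_width(blocks))   # *********
```

This is also why SHOW DEVICE/FULL still shows the real block count: it uses a wider (or free-form) field for the same value.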
    <item>
      <title>Re: MSA1000 on VMS</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/msa1000-on-vms/m-p/3531982#M68118</link>
      <description>Remember only to bind an empty volume to the root volume of a multivolume set.</description>
      <pubDate>Thu, 28 Apr 2005 07:10:40 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/msa1000-on-vms/m-p/3531982#M68118</guid>
      <dc:creator>Uwe Zessin</dc:creator>
      <dc:date>2005-04-28T07:10:40Z</dc:date>
    </item>
    <item>
      <title>Re: MSA1000 on VMS</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/msa1000-on-vms/m-p/3531983#M68119</link>
      <description>Understood. Thanks.</description>
      <pubDate>Thu, 28 Apr 2005 07:18:59 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/msa1000-on-vms/m-p/3531983#M68119</guid>
      <dc:creator>Chris Smith_23</dc:creator>
      <dc:date>2005-04-28T07:18:59Z</dc:date>
    </item>
    <item>
      <title>Re: MSA1000 on VMS</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/msa1000-on-vms/m-p/3531984#M68120</link>
      <description>On 7.3-2 and later a SET PROC/UNIT=BYTES may help with the show dev/nofull&lt;BR /&gt;&lt;BR /&gt;   Tim</description>
      <pubDate>Thu, 28 Apr 2005 19:10:23 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/msa1000-on-vms/m-p/3531984#M68120</guid>
      <dc:creator>Tim Hughes_3</dc:creator>
      <dc:date>2005-04-28T19:10:23Z</dc:date>
    </item>
    <item>
      <title>Re: MSA1000 on VMS</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/msa1000-on-vms/m-p/3531985#M68121</link>
      <description>Well, just had a phone conversation with _another_ customer who barely went over that limit (RAID-5 set with 1.1 TB). They intend to freeze their VMS system (does that sound familiar?), so they will not profit from any future work (should that _ever_ happen), but I have suggested that he complain in the interest of other users in the future. We'll see...</description>
      <pubDate>Fri, 29 Apr 2005 04:56:33 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/msa1000-on-vms/m-p/3531985#M68121</guid>
      <dc:creator>Uwe Zessin</dc:creator>
      <dc:date>2005-04-29T04:56:33Z</dc:date>
    </item>
  </channel>
</rss>

