<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: MS Cluster problem in HPE EVA Storage</title>
    <link>https://community.hpe.com/t5/hpe-eva-storage/ms-cluster-problem/m-p/3769769#M18688</link>
    <description>Hi Matthijjs,&lt;BR /&gt;&lt;BR /&gt;I tried the move group again, and on the second node I get the following error:&lt;BR /&gt;&lt;BR /&gt;Source: ClusSVC&lt;BR /&gt;Event ID: 1069&lt;BR /&gt;Description: Cluster resource "storage" failed.&lt;BR /&gt;&lt;BR /&gt;There are no further messages detailing the cluster error.&lt;BR /&gt;&lt;BR /&gt;After the Cluster IP Address resource failed, I went to the second cluster node and tried to open the Cluster Administrator tool, and I got the error:&lt;BR /&gt;&lt;BR /&gt;"The cluster service on node "CVTPR01" cannot be started. Error Id: 1722"&lt;BR /&gt;&lt;BR /&gt;I checked the WINS and DHCP resource configuration and everything looks OK.&lt;BR /&gt;&lt;BR /&gt;Do the errors above mean anything to you?&lt;BR /&gt;&lt;BR /&gt;Thanks.</description>
    <pubDate>Tue, 25 Apr 2006 11:03:52 GMT</pubDate>
    <dc:creator>Bruno Pina</dc:creator>
    <dc:date>2006-04-25T11:03:52Z</dc:date>
    <item>
      <title>MS Cluster problem</title>
      <link>https://community.hpe.com/t5/hpe-eva-storage/ms-cluster-problem/m-p/3769762#M18681</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;I'm having problems with an MS Cluster on Windows 2000.&lt;BR /&gt;When I try to move a group from one node to another, the IP address resource fails.&lt;BR /&gt;I have already checked Microsoft support and nothing has helped solve the problem.&lt;BR /&gt;The nodes are connected to an MSA1000. In the Event Viewer I get warnings from the Foundation Agents saying that "The Host Remote Alerter detected an error while attempting to retrieve data from key = CompaqHost\Cluster\Component\ResourceTable in the registry.  The data contains the error code."&lt;BR /&gt;Could anyone help me?&lt;BR /&gt;Sorry about the poor English.</description>
      <pubDate>Tue, 11 Apr 2006 06:07:50 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-eva-storage/ms-cluster-problem/m-p/3769762#M18681</guid>
      <dc:creator>Bruno Pina</dc:creator>
      <dc:date>2006-04-11T06:07:50Z</dc:date>
    </item>
    <item>
      <title>Re: MS Cluster problem</title>
      <link>https://community.hpe.com/t5/hpe-eva-storage/ms-cluster-problem/m-p/3769763#M18682</link>
      <description>Was the cluster working before? It seems that it cannot read some registry keys; try using ClusterRecovery, available in the Resource Kit.&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://www.microsoft.com/downloads/details.aspx?FamilyID=2be7ebf0-a408-4232-9353-64aafd65306d&amp;amp;displaylang=en" target="_blank"&gt;http://www.microsoft.com/downloads/details.aspx?FamilyID=2be7ebf0-a408-4232-9353-64aafd65306d&amp;amp;displaylang=en&lt;/A&gt;</description>
      <pubDate>Tue, 11 Apr 2006 08:23:58 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-eva-storage/ms-cluster-problem/m-p/3769763#M18682</guid>
      <dc:creator>Ivan Ferreira</dc:creator>
      <dc:date>2006-04-11T08:23:58Z</dc:date>
    </item>
    <item>
      <title>Re: MS Cluster problem</title>
      <link>https://community.hpe.com/t5/hpe-eva-storage/ms-cluster-problem/m-p/3769764#M18683</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;I didn't try the ClusterRecovery tool, but I ran a test using clusdiag.exe and couldn't reach any conclusion, because the test stopped when the application tried to move the group.&lt;BR /&gt;When I move the group, the "Cluster IP Address" resource fails and the resource that contains the disks ("storage") goes to Online Pending.&lt;BR /&gt;In the first node's cluster.log I have the following messages:&lt;BR /&gt;&lt;BR /&gt;0000036c.00001514::2006/04/18-19:21:25.117 [FM] FmpDoMoveGroup: Entry&lt;BR /&gt;0000036c.00001514::2006/04/18-19:21:25.117 [FM] FmpMoveGroup: Entry&lt;BR /&gt;0000036c.00001514::2006/04/18-19:21:25.117 [FM] FmpMoveGroup: Moving group 00fdd572-f069-410f-9210-f2405262a04c to node 2 (2)&lt;BR /&gt;0000036c.00001514::2006/04/18-19:21:25.117 [FM] FmpOfflineResourceList: Bring non quorum resource offline&lt;BR /&gt;0000036c.00001514::2006/04/18-19:21:25.117 [FM] FmpOfflineResource: Offline resource &lt;Cluster IP Address&gt; &lt;E3C953E9-FCB8-4617-8EEB-FCB5A8105639&gt;&lt;BR /&gt;0000036c.00001514::2006/04/18-19:21:25.117 [FM] FmpOfflineResource: CVTPRCL01 depends on Cluster IP Address. Shut down first.&lt;BR /&gt;0000036c.00001514::2006/04/18-19:21:25.117 [FM] FmpOfflineResource: Offline resource &lt;CVTPRCL01&gt; &lt;B6E4FC7A-90D4-4494-B29F-E0693000ED29&gt;&lt;BR /&gt;0000036c.00001514::2006/04/18-19:21:25.117 [FM] FmpOfflineResource: dhcp depends on CVTPRCL01. 
Shut down first.&lt;BR /&gt;0000036c.00001514::2006/04/18-19:21:25.117 [FM] FmpOfflineResource: Offline resource &lt;DHCP&gt; &lt;B18E5E6E-E69B-4AE0-A913-9AEE92CF5D2E&gt;&lt;BR /&gt;0000036c.00001514::2006/04/18-19:21:25.117 [FM] FmpRmOfflineResource: InterlockedIncrement on gdwQuoBlockingResources for resource b18e5e6e-e69b-4ae0-a913-9aee92cf5d2e&lt;BR /&gt;00000af4.000007f8::2006/04/18-19:21:25.117 DHCP Service &lt;DHCP&gt;: Offline request.&lt;BR /&gt;0000036c.00001514::2006/04/18-19:21:25.117 [FM] FmpRmOfflineResource: RmOffline() for b18e5e6e-e69b-4ae0-a913-9aee92cf5d2e returned error 997&lt;BR /&gt;0000036c.00001514::2006/04/18-19:21:25.117 [GUM] GumSendUpdate:  Locker waiting  type 0 context 8&lt;BR /&gt;0000036c.00001514::2006/04/18-19:21:25.117 [GUM] Thread 0x1514 UpdateLock wait on Type 0&lt;BR /&gt;0000036c.00001514::2006/04/18-19:21:25.117 [GUM] DoLockingUpdate successful, lock granted to 1&lt;BR /&gt;0000036c.00001514::2006/04/18-19:21:25.117 [GUM] GumSendUpdate: Locker dispatching seq 236363 type 0 context 8&lt;BR /&gt;0000036c.00001514::2006/04/18-19:21:25.117 [GUM] GumSendUpdate: Dispatching seq 236363 type 0 context 8 to node 2&lt;BR /&gt;0000036c.00001514::2006/04/18-19:21:25.117 [GUM] GumSendUpdate: Locker updating seq 236363 type 0 context 8&lt;BR /&gt;0000036c.00001514::2006/04/18-19:21:25.132 [GUM] GumpDoUnlockingUpdate releasing lock ownership&lt;BR /&gt;0000036c.00001514::2006/04/18-19:21:25.132 [GUM] GumSendUpdate: completed update seq 236363 type 0 context 8&lt;BR /&gt;0000036c.00001514::2006/04/18-19:21:25.132 [FM] FmpPropagateResourceState: resource b18e5e6e-e69b-4ae0-a913-9aee92cf5d2e pending event.&lt;BR /&gt;0000036c.00001514::2006/04/18-19:21:25.132 [FM] FmpOfflineResource for CVTPRCL01 marked as waiting.&lt;BR /&gt;0000036c.00001514::2006/04/18-19:21:25.132 [GUM] GumSendUpdate:  Locker waiting  type 0 context 8&lt;BR /&gt;0000036c.00001514::2006/04/18-19:21:25.132 [GUM] Thread 0x1514 UpdateLock wait on Type 0&lt;BR 
/&gt;0000036c.00001514::2006/04/18-19:21:25.132 [GUM] DoLockingUpdate successful, lock granted to 1&lt;BR /&gt;0000036c.00001514::2006/04/18-19:21:25.132 [GUM] GumSendUpdate: Locker dispatching seq 236364 type 0 context 8&lt;BR /&gt;0000036c.00001514::2006/04/18-19:21:25.132 [GUM] GumSendUpdate: Dispatching seq 236364 type 0 context 8 to node 2&lt;BR /&gt;0000036c.00001514::2006/04/18-19:21:25.132 [GUM] GumSendUpdate: Locker updating seq 236364 type 0 context 8&lt;BR /&gt;0000036c.00001514::2006/04/18-19:21:25.132 [GUM] GumpDoUnlockingUpdate releasing lock ownership&lt;BR /&gt;0000036c.00001514::2006/04/18-19:21:25.132 [GUM] GumSendUpdate: completed update seq 236364 type 0 context 8&lt;BR /&gt;0000036c.00001514::2006/04/18-19:21:25.132 [FM] FmpPropagateResourceState: resource b6e4fc7a-90d4-4494-b29f-e0693000ed29 pending event.&lt;BR /&gt;0000036c.00001514::2006/04/18-19:21:25.132 [FM] FmpOfflineResource: CVTPRFS01 depends on CVTPRCL01. Shut down first.&lt;BR /&gt;0000036c.00001514::2006/04/18-19:21:25.132 [FM] FmpOfflineResource: Offline resource &lt;CVTPRFS01&gt; &amp;lt;8b4d3234-3023-4830-abb6-99abd9544b6f&amp;gt;&lt;BR /&gt;0000036c.00001514::2006/04/18-19:21:25.132 [FM] FmpOfflineResource: Home Dirs depends on CVTPRFS01. 
Shut down first.&lt;BR /&gt;0000036c.00001514::2006/04/18-19:21:25.132 [FM] FmpOfflineResource: Offline resource &lt;HOME dirs=""&gt; &lt;B3C3E79C-BD85-4314-8A3A-214E18C4923C&gt;&lt;BR /&gt;0000036c.00001514::2006/04/18-19:21:25.132 [FM] FmpRmOfflineResource: InterlockedIncrement on gdwQuoBlockingResources for resource b3c3e79c-bd85-4314-8a3a-214e18c4923c&lt;BR /&gt;00000af4.000014e0::2006/04/18-19:21:25.132 DHCP Service &lt;DHCP&gt;: OfflineThread: retrying...&lt;BR /&gt;0000036c.00001514::2006/04/18-19:21:25.148 [FM] FmpRmOfflineResource: RmOffline() for b3c3e79c-bd85-4314-8a3a-214e18c4923c returned error 997&lt;BR /&gt;0000036c.00001514::2006/04/18-19:21:25.148 [GUM] GumSendUpdate:  Locker waiting  type 0 context 8&lt;BR /&gt;0000036c.00001514::2006/04/18-19:21:25.148 [GUM] Thread 0x1514 UpdateLock wait on Type 0&lt;BR /&gt;0000036c.00001514::2006/04/18-19:21:25.148 [GUM] DoLockingUpdate successful, lock granted to 1&lt;BR /&gt;0000036c.00001514::2006/04/18-19:21:25.148 [GUM] GumSendUpdate: Locker dispatching seq 236365 type 0 context 8&lt;BR /&gt;0000036c.00001514::2006/04/18-19:21:25.148 [GUM] GumSendUpdate: Dispatching seq 236365 type 0 context 8 to node 2&lt;BR /&gt;0000036c.00001514::2006/04/18-19:21:25.148 [GUM] GumSendUpdate: Locker updating seq 236365 type 0 context 8&lt;BR /&gt;0000036c.00001514::2006/04/18-19:21:25.148 [GUM] GumpDoUnlockingUpdate releasing lock ownership&lt;BR /&gt;0000036c.00001514::2006/04/18-19:21:25.148 [GUM] GumSendUpdate: completed update seq 236365 type 0 context 8&lt;BR /&gt;0000036c.00001514::2006/04/18-19:21:25.148 [FM] FmpPropagateResourceState: resource b3c3e79c-bd85-4314-8a3a-214e18c4923c pending event.&lt;BR /&gt;0000036c.00001514::2006/04/18-19:21:25.148 [FM] FmpOfflineResource for CVTPRFS01 marked as waiting.&lt;BR /&gt;0000036c.00001514::2006/04/18-19:21:25.148 [GUM] GumSendUpdate:  Locker waiting  type 0 context 8&lt;BR /&gt;0000036c.00001514::2006/04/18-19:21:25.148 [GUM] Thread 0x1514 UpdateLock wait on Type 0&lt;BR 
/&gt;0000036c.00001514::2006/04/18-19:21:25.148 [GUM] DoLockingUpdate successful, lock granted to 1&lt;BR /&gt;0000036c.00001514::2006/04/18-19:21:25.148 [GUM] GumSendUpdate: Locker dispatching seq 236366 type 0 context 8&lt;BR /&gt;0000036c.00001514::2006/04/18-19:21:25.148 [GUM] GumSendUpdate: Dispatching seq 236366 type 0 context 8 to node 2&lt;BR /&gt;0000036c.00001514::2006/04/18-19:21:25.148 [GUM] GumSendUpdate: Locker updating seq 236366 type 0 context 8&lt;BR /&gt;0000036c.00001514::2006/04/18-19:21:25.148 [GUM] GumpDoUnlockingUpdate releasing lock ownership&lt;BR /&gt;0000036c.00001514::2006/04/18-19:21:25.148 [GUM] GumSendUpdate: completed update seq 236366 type 0 context 8&lt;BR /&gt;0000036c.00001514::2006/04/18-19:21:25.148 [FM] FmpPropagateResourceState: resource 8b4d3234-3023-4830-abb6-99abd9544b6f pending event.&lt;BR /&gt;&lt;BR /&gt;Could someone explain the error 997?&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;0000036c.0000161c::2006/04/18-19:21:35.460 [FM] FmpOfflineResource: Offline resource &lt;STORAGE&gt; returned pending&lt;BR /&gt;0000036c.0000161c::2006/04/18-19:21:35.460 [FM] FmpCompleteMoveGroup: Exit, status = 997&lt;BR /&gt;00000af4.000004f8::2006/04/18-19:21:35.492 Physical Disk: PnP Event GUID_IO_VOLUME_MOUNT for K (Partition4) received.&lt;BR /&gt;00000af4.000004f8::2006/04/18-19:21:35.601 Physical Disk: PnP Event GUID_IO_VOLUME_DISMOUNT for 544504 received&lt;BR /&gt;00000af4.00001190::2006/04/18-19:21:35.601 Physical Disk &lt;STORAGE&gt;: [DiskArb]Successful read  (sector 12) [CVTPRDC01:5508] (0,d71f7786:01c662f4).&lt;BR /&gt;00000af4.00001190::2006/04/18-19:21:35.601 Physical Disk &lt;STORAGE&gt;: [DiskArb]Successful write (sector 12) [:0] (0,00000000:00000000).&lt;BR /&gt;00000af4.00001190::2006/04/18-19:21:35.601 Physical Disk &lt;STORAGE&gt;: [DiskArb] StopPersistentReservations is complete.&lt;BR /&gt;00000af4.00001190::2006/04/18-19:21:35.601 Physical Disk &lt;STORAGE&gt;: DisksDismountDrives: letter mask is 
00010580.&lt;BR /&gt;00000af4.000004f8::2006/04/18-19:21:35.601 Physical Disk: PnP Event GUID_IO_VOLUME_DISMOUNT for 560816 received&lt;BR /&gt;00000af4.00001190::2006/04/18-19:21:35.601 [RM] RmpSetResourceStatus, Posting state 3 notification for resource &lt;STORAGE&gt;&lt;BR /&gt;0000036c.00000484::2006/04/18-19:21:35.601 [FM] NotifyCallBackRoutine: enqueuing event&lt;BR /&gt;0000036c.000007fc::2006/04/18-19:21:35.601 [FM] FmpCreateResStateChangeHandler: Entry&lt;BR /&gt;0000036c.000007fc::2006/04/18-19:21:35.601 [FM] FmpCreateResStateChangeHandler: Exit, status 0&lt;BR /&gt;0000036c.00001190::2006/04/18-19:21:35.601 [FM] FmpHandleResStateChangeProc: Entry...&lt;BR /&gt;0000036c.00001190::2006/04/18-19:21:35.601 [CP] CppResourceNotify for resource storage&lt;BR /&gt;0000036c.00001190::2006/04/18-19:21:35.601 [FM] FmpHandleResourceTransition: Resource Name = 5f953181-fd6d-4681-9498-bba0a72f1f7c old state=130 new state=3&lt;BR /&gt;0000036c.00001190::2006/04/18-19:21:35.601 [GUM] GumSendUpdate:  Locker waiting  type 0 context 8&lt;BR /&gt;0000036c.00001190::2006/04/18-19:21:35.601 [GUM] Thread 0x1190 UpdateLock wait on Type 0&lt;BR /&gt;0000036c.00001190::2006/04/18-19:21:35.601 [GUM] DoLockingUpdate successful, lock granted to 1&lt;BR /&gt;0000036c.00001190::2006/04/18-19:21:35.601 [GUM] GumSendUpdate: Locker dispatching seq 236621 type 0 context 8&lt;BR /&gt;0000036c.00001190::2006/04/18-19:21:35.601 [GUM] GumSendUpdate: Dispatching seq 236621 type 0 context 8 to node 2&lt;BR /&gt;0000036c.00001190::2006/04/18-19:21:35.601 [GUM] GumSendUpdate: Locker updating seq 236621 type 0 context 8&lt;BR /&gt;0000036c.00001190::2006/04/18-19:21:35.601 [GUM] GumpDoUnlockingUpdate releasing lock ownership&lt;BR /&gt;0000036c.00001190::2006/04/18-19:21:35.601 [GUM] GumSendUpdate: completed update seq 236621 type 0 context 8&lt;BR /&gt;0000036c.00001190::2006/04/18-19:21:35.601 [FM] FmpPropagateResourceState: resource 5f953181-fd6d-4681-9498-bba0a72f1f7c offline event.&lt;BR 
/&gt;0000036c.00001190::2006/04/18-19:21:35.601 [FM] FmpOfflineWaitingTree: Entry for &lt;STORAGE&gt;.&lt;BR /&gt;0000036c.00001190::2006/04/18-19:21:35.601 [FM] OfflineWaitingResourceTree: Exit, status=0 for &lt;STORAGE&gt;.&lt;BR /&gt;0000036c.00001190::2006/04/18-19:21:35.601 [FM] FmpOfflineWaitingTree: Quorum resource is in the same group,Moving list=0x000dd430&lt;BR /&gt;0000036c.00001190::2006/04/18-19:21:35.601 [FM] FmpOfflineWaitingTree: bring quorum resource offline&lt;BR /&gt;0000036c.00001190::2006/04/18-19:21:35.601 [FM] FmpOfflineResource: Offline resource &lt;STORAGE&gt; &amp;lt;5f953181-fd6d-4681-9498-bba0a72f1f7c&amp;gt;&lt;BR /&gt;0000036c.00001190::2006/04/18-19:21:35.601 [DM] DmpQuoObjNotifyCb: Quorum resource offline/offlinepending/preoffline&lt;BR /&gt;0000036c.00001190::2006/04/18-19:21:35.601 [MM] MmSetQuorumOwner(0,0), old owner 0.&lt;BR /&gt;00000af4.000004f8::2006/04/18-19:21:35.601 Physical Disk &lt;STORAGE&gt;: Stop watching disk 4bd6d2c7&lt;BR /&gt;00000af4.000004f8::2006/04/18-19:21:35.601 Physical Disk &lt;STORAGE&gt;: RemoveDisk: disk 4bd6d2c7 not found&lt;BR /&gt;00000af4.00000310::2006/04/18-19:21:35.601 Physical Disk &lt;STORAGE&gt;: [DiskArb] StopPersistentReservations is called.&lt;BR /&gt;00000af4.00000310::2006/04/18-19:21:35.601 Physical Disk &lt;STORAGE&gt;: [DiskArb] StopPersistentReservations is complete.&lt;BR /&gt;0000036c.00001190::2006/04/18-19:21:35.601 [CP] CppResourceNotify for resource storage&lt;BR /&gt;0000036c.00001190::2006/04/18-19:21:35.601 [FM] RmTerminateResource: 5f953181-fd6d-4681-9498-bba0a72f1f7c is now offline&lt;BR /&gt;0000036c.00001190::2006/04/18-19:21:35.601 [FM] FmpOfflineWaitingTree: returned status 0 for &lt;STORAGE&gt;.&lt;BR /&gt;0000036c.00001190::2006/04/18-19:21:35.601 [FM] FmpHandleResStateChangeProc: Exit...&lt;BR /&gt;0000036c.000008bc::2006/04/18-19:21:35.601 [GUM] GumSendUpdate:  Locker waiting  type 0 context 11&lt;BR /&gt;0000036c.000008bc::2006/04/18-19:21:35.601 [GUM] Thread 0x8bc 
UpdateLock wait on Type 0&lt;BR /&gt;0000036c.000008bc::2006/04/18-19:21:35.601 [GUM] DoLockingUpdate successful, lock granted to 1&lt;BR /&gt;0000036c.000008bc::2006/04/18-19:21:35.601 [GUM] GumSendUpdate: Locker dispatching seq 236622 type 0 context 11&lt;BR /&gt;0000036c.000008bc::2006/04/18-19:21:35.601 [GUM] GumSendUpdate: Dispatching seq 236622 type 0 context 11 to node 2&lt;BR /&gt;0000036c.000008bc::2006/04/18-19:21:35.601 [GUM] GumSendUpdate: Locker updating seq 236622 type 0 context 11&lt;BR /&gt;0000036c.000008bc::2006/04/18-19:21:35.601 [GUM] GumpDoUnlockingUpdate releasing lock ownership&lt;BR /&gt;0000036c.000008bc::2006/04/18-19:21:35.601 [GUM] GumSendUpdate: completed update seq 236622 type 0 context 11&lt;BR /&gt;0000036c.000008bc::2006/04/18-19:21:35.601 [GUM] GumSendUpdate:  Locker waiting  type 0 context 11&lt;BR /&gt;0000036c.000008bc::2006/04/18-19:21:35.601 [GUM] Thread 0x8bc UpdateLock wait on Type 0&lt;BR /&gt;0000036c.000008bc::2006/04/18-19:21:35.601 [GUM] DoLockingUpdate successful, lock granted to 1&lt;BR /&gt;0000036c.000008bc::2006/04/18-19:21:35.601 [GUM] GumSendUpdate: Locker dispatching seq 236623 type 0 context 11&lt;BR /&gt;00000af4.000004f8::2006/04/18-19:21:35.601 Physical Disk: PnP Event GUID_IO_VOLUME_DISMOUNT for 561144 received&lt;BR /&gt;0000036c.000008bc::2006/04/18-19:21:35.601 [GUM] GumSendUpdate: Dispatching seq 236623 type 0 context 11 to node 2&lt;BR /&gt;0000036c.000008bc::2006/04/18-19:21:35.601 [GUM] GumSendUpdate: Locker updating seq 236623 type 0 context 11&lt;BR /&gt;0000036c.000008bc::2006/04/18-19:21:35.601 [GUM] GumpDoUnlockingUpdate releasing lock ownership&lt;BR /&gt;0000036c.000008bc::2006/04/18-19:21:35.601 [GUM] GumSendUpdate: completed update seq 236623 type 0 context 11&lt;BR /&gt;00000af4.000004f8::2006/04/18-19:21:35.617 Physical Disk: PnP Event GUID_IO_VOLUME_DISMOUNT for 561128 received&lt;BR /&gt;0000036c.0000161c::2006/04/18-19:21:35.960 [FM] CompleteMoveGroup: Entry for &lt;INICIAL&gt;&lt;BR 
/&gt;0000036c.0000161c::2006/04/18-19:21:35.960 [FM] CompleteMoveGroup: Completing the move for group inicial to node 2 (2)&lt;BR /&gt;0000036c.0000161c::2006/04/18-19:21:35.960 [FM] FmpOfflineResourceList: Bring non quorum resource offline&lt;BR /&gt;0000036c.0000161c::2006/04/18-19:21:35.960 [FM] FmpOfflineResource: Offline resource &lt;CLUSTER ip="" address=""&gt; &lt;E3C953E9-FCB8-4617-8EEB-FCB5A8105639&gt;&lt;BR /&gt;0000036c.0000161c::2006/04/18-19:21:35.960 [FM] FmpOfflineResourceList: Bring non quorum resource offline&lt;BR /&gt;0000036c.0000161c::2006/04/18-19:21:35.960 [FM] FmpOfflineResource: Offline resource &lt;CVTPRCL01&gt; &lt;B6E4FC7A-90D4-4494-B29F-E0693000ED29&gt;&lt;BR /&gt;0000036c.0000161c::2006/04/18-19:21:35.960 [FM] FmpOfflineResourceList: Bring non quorum resource offline&lt;BR /&gt;0000036c.0000161c::2006/04/18-19:21:35.960 [FM] FmpOfflineResource: Offline resource &lt;DHCP&gt; &lt;B18E5E6E-E69B-4AE0-A913-9AEE92CF5D2E&gt;&lt;BR /&gt;0000036c.0000161c::2006/04/18-19:21:35.960 [FM] FmpOfflineResourceList: Bring non quorum resource offline&lt;BR /&gt;0000036c.0000161c::2006/04/18-19:21:35.960 [FM] FmpOfflineResource: Offline resource &lt;CVTPRFS01&gt; &amp;lt;8b4d3234-3023-4830-abb6-99abd9544b6f&amp;gt;&lt;BR /&gt;0000036c.0000161c::2006/04/18-19:21:35.960 [FM] FmpOfflineResourceList: Bring non quorum resource offline&lt;BR /&gt;0000036c.0000161c::2006/04/18-19:21:35.960 [FM] FmpOfflineResource: Offline resource &lt;HOME dirs=""&gt; &lt;B3C3E79C-BD85-4314-8A3A-214E18C4923C&gt;&lt;BR /&gt;0000036c.0000161c::2006/04/18-19:21:35.960 [FM] FmpOfflineResourceList: Bring non quorum resource offline&lt;BR /&gt;0000036c.0000161c::2006/04/18-19:21:35.960 [FM] FmpOfflineResource: Offline resource &lt;DADOS comuns=""&gt; &amp;lt;12bf394d-27be-436b-8316-bb8223af3325&amp;gt;&lt;BR /&gt;0000036c.0000161c::2006/04/18-19:21:35.960 [FM] FmpOfflineResourceList: Bring non quorum resource offline&lt;BR /&gt;0000036c.0000161c::2006/04/18-19:21:35.960 [FM] 
FmpOfflineResource: Offline resource &lt;WINS&gt; &lt;D675B9F6-3222-44C3-ACD0-8DAB3D2C04CC&gt;&lt;BR /&gt;0000036c.0000161c::2006/04/18-19:21:35.960 [FM] FmpOfflineResourceList: Bring non quorum resource offline&lt;BR /&gt;0000036c.0000161c::2006/04/18-19:21:35.960 [FM] FmpOfflineResource: Offline resource &lt;CVTPRPS01&gt; &amp;lt;24b12b66-a01d-4bee-9354-5fb305aef1e0&amp;gt;&lt;BR /&gt;0000036c.0000161c::2006/04/18-19:21:35.960 [FM] FmpOfflineResourceList: Bring non quorum resource offline&lt;BR /&gt;0000036c.0000161c::2006/04/18-19:21:35.960 [FM] FmpOfflineResource: Offline resource &lt;PRINT spooler=""&gt; &lt;FB81DBA8-9CF8-4E04-BCE2-BE8B9B4C50A4&gt;&lt;BR /&gt;0000036c.0000161c::2006/04/18-19:21:35.960 [FM] FmpOfflineResourceList: Bring non quorum resource offline&lt;BR /&gt;0000036c.0000161c::2006/04/18-19:21:35.960 [FM] FmpOfflineResource: Offline resource &lt;CLIENT&gt; &lt;B1A4A754-4043-4315-801C-D027AE9768EC&gt;&lt;BR /&gt;0000036c.0000161c::2006/04/18-19:21:35.960 [FM] FmpOfflineResourceList: Bring non quorum resource offline&lt;BR /&gt;0000036c.0000161c::2006/04/18-19:21:35.960 [FM] FmpOfflineResource: Offline resource &lt;CVTCLIENT&gt; &lt;B8B818A7-FD15-4DE2-B5FD-AD2792B4868A&gt;&lt;BR /&gt;0000036c.0000161c::2006/04/18-19:21:35.960 [FM] FmpOfflineResourceList: Bring non quorum resource offline&lt;BR /&gt;0000036c.0000161c::2006/04/18-19:21:35.960 [FM] FmpOfflineResource: Offline resource &lt;PRD_CLIENT&gt; &lt;DE6A2347-C6F4-453A-AF92-3B2978CBE430&gt;&lt;BR /&gt;0000036c.0000161c::2006/04/18-19:21:35.960 [FM] FmpOfflineResourceList: Bring non quorum resource offline&lt;BR /&gt;0000036c.0000161c::2006/04/18-19:21:35.960 [FM] FmpOfflineResource: Offline resource &lt;TST_CLIENT&gt; &amp;lt;31e57afd-4db8-4033-909f-84c96202b12b&amp;gt;&lt;BR /&gt;0000036c.0000161c::2006/04/18-19:21:35.960 [FM] FmpOfflineResourceList: Bring non quorum resource offline&lt;BR /&gt;0000036c.0000161c::2006/04/18-19:21:35.960 [FM] FmpOfflineResource: Offline resource &lt;ANTIGO 
as_cvt1="" share=""&gt; &lt;DA2DFF8B-0938-4F78-BFF2-BD3A7776063C&gt;&lt;BR /&gt;0000036c.0000161c::2006/04/18-19:21:35.960 [FM] FmpOfflineResourceList: Bring non quorum resource offline&lt;BR /&gt;0000036c.0000161c::2006/04/18-19:21:35.960 [FM] FmpOfflineResource: Offline resource &lt;CLIENT scripts=""&gt; &amp;lt;0c928788-965b-4de5-a847-bad9cdca4cb5&amp;gt;&lt;BR /&gt;0000036c.0000161c::2006/04/18-19:21:35.960 [FM] FmpOfflineResourceList: Bring non quorum resource offline&lt;BR /&gt;0000036c.0000161c::2006/04/18-19:21:35.960 [FM] FmpOfflineResource: Offline resource &lt;IP antigo="" cluster="" as_cl1=""&gt; &amp;lt;202be12d-fb9f-4c43-b1e4-6fe60a6c5753&amp;gt;&lt;BR /&gt;0000036c.0000161c::2006/04/18-19:21:35.960 [FM] FmpOfflineResourceList: Bring non quorum resource offline&lt;BR /&gt;0000036c.0000161c::2006/04/18-19:21:35.960 [FM] FmpOfflineResource: Offline resource &lt;NOME antigo="" cluster="" cvt_cl1=""&gt; &amp;lt;50f861a3-6626-4994-a191-1e7ec61527b4&amp;gt;&lt;BR /&gt;0000036c.0000161c::2006/04/18-19:21:35.960 [FM] FmpOfflineResourceList: Bring non quorum resource offline&lt;BR /&gt;0000036c.0000161c::2006/04/18-19:21:35.960 [FM] FmpOfflineResource: Offline resource &lt;IP antigo="" as_cvt1=""&gt; &amp;lt;48467751-80c6-493d-9ca2-e729001f0616&amp;gt;&lt;BR /&gt;0000036c.0000161c::2006/04/18-19:21:35.960 [FM] FmpOfflineResourceList: Bring non quorum resource offline&lt;BR /&gt;0000036c.0000161c::2006/04/18-19:21:35.960 [FM] FmpOfflineResource: Offline resource &lt;NOME antigo="" as_cvt1=""&gt; &lt;B43F4CB1-470F-40F8-B3D3-D53541476A62&gt;&lt;BR /&gt;0000036c.0000161c::2006/04/18-19:21:35.960 [FM] FmpOfflineResourceList: Bring non quorum resource offline&lt;BR /&gt;0000036c.0000161c::2006/04/18-19:21:35.960 [FM] FmpOfflineResource: Offline resource &lt;EPV1.0&gt; &amp;lt;2e8d57ff-3a15-4b49-9a85-1f676af74027&amp;gt;&lt;BR /&gt;0000036c.0000161c::2006/04/18-19:21:35.960 [FM] FmpOfflineResourceList: Bring non quorum resource offline&lt;BR 
/&gt;0000036c.0000161c::2006/04/18-19:21:35.960 [FM] FmpOfflineResource: Offline resource &lt;QUOTA&gt; &amp;lt;00697772-7ff3-4ed5-8bdd-f3cb0e0da3a8&amp;gt;&lt;BR /&gt;0000036c.0000161c::2006/04/18-19:21:35.960 [FM] FmpOfflineResourceList: Bring non quorum resource offline&lt;BR /&gt;0000036c.0000161c::2006/04/18-19:21:35.960 [FM] FmpOfflineResource: Offline resource &lt;IP-INTERNO-BACKUPS&gt; &lt;A7ACE53E-30C8-413F-97E3-4F50282101FD&gt;&lt;BR /&gt;0000036c.0000161c::2006/04/18-19:21:35.960 [FM] FmpOfflineResourceList: Bring non quorum resource offline&lt;BR /&gt;0000036c.0000161c::2006/04/18-19:21:35.960 [FM] FmpOfflineResource: Offline resource &lt;FORMS do="" client=""&gt; &amp;lt;6a4efabb-ca18-4689-9207-ce63a37c97d6&amp;gt;&lt;BR /&gt;0000036c.0000161c::2006/04/18-19:21:35.960 [FM] FmpOfflineResourceList: Bring non quorum resource offline&lt;BR /&gt;0000036c.0000161c::2006/04/18-19:21:35.960 [FM] FmpOfflineResource: Offline resource &lt;BCK_CVTPRCL01&gt; &amp;lt;41e5e2d3-aafa-40a6-8756-d18dbd34758b&amp;gt;&lt;BR /&gt;0000036c.0000161c::2006/04/18-19:21:35.960 [FM] FmpOfflineResourceList: Bring non quorum resource offline&lt;BR /&gt;0000036c.0000161c::2006/04/18-19:21:35.960 [FM] FmpOfflineResource: Offline resource &lt;NOME_IP_BACKUP&gt; &lt;B63398C3-997F-4FB6-BB36-0678D42FD741&gt;&lt;BR /&gt;0000036c.0000161c::2006/04/18-19:21:35.960 [FM] FmpOfflineResourceList: Bring quorum resource offline&lt;BR /&gt;0000036c.0000161c::2006/04/18-19:21:35.960 [FM] FmpOfflineResource: Offline resource &lt;STORAGE&gt; &amp;lt;5f953181-fd6d-4681-9498-bba0a72f1f7c&amp;gt;&lt;BR /&gt;0000036c.0000161c::2006/04/18-19:21:35.960 [DM] DmpQuoObjNotifyCb: Quorum resource offline/offlinepending/preoffline&lt;BR /&gt;0000036c.0000161c::2006/04/18-19:21:35.960 [MM] MmSetQuorumOwner(0,0), old owner 0.&lt;BR /&gt;00000af4.000004f8::2006/04/18-19:21:35.960 Physical Disk &lt;STORAGE&gt;: Stop watching disk 4bd6d2c7&lt;BR /&gt;00000af4.000004f8::2006/04/18-19:21:35.960 Physical Disk 
&lt;STORAGE&gt;: RemoveDisk: disk 4bd6d2c7 not found&lt;BR /&gt;00000af4.000007f8::2006/04/18-19:21:35.960 Physical Disk &lt;STORAGE&gt;: [DiskArb] StopPersistentReservations is called.&lt;BR /&gt;00000af4.000007f8::2006/04/18-19:21:35.960 Physical Disk &lt;STORAGE&gt;: [DiskArb] StopPersistentReservations is complete.&lt;BR /&gt;0000036c.0000161c::2006/04/18-19:21:35.960 [CP] CppResourceNotify for resource storage&lt;BR /&gt;0000036c.0000161c::2006/04/18-19:21:35.960 [FM] RmTerminateResource: 5f953181-fd6d-4681-9498-bba0a72f1f7c is now offline&lt;BR /&gt;0000036c.0000161c::2006/04/18-19:21:35.960 [GUM] GumSendUpdate:  Locker waiting  type 0 context 11&lt;BR /&gt;0000036c.0000161c::2006/04/18-19:21:35.960 [GUM] Thread 0x161c UpdateLock wait on Type 0&lt;BR /&gt;0000036c.0000161c::2006/04/18-19:21:35.960 [GUM] DoLockingUpdate successful, lock granted to 1&lt;BR /&gt;0000036c.0000161c::2006/04/18-19:21:35.960 [GUM] GumSendUpdate: Locker dispatching seq 236624 type 0 context 11&lt;BR /&gt;0000036c.0000161c::2006/04/18-19:21:35.960 [GUM] GumSendUpdate: Dispatching seq 236624 type 0 context 11 to node 2&lt;BR /&gt;0000036c.0000161c::2006/04/18-19:21:35.960 [GUM] GumSendUpdate: Locker updating seq 236624 type 0 context 11&lt;BR /&gt;0000036c.0000161c::2006/04/18-19:21:35.960 [GUM] GumpDoUnlockingUpdate releasing lock ownership&lt;BR /&gt;0000036c.0000161c::2006/04/18-19:21:35.960 [GUM] GumSendUpdate: completed update seq 236624 type 0 context 11&lt;BR /&gt;0000036c.0000161c::2006/04/18-19:21:35.960 [GUM] GumSendUpdate:  Locker waiting  type 0 context 13&lt;BR /&gt;0000036c.0000161c::2006/04/18-19:21:35.960 [GUM] Thread 0x161c UpdateLock wait on Type 0&lt;BR /&gt;0000036c.0000161c::2006/04/18-19:21:35.960 [GUM] DoLockingUpdate successful, lock granted to 1&lt;BR /&gt;0000036c.0000161c::2006/04/18-19:21:35.960 [GUM] GumSendUpdate: Locker dispatching seq 236625 type 0 context 13&lt;BR /&gt;0000036c.0000161c::2006/04/18-19:21:35.960 [GUM] GumSendUpdate: Dispatching seq 
236625 type 0 context 13 to node 2&lt;BR /&gt;0000036c.0000161c::2006/04/18-19:21:35.960 [GUM] GumSendUpdate: Locker updating seq 236625 type 0 context 13&lt;BR /&gt;0000036c.0000161c::2006/04/18-19:21:35.960 [GUM] GumpDoUnlockingUpdate releasing lock ownership&lt;BR /&gt;0000036c.0000161c::2006/04/18-19:21:35.960 [GUM] GumSendUpdate: completed update seq 236625 type 0 context 13&lt;BR /&gt;0000036c.0000161c::2006/04/18-19:21:35.960 [FM] FmpCompleteMoveGroup: Take group 00fdd572-f069-410f-9210-f2405262a04c request to remote node 2&lt;BR /&gt;0000036c.0000131c::2006/04/18-19:21:35.976 [GUM] Thread 0x131c UpdateLock wait on Type 0&lt;BR /&gt;0000036c.0000131c::2006/04/18-19:21:35.976 [GUM] DoLockingUpdate successful, lock granted to 2&lt;BR /&gt;0000036c.0000131c::2006/04/18-19:21:35.976 [GUM] s_GumQueueLockingUpdate: dispatching seq 236626 type 0 context 17&lt;BR /&gt;0000036c.0000131c::2006/04/18-19:21:35.976 [FM] FmpUpdateCheckAndSetGroupOwner: Entry for Group = &amp;lt;00fdd572-f069-410f-9210-f2405262a04c&amp;gt;....&lt;BR /&gt;0000036c.0000131c::2006/04/18-19:21:35.976 [FM] FmpUpdateCheckAndSetGroupOwner: Exit for Group = &amp;lt;00fdd572-f069-410f-9210-f2405262a04c&amp;gt;, Status=0....&lt;BR /&gt;0000036c.0000131c::2006/04/18-19:21:35.976 [GUM] s_GumQueueLockingUpdate: completed update seq 236626 type 0 context 17 result 0&lt;BR /&gt;0000036c.0000131c::2006/04/18-19:21:35.976 [GUM] GumpDoUnlockingUpdate releasing lock ownership&lt;BR /&gt;0000036c.0000131c::2006/04/18-19:21:35.976 [GUM] Thread 0x131c UpdateLock wait on Type 0&lt;BR /&gt;0000036c.0000131c::2006/04/18-19:21:35.976 [GUM] DoLockingUpdate successful, lock granted to 2&lt;BR /&gt;0000036c.0000131c::2006/04/18-19:21:35.976 [GUM] s_GumQueueLockingUpdate: dispatching seq 236627 type 0 context 13&lt;BR /&gt;0000036c.0000131c::2006/04/18-19:21:35.976 [GUM] s_GumQueueLockingUpdate: completed update seq 236627 type 0 context 13 result 0&lt;BR /&gt;0000036c.0000131c::2006/04/18-19:21:35.976 [GUM] 
GumpDoUnlockingUpdate releasing lock ownership&lt;BR /&gt;0000036c.0000131c::2006/04/18-19:21:35.992 [GUM] Thread 0x131c UpdateLock wait on Type 0&lt;BR /&gt;0000036c.0000131c::2006/04/18-19:21:35.992 [GUM] DoLockingUpdate successful, lock granted to 2&lt;BR /&gt;0000036c.0000131c::2006/04/18-19:21:35.992 [GUM] s_GumQueueLockingUpdate: dispatching seq 236628 type 0 context 8&lt;BR /&gt;0000036c.0000131c::2006/04/18-19:21:35.992 [FM] Gum update resource 5f953181-fd6d-4681-9498-bba0a72f1f7c, state 129, current state 2.&lt;BR /&gt;0000036c.0000131c::2006/04/18-19:21:35.992 [GUM] s_GumQueueLockingUpdate: completed update seq 236628 type 0 context 8 result 0&lt;BR /&gt;0000036c.0000131c::2006/04/18-19:21:35.992 [GUM] GumpDoUnlockingUpdate releasing lock ownership&lt;BR /&gt;0000036c.00001314::2006/04/18-19:23:16.492 [GUM] Thread 0x1314 UpdateLock wait on Type 0&lt;BR /&gt;0000036c.00001314::2006/04/18-19:23:16.492 [GUM] DoLockingUpdate successful, lock granted to 2&lt;BR /&gt;0000036c.00001314::2006/04/18-19:23:16.492 [GUM] s_GumQueueLockingUpdate: dispatching seq 236629 type 0 context 8&lt;BR /&gt;0000036c.00001314::2006/04/18-19:23:16.492 [FM] Gum update resource e3c953e9-fcb8-4617-8eeb-fcb5a8105639, state 4, current state 2.&lt;BR /&gt;0000036c.00001314::2006/04/18-19:23:16.492 [GUM] s_GumQueueLockingUpdate: completed update seq 236629 type 0 context 8 result 0&lt;BR /&gt;0000036c.00001314::2006/04/18-19:23:16.507 [GUM] GumpDoUnlockingUpdate releasing lock ownership&lt;BR /&gt;0000036c.0000161c::2006/04/18-19:23:16.507 [FM] FmpCompleteMoveGroup: Exit, status = 0&lt;BR /&gt;0000036c.0000161c::2006/04/18-19:23:16.507 [FM] FmpMovePendingThread Exit.&lt;BR /&gt;0000036c.00001314::2006/04/18-19:23:16.507 [GUM] Thread 0x1314 UpdateLock wait on Type 0&lt;BR /&gt;0000036c.00001314::2006/04/18-19:23:16.507 [GUM] DoLockingUpdate successful, lock granted to 2&lt;BR /&gt;0000036c.00001314::2006/04/18-19:23:16.507 [GUM] s_GumQueueLockingUpdate: dispatching seq 236630 type 0 
context 9&lt;BR /&gt;0000036c.00001314::2006/04/18-19:23:16.507 [FM] GUM update group 00fdd572-f069-410f-9210-f2405262a04c, state 3&lt;BR /&gt;0000036c.00001314::2006/04/18-19:23:16.507 [FM] New owner of Group 00fdd572-f069-410f-9210-f2405262a04c is 2, state 3, curstate 0.&lt;BR /&gt;0000036c.00001314::2006/04/18-19:23:16.507 [GUM] s_GumQueueLockingUpdate: completed update seq 236630 type 0 context 9 result 0&lt;BR /&gt;0000036c.00001314::2006/04/18-19:23:16.507 [GUM] GumpDoUnlockingUpdate releasing lock ownership&lt;BR /&gt;0000036c.00001314::2006/04/18-19:23:16.507 [GUM] Thread 0x1314 UpdateLock wait on Type 0&lt;BR /&gt;0000036c.00001314::2006/04/18-19:23:16.507 [GUM] DoLockingUpdate successful, lock granted to 2&lt;BR /&gt;0000036c.00001314::2006/04/18-19:23:16.507 [GUM] s_GumQueueLockingUpdate: dispatching seq 236631 type 0 context 11&lt;BR /&gt;0000036c.00001314::2006/04/18-19:23:16.507 [GUM] s_GumQueueLockingUpdate: completed update seq 236631 type 0 context 11 result 0&lt;BR /&gt;0000036c.00001314::2006/04/18-19:23:16.507 [GUM] GumpDoUnlockingUpdate releasing lock ownership&lt;BR /&gt;0000036c.000008e0::2006/04/18-19:24:48.007 [GUM] Thread 0x8e0 UpdateLock wait on Type 0&lt;BR /&gt;0000036c.000008e0::2006/04/18-19:24:48.007 [GUM] DoLockingUpdate successful, lock granted to 2&lt;BR /&gt;0000036c.000008e0::2006/04/18-19:24:48.007 [GUM] s_GumQueueLockingUpdate: dispatching seq 236632 type 0 context 8&lt;BR /&gt;0000036c.000008e0::2006/04/18-19:24:48.007 [FM] Gum update resource 5f953181-fd6d-4681-9498-bba0a72f1f7c, state 4, current state 2.&lt;BR /&gt;0000036c.000008e0::2006/04/18-19:24:48.007 [GUM] s_GumQueueLockingUpdate: completed update seq 236632 type 0 context 8 result 0&lt;BR /&gt;0000036c.000008e0::2006/04/18-19:24:48.007 [GUM] GumpDoUnlockingUpdate releasing lock ownership&lt;BR /&gt;0000036c.000008e0::2006/04/18-19:24:48.507 [GUM] Thread 0x8e0 UpdateLock wait on Type 0&lt;BR /&gt;0000036c.000008e0::2006/04/18-19:24:48.507 [GUM] DoLockingUpdate 
successful, lock granted to 2&lt;BR /&gt;0000036c.000008e0::2006/04/18-19:24:48.507 [GUM] s_GumQueueLockingUpdate: dispatching seq 236633 type 0 context 8&lt;BR /&gt;0000036c.000008e0::2006/04/18-19:24:48.507 [FM] Gum update resource 5f953181-fd6d-4681-9498-bba0a72f1f7c, state 129, current state 2.&lt;BR /&gt;0000036c.000008e0::2006/04/18-19:24:48.507 [GUM] s_GumQueueLockingUpdate: completed update seq 236633 type 0 context 8 result 0&lt;BR /&gt;&lt;BR /&gt;What looks strange to me is the exit status 997.&lt;BR /&gt;&lt;BR /&gt;After moving the group to the second node, I have to disable the cluster service on that node and reboot it to force the group back to the first node.&lt;BR /&gt;&lt;BR /&gt;I'm really lost and hope someone can help.&lt;BR /&gt;&lt;BR /&gt;Thanks.</description>
      <pubDate>Wed, 19 Apr 2006 05:42:22 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-eva-storage/ms-cluster-problem/m-p/3769764#M18683</guid>
      <dc:creator>Bruno Pina</dc:creator>
      <dc:date>2006-04-19T05:42:22Z</dc:date>
    </item>
    <item>
      <title>Re: MS Cluster problem</title>
      <link>https://community.hpe.com/t5/hpe-eva-storage/ms-cluster-problem/m-p/3769765#M18684</link>
      <description>Hello Bruno,&lt;BR /&gt;&lt;BR /&gt;Can you find more information in the event logs about why the IP address fails?&lt;BR /&gt;If so, please post it here.&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;Matthijs</description>
      <pubDate>Sun, 23 Apr 2006 15:17:18 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-eva-storage/ms-cluster-problem/m-p/3769765#M18684</guid>
      <dc:creator>Matthijs Wijers_1</dc:creator>
      <dc:date>2006-04-23T15:17:18Z</dc:date>
    </item>
    <item>
      <title>Re: MS Cluster problem</title>
      <link>https://community.hpe.com/t5/hpe-eva-storage/ms-cluster-problem/m-p/3769766#M18685</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;In the event log I have the following error:&lt;BR /&gt;&lt;BR /&gt;Source: Foundation Agents&lt;BR /&gt;Category: Events&lt;BR /&gt;Event ID: 1168&lt;BR /&gt;Description: Cluster Agent: The cluster resource Cluster IP Address has failed.&lt;BR /&gt;[SNMP TRAP: 15006 in CPQCLUS.MIB]&lt;BR /&gt;&lt;BR /&gt;After that error, the Cluster Administrator tool shows the cluster resources located on the second cluster node, but the Cluster IP Address resource goes to a failed state and the storage resource stays in the online pending state.&lt;BR /&gt;&lt;BR /&gt;I hope that information helps you understand my problem.&lt;BR /&gt;&lt;BR /&gt;Thanks</description>
      <pubDate>Mon, 24 Apr 2006 11:18:29 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-eva-storage/ms-cluster-problem/m-p/3769766#M18685</guid>
      <dc:creator>Bruno Pina</dc:creator>
      <dc:date>2006-04-24T11:18:29Z</dc:date>
    </item>
    <item>
      <title>Re: MS Cluster problem</title>
      <link>https://community.hpe.com/t5/hpe-eva-storage/ms-cluster-problem/m-p/3769767#M18686</link>
      <description>That Insight Manager cluster agent message is only informational; it is logged after the error has already occurred.&lt;BR /&gt;What happens before that is important: look for any event log entries that show why the IP address fails.&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;Matthijs</description>
      <pubDate>Mon, 24 Apr 2006 14:55:21 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-eva-storage/ms-cluster-problem/m-p/3769767#M18686</guid>
      <dc:creator>Matthijs Wijers_1</dc:creator>
      <dc:date>2006-04-24T14:55:21Z</dc:date>
    </item>
    <item>
      <title>Re: MS Cluster problem</title>
      <link>https://community.hpe.com/t5/hpe-eva-storage/ms-cluster-problem/m-p/3769768#M18687</link>
      <description>Looks like the DHCP resource b18e5e6e-e69b-4ae0-a913-9aee92cf5d2e is (part of) the culprit.&lt;BR /&gt;Can you verify your DHCP resource configuration and dependencies?&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://support.microsoft.com/kb/226796/EN-US/" target="_blank"&gt;http://support.microsoft.com/kb/226796/EN-US/&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;Q226796: Using WINS and DHCP with the Windows 2000 Cluster Service.&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;Matthijs</description>
      <pubDate>Mon, 24 Apr 2006 15:25:51 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-eva-storage/ms-cluster-problem/m-p/3769768#M18687</guid>
      <dc:creator>Matthijs Wijers_1</dc:creator>
      <dc:date>2006-04-24T15:25:51Z</dc:date>
    </item>
    <item>
      <title>Re: MS Cluster problem</title>
      <link>https://community.hpe.com/t5/hpe-eva-storage/ms-cluster-problem/m-p/3769769#M18688</link>
      <description>Hi Matthijs,&lt;BR /&gt;&lt;BR /&gt;I tried to move the group again, and on the second node I get the following error:&lt;BR /&gt;&lt;BR /&gt;Source: ClusSVC&lt;BR /&gt;Event ID: 1069&lt;BR /&gt;Description: Cluster resource "storage" failed.&lt;BR /&gt;&lt;BR /&gt;There are no further messages detailing the cluster error.&lt;BR /&gt;&lt;BR /&gt;After the Cluster IP Address resource failed, I went to the second cluster node, tried to open the Cluster Administrator tool, and got this error:&lt;BR /&gt;&lt;BR /&gt;"The cluster service on node "CVTPR01" cannot be started. Error Id: 1722"&lt;BR /&gt;&lt;BR /&gt;I checked the WINS and DHCP resource configuration and everything looks OK.&lt;BR /&gt;&lt;BR /&gt;Do the errors above mean anything to you?&lt;BR /&gt;&lt;BR /&gt;Thanks.</description>
      <pubDate>Tue, 25 Apr 2006 11:03:52 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-eva-storage/ms-cluster-problem/m-p/3769769#M18688</guid>
      <dc:creator>Bruno Pina</dc:creator>
      <dc:date>2006-04-25T11:03:52Z</dc:date>
    </item>
    <item>
      <title>Re: MS Cluster problem</title>
      <link>https://community.hpe.com/t5/hpe-eva-storage/ms-cluster-problem/m-p/3769770#M18689</link>
      <description>You should be connecting to the cluster name. What are the network names and NetBIOS names of the nodes?&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;Matthijs</description>
      <pubDate>Mon, 01 May 2006 10:52:35 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-eva-storage/ms-cluster-problem/m-p/3769770#M18689</guid>
      <dc:creator>Matthijs Wijers_1</dc:creator>
      <dc:date>2006-05-01T10:52:35Z</dc:date>
    </item>
    <item>
      <title>Re: MS Cluster problem</title>
      <link>https://community.hpe.com/t5/hpe-eva-storage/ms-cluster-problem/m-p/3769771#M18690</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;I would like to thank everybody who tried to help.&lt;BR /&gt;&lt;BR /&gt;I have already fixed the problem.&lt;BR /&gt;&lt;BR /&gt;While checking the whole cluster configuration, I found one device that was not working properly. I reinstalled its driver and now everything is fine.&lt;BR /&gt;&lt;BR /&gt;Thanks once again,&lt;BR /&gt;&lt;BR /&gt;Bruno.</description>
      <pubDate>Tue, 02 May 2006 04:55:49 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-eva-storage/ms-cluster-problem/m-p/3769771#M18690</guid>
      <dc:creator>Bruno Pina</dc:creator>
      <dc:date>2006-05-02T04:55:49Z</dc:date>
    </item>
  </channel>
</rss>

