<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: VMware HA Settings in HPE SimpliVity</title>
    <link>https://community.hpe.com/t5/hpe-simplivity/vmware-ha-settings/m-p/7091845#M1572</link>
    <description>&lt;P&gt;The cluster is at 3.7.10. We have considered upgrading to the latest version, 4.0.1, but (1) we are concerned that the faulty node could fail again during the upgrade, and (2) no one at HPE can confirm that this issue has been addressed in a newer version.&lt;/P&gt;&lt;P&gt;When the boot drives fail, ESXi continues to run from memory. Through vCenter, this host shows APD on the boot drive.&lt;/P&gt;&lt;P&gt;After the boot drive has failed, the OVC responds to pings and you can SSH into it. However, once logged into the OVC, it does not respond to commands.&lt;/P&gt;&lt;P&gt;So, what you are saying is that if an OVC is still able to respond on its IP address, the OVC and its node are considered healthy?&lt;/P&gt;</description>
    <pubDate>Tue, 16 Jun 2020 16:07:32 GMT</pubDate>
    <dc:creator>tonymcmillan</dc:creator>
    <dc:date>2020-06-16T16:07:32Z</dc:date>
    <item>
      <title>VMware HA Settings</title>
      <link>https://community.hpe.com/t5/hpe-simplivity/vmware-ha-settings/m-p/7091649#M1566</link>
      <description>&lt;P&gt;Are there any best practices documented for HA settings in a SimpliVity cluster?&lt;/P&gt;&lt;P&gt;Specifically, we are focused on initiating an HA event when the logical array (boot drives) that supports the OVC fails. This has happened multiple times at a client site, and each time it leaves the VMs stranded on the "failed" host until we power off the node and let VMware HA kick in and restart them on the good host.&lt;/P&gt;&lt;P&gt;Under the vSphere Availability service, there are settings for "Failure conditions and responses". These allow you to specify how VMware handles a datastore failure. By default, these settings (PDL and APD) are disabled.&lt;/P&gt;&lt;P&gt;What are the best settings for this in a SimpliVity environment?&lt;/P&gt;&lt;P&gt;Thanks,&lt;/P&gt;&lt;P&gt;Tony&lt;/P&gt;</description>
      <pubDate>Mon, 15 Jun 2020 22:46:39 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-simplivity/vmware-ha-settings/m-p/7091649#M1566</guid>
      <dc:creator>tonymcmillan</dc:creator>
      <dc:date>2020-06-15T22:46:39Z</dc:date>
    </item>
    <item>
      <title>Re: VMware HA Settings</title>
      <link>https://community.hpe.com/t5/hpe-simplivity/vmware-ha-settings/m-p/7091678#M1568</link>
      <description>&lt;P&gt;There is no recommendation around the setting for APD; it may be configured in a SimpliVity environment.&lt;/P&gt;&lt;P&gt;In your scenario, I am not convinced it will work. The OVC on the node with the faulty boot drives will still be responding to ARP, so it will still be seen as alive by the other nodes in the cluster, and there may be ownership transition issues. The ESXi host may not be that healthy either, and may fail to trigger the failover or to notice that it has APD.&lt;/P&gt;&lt;P&gt;Regarding lost access to a boot drive causing these symptoms: I have seen something similar before, primarily on Dell-based hardware (R730). I would update the firmware and ESXi version to the latest supported in the SimpliVity interoperability guide. If it is still happening, open a support ticket; later versions have a way to mitigate this exact scenario.&lt;/P&gt;</description>
      <pubDate>Tue, 16 Jun 2020 07:30:15 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-simplivity/vmware-ha-settings/m-p/7091678#M1568</guid>
      <dc:creator>DaveOb</dc:creator>
      <dc:date>2020-06-16T07:30:15Z</dc:date>
    </item>
    <item>
      <title>Re: VMware HA Settings</title>
      <link>https://community.hpe.com/t5/hpe-simplivity/vmware-ha-settings/m-p/7091845#M1572</link>
      <description>&lt;P&gt;The cluster is at 3.7.10. We have considered upgrading to the latest version, 4.0.1, but (1) we are concerned that the faulty node could fail again during the upgrade, and (2) no one at HPE can confirm that this issue has been addressed in a newer version.&lt;/P&gt;&lt;P&gt;When the boot drives fail, ESXi continues to run from memory. Through vCenter, this host shows APD on the boot drive.&lt;/P&gt;&lt;P&gt;After the boot drive has failed, the OVC responds to pings and you can SSH into it. However, once logged into the OVC, it does not respond to commands.&lt;/P&gt;&lt;P&gt;So, what you are saying is that if an OVC is still able to respond on its IP address, the OVC and its node are considered healthy?&lt;/P&gt;</description>
      <pubDate>Tue, 16 Jun 2020 16:07:32 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-simplivity/vmware-ha-settings/m-p/7091845#M1572</guid>
      <dc:creator>tonymcmillan</dc:creator>
      <dc:date>2020-06-16T16:07:32Z</dc:date>
    </item>
    <item>
      <title>Re: VMware HA Settings</title>
      <link>https://community.hpe.com/t5/hpe-simplivity/vmware-ha-settings/m-p/7091968#M1574</link>
      <description>&lt;P&gt;Hello Tony,&lt;/P&gt;&lt;P&gt;Can I have the case number? Please send me a private message.&lt;/P&gt;&lt;P&gt;If the OVC does not respond to commands after startup, you can check whether there is a "nostart" file.&lt;/P&gt;&lt;P&gt;Example:&lt;/P&gt;&lt;P&gt;root@omnicube-ip166-104:/home/administrator@vsphere# &lt;FONT color="#FF00FF"&gt;ls /var/svtfs/0/ | grep nostart&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;&lt;FONT color="#FF0000"&gt;nostart&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;root@omnicube-ip166-104:/home/administrator@vsphere# &lt;FONT color="#FF00FF"&gt;rm /var/svtfs/0/nostart&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;root@omnicube-ip166-104:/home/administrator@vsphere# &lt;FONT color="#FF00FF"&gt;start svtfs&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;svtfs (0) start/running, process 4621&lt;/P&gt;&lt;P&gt;After that, run svt-federation-show to check:&lt;/P&gt;&lt;P&gt;root@omnicube-ip166-104:/home/administrator@vsphere# svt-federation-show&lt;/P&gt;</description>
      <pubDate>Wed, 17 Jun 2020 02:48:03 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-simplivity/vmware-ha-settings/m-p/7091968#M1574</guid>
      <dc:creator>AlexLeung</dc:creator>
      <dc:date>2020-06-17T02:48:03Z</dc:date>
    </item>
  </channel>
</rss>