<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Topic: vNICs for in-guest iSCSI in Array Setup and Networking</title>
    <link>https://community.hpe.com/t5/array-setup-and-networking/vnic-s-for-in-guest-iscsi/m-p/6982892#M587</link>
    <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;I have VMs on VMware that I am setting up with in-guest iSCSI. These guest iSCSI NICs ride over the same physical NICs on the VMware host that are used for iSCSI from VMware to the Nimble.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I give each VM two vNICs for iSCSI so I can use MPIO. My question: does Nimble have a recommendation for the vNIC settings in the Windows VM?&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I turned on jumbo frames, since the physical network from the VMware host uses jumbo frames. What about all of the &lt;STRONG&gt;TCP offload&lt;/STRONG&gt; options in the vNICs? Or &lt;STRONG&gt;RSS&lt;/STRONG&gt;?&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;From my previous Windows/Hyper-V experience, everyone and their brother recommended that TCP offload options be turned off, as they could cause performance issues with iSCSI. That advice was for a physical Windows host; in the case of Hyper-V VMs it was a "good idea" according to most blogs I read back when we ran Hyper-V.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Thanks for any input!&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
    <pubDate>Wed, 24 Feb 2016 18:39:05 GMT</pubDate>
    <dc:creator>lindy37</dc:creator>
    <dc:date>2016-02-24T18:39:05Z</dc:date>
    <item>
      <title>vNICs for in-guest iSCSI</title>
      <link>https://community.hpe.com/t5/array-setup-and-networking/vnic-s-for-in-guest-iscsi/m-p/6982892#M587</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;I have VMs on VMware that I am setting up with in-guest iSCSI. These guest iSCSI NICs ride over the same physical NICs on the VMware host that are used for iSCSI from VMware to the Nimble.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I give each VM two vNICs for iSCSI so I can use MPIO. My question: does Nimble have a recommendation for the vNIC settings in the Windows VM?&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I turned on jumbo frames, since the physical network from the VMware host uses jumbo frames. What about all of the &lt;STRONG&gt;TCP offload&lt;/STRONG&gt; options in the vNICs? Or &lt;STRONG&gt;RSS&lt;/STRONG&gt;?&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;From my previous Windows/Hyper-V experience, everyone and their brother recommended that TCP offload options be turned off, as they could cause performance issues with iSCSI. That advice was for a physical Windows host; in the case of Hyper-V VMs it was a "good idea" according to most blogs I read back when we ran Hyper-V.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Thanks for any input!&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Wed, 24 Feb 2016 18:39:05 GMT</pubDate>
      <guid>https://community.hpe.com/t5/array-setup-and-networking/vnic-s-for-in-guest-iscsi/m-p/6982892#M587</guid>
      <dc:creator>lindy37</dc:creator>
      <dc:date>2016-02-24T18:39:05Z</dc:date>
    </item>
    <item>
      <title>Re: vNICs for in-guest iSCSI</title>
      <link>https://community.hpe.com/t5/array-setup-and-networking/vnic-s-for-in-guest-iscsi/m-p/6982893#M588</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;We've had good luck with in-guest iSCSI using TCP offload with the VMware paravirtual NIC. In tests with various settings, we found the best performance came from leaving TCP offloads and RSS turned on. I've heard the same myths you've heard. Everyone's first troubleshooting step seems to be to turn off TCP offloads, though I haven't personally run into a case where disabling them actually solved a problem. Perhaps someone else out there can share some experience on that. As an aside, I usually don't recommend using in-guest iSCSI unless you have a really good reason to do so.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 25 Feb 2016 15:25:41 GMT</pubDate>
      <guid>https://community.hpe.com/t5/array-setup-and-networking/vnic-s-for-in-guest-iscsi/m-p/6982893#M588</guid>
      <dc:creator>jzygmunt70</dc:creator>
      <dc:date>2016-02-25T15:25:41Z</dc:date>
    </item>
    <item>
      <title>Re: vNICs for in-guest iSCSI</title>
      <link>https://community.hpe.com/t5/array-setup-and-networking/vnic-s-for-in-guest-iscsi/m-p/6982894#M589</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Thanks, Jonathan! I called support yesterday, and they said to leave the defaults but to enable jumbo frames if the physical NICs on the VMware host are set that way. So the jumbo frames setting is the only thing I changed.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Last night I migrated 1 TB of Exchange 2010 data from VMDKs on an EqualLogic to these new volumes on the Nimble, presented via in-guest iSCSI, with no issues. A snapshot with verify ran great this morning, peaking at 16k read IOPS with 95% cache hits and less than 1 ms latency. I think the vNIC settings are good to go &lt;IMG src="https://community.hpe.com/legacyfs/online/emoticons/happy.png" /&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I am only using in-guest iSCSI for the SQL and Exchange servers that I want application-consistent snapshots of. For Exchange, it is the only way to truncate the log files with a snapshot backup. I am not a huge fan of VMware snapshots, although version 6 supposedly improved them greatly.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Thanks again!&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 25 Feb 2016 15:44:14 GMT</pubDate>
      <guid>https://community.hpe.com/t5/array-setup-and-networking/vnic-s-for-in-guest-iscsi/m-p/6982894#M589</guid>
      <dc:creator>lindy37</dc:creator>
      <dc:date>2016-02-25T15:44:14Z</dc:date>
    </item>
    <item>
      <title>Re: vNICs for in-guest iSCSI</title>
      <link>https://community.hpe.com/t5/array-setup-and-networking/vnic-s-for-in-guest-iscsi/m-p/6982895#M590</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;We use hypervisor-based iSCSI for Exchange and SQL. We're using tools like Veeam, which execute the snapshots, and our backups are app-consistent. One of the uber Nimble experts can chime in here, but as I understand it, the Nimble calls VMware when it takes a snapshot, which in turn calls the VMware Tools in the guest, which in turn calls VSS, which tells Exchange/SQL to quiesce, yielding an app-consistent snap. So I think you can achieve what you want without resorting to guest-based iSCSI.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 25 Feb 2016 21:02:00 GMT</pubDate>
      <guid>https://community.hpe.com/t5/array-setup-and-networking/vnic-s-for-in-guest-iscsi/m-p/6982895#M590</guid>
      <dc:creator>jzygmunt70</dc:creator>
      <dc:date>2016-02-25T21:02:00Z</dc:date>
    </item>
    <item>
      <title>Re: vNICs for in-guest iSCSI</title>
      <link>https://community.hpe.com/t5/array-setup-and-networking/vnic-s-for-in-guest-iscsi/m-p/6982896#M591</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;"Nimble will call VMware when it does a snapshot which in turn calls the VMware tools on the guest....in turn calling VSS which tells exchange / sql to quiesce and then gets an app consistent snap"&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;You said it well, and that is why I don't want to do it that way: too many pieces/steps, and I have had VMware snapshots that did not revert in the past. I have read that VMware snapshots in version 6 use the same technology they recently moved to for vMotion, making them better, but it is still too many steps for me. &lt;A href="http://cormachogan.com/2016/01/06/snapshot-consolidation-changes-in-vsphere-6-0/" title="http://cormachogan.com/2016/01/06/snapshot-consolidation-changes-in-vsphere-6-0/"&gt;Snapshot Consolidation changes in vSphere 6.0 - CormacHogan.com&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;With in-guest iSCSI, the Nimble calls the NCM client in the guest, which triggers VSS in the guest and takes the application-consistent snap. VMware has no idea it is going on and is not an additional layer to deal with.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;According to this (pages 8-9):&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;A href="http://uploads.nimblestorage.com/wp-content/uploads/2015/05/22100749/bpg_nimble_storage_vmware_vsphere5.pdf" title="http://uploads.nimblestorage.com/wp-content/uploads/2015/05/22100749/bpg_nimble_storage_vmware_vsphere5.pdf"&gt;http://uploads.nimblestorage.com/wp-content/uploads/2015/05/22100749/bpg_nimble_storage_vmware_vsphere5.pdf&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Snapshotting from the VMware layer down will not clear the logs on Exchange. Prior to Nimble, we used Backup Exec 2014, which could do backups either in-guest or from the VMware level with VMware snapshots. If all goes well, we will be retiring Backup Exec and using some third-party tools (UFS Explorer and Lepide) to do restores from snapshot backups. We have a second Nimble that we replicate to as well.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 25 Feb 2016 21:58:35 GMT</pubDate>
      <guid>https://community.hpe.com/t5/array-setup-and-networking/vnic-s-for-in-guest-iscsi/m-p/6982896#M591</guid>
      <dc:creator>lindy37</dc:creator>
      <dc:date>2016-02-25T21:58:35Z</dc:date>
    </item>
  </channel>
</rss>

