<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: MSA2012i ISCSI in Disk Enclosures</title>
    <link>https://community.hpe.com/t5/disk-enclosures/msa2012i-iscsi/m-p/4403235#M30898</link>
    <description>That's helpful, thanks. Yes, it's dual controller.&lt;BR /&gt;&lt;BR /&gt;So multiple volumes spread between the controllers makes sense. Is this controller assignment done at the vdisk level, or at the volume level within the vdisk setup?&lt;BR /&gt;&lt;BR /&gt;I/O queues are important. Given that someone might execute a big table scan on an SQL volume, I wouldn't want that I/O queue affecting everything else. In this scenario I guess I'd have to put the VMFS containing the SQL data in its own volume with its own I/O queue, and the other data in its own volume. Am I correct?&lt;BR /&gt;&lt;BR /&gt;Also, I keep reading that ESX only supports iSCSI for the MSA2012i via the software initiator. Has that improved recently?</description>
    <pubDate>Fri, 17 Apr 2009 16:29:37 GMT</pubDate>
    <dc:creator>motech2</dc:creator>
    <dc:date>2009-04-17T16:29:37Z</dc:date>
    <item>
      <title>MSA2012i ISCSI</title>
      <link>https://community.hpe.com/t5/disk-enclosures/msa2012i-iscsi/m-p/4403233#M30896</link>
      <description>Hi, I am looking at configuring some LUNs on a new MSA2012i. It will be used for iSCSI LUNs holding VMware virtual machines running on ESX hosts.&lt;BR /&gt;&lt;BR /&gt;My first question: given that I want to have multiple virtual servers hosted on the MSA, accessible from multiple ESX servers (clustering / failover etc.), should I create one huge LUN that holds multiple VMs and make it accessible to all my ESX servers, or should I carve the storage up? What are the pros and cons?&lt;BR /&gt;&lt;BR /&gt;My second question is really about performance - what RAID level should I use? I will have a range of things being hosted - from very fast-I/O SQL servers to low-I/O boxes just serving up some files. They're 15k 300GB disks and I can have lots of them if I need them.&lt;BR /&gt;&lt;BR /&gt;Any thoughts much appreciated.&lt;BR /&gt;</description>
      <pubDate>Fri, 17 Apr 2009 14:52:54 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/msa2012i-iscsi/m-p/4403233#M30896</guid>
      <dc:creator>motech2</dc:creator>
      <dc:date>2009-04-17T14:52:54Z</dc:date>
    </item>
    <item>
      <title>Re: MSA2012i ISCSI</title>
      <link>https://community.hpe.com/t5/disk-enclosures/msa2012i-iscsi/m-p/4403234#M30897</link>
      <description>Is this a 2-controller MSA2012i? If yes, then I would create at least 2 MSA volumes so they can be split across both controllers. Well, I would do that anyway in case you upgrade later.&lt;BR /&gt;&lt;BR /&gt;Unfortunately, there is no single correct size for a VMFS volume. I have customers with volumes as small as 150GB (many small VMs) and others with volumes up to 800, 900 or 1000GB (database servers).&lt;BR /&gt;&lt;BR /&gt;Remember that there is one I/O queue to each VMFS volume, and more volumes = more I/O queues. On the other hand: many volumes = much scattered free space. The choice is yours ;-)&lt;BR /&gt;&lt;BR /&gt;Even if you create multiple volumes, I suggest presenting them to all ESX servers. Unlike file systems like NTFS or EXT2/3, VMFS can deal with multiple readers/writers without a problem.&lt;BR /&gt;That enables you to use other ESX features like VMotion (and DRS), HA and Storage VMotion.</description>
      <pubDate>Fri, 17 Apr 2009 16:22:26 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/msa2012i-iscsi/m-p/4403234#M30897</guid>
      <dc:creator>Uwe Zessin</dc:creator>
      <dc:date>2009-04-17T16:22:26Z</dc:date>
    </item>
    <item>
      <title>Re: MSA2012i ISCSI</title>
      <link>https://community.hpe.com/t5/disk-enclosures/msa2012i-iscsi/m-p/4403235#M30898</link>
      <description>That's helpful, thanks. Yes, it's dual controller.&lt;BR /&gt;&lt;BR /&gt;So multiple volumes spread between the controllers makes sense. Is this controller assignment done at the vdisk level, or at the volume level within the vdisk setup?&lt;BR /&gt;&lt;BR /&gt;I/O queues are important. Given that someone might execute a big table scan on an SQL volume, I wouldn't want that I/O queue affecting everything else. In this scenario I guess I'd have to put the VMFS containing the SQL data in its own volume with its own I/O queue, and the other data in its own volume. Am I correct?&lt;BR /&gt;&lt;BR /&gt;Also, I keep reading that ESX only supports iSCSI for the MSA2012i via the software initiator. Has that improved recently?</description>
      <pubDate>Fri, 17 Apr 2009 16:29:37 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/msa2012i-iscsi/m-p/4403235#M30898</guid>
      <dc:creator>motech2</dc:creator>
      <dc:date>2009-04-17T16:29:37Z</dc:date>
    </item>
    <item>
      <title>Re: MSA2012i ISCSI</title>
      <link>https://community.hpe.com/t5/disk-enclosures/msa2012i-iscsi/m-p/4403236#M30899</link>
      <description>The controller ownership is at the 'virtual disk' (vdisk) level. A vdisk is a RAID container which provides space for one or more 'volumes'. A 'volume' is then presented to one or more hosts (servers) and provides disk space.&lt;BR /&gt;&lt;BR /&gt;If you want to hard-partition your I/O, then yes: create a separate vdisk with a single volume, create a single VMFS datastore on it, and use that for one virtual machine. That's an extreme case, but if you absolutely need it, it is possible.&lt;BR /&gt;&lt;BR /&gt;I have not checked, but you could also install an iSCSI software initiator in the Linux/Windows guest to offload the VMkernel iSCSI initiator. I am not very familiar with the latest features of the Linux version, but the Microsoft one has been offering features like path failover and even load balancing for years (in 2007 I ran some tests and was able to read up to 180 MBytes/sec across two Gigabit links from another vendor's array into an ESX server's Windows VM).&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Last year I set up a small ESX cluster with an MSA2012i. During setup I ran some tests from a Windows 2003 VM via the VMkernel software iSCSI initiator against a single 3- or 4-member RAID-5 vdisk/volume on 300GB/15kRPM SAS disk drives. With IOmeter and some unrealistic 'benchmark' parameters I was able to get 100+ MBytes/sec reads and 50-70 MBytes/sec writes.</description>
      <pubDate>Fri, 17 Apr 2009 16:51:07 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/msa2012i-iscsi/m-p/4403236#M30899</guid>
      <dc:creator>Uwe Zessin</dc:creator>
      <dc:date>2009-04-17T16:51:07Z</dc:date>
    </item>
    <item>
      <title>Re: MSA2012i ISCSI</title>
      <link>https://community.hpe.com/t5/disk-enclosures/msa2012i-iscsi/m-p/4403237#M30900</link>
      <description>Again, that is really useful - thanks for your help.&lt;BR /&gt;&lt;BR /&gt;One last thing - the setup documentation talks about putting each IP address in a different subnet. Why is this? Is there some sort of ARP issue when controller failover happens? Different subnets seem a bit extreme.</description>
      <pubDate>Fri, 17 Apr 2009 17:05:00 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/msa2012i-iscsi/m-p/4403237#M30900</guid>
      <dc:creator>motech2</dc:creator>
      <dc:date>2009-04-17T17:05:00Z</dc:date>
    </item>
    <item>
      <title>Re: MSA2012i ISCSI</title>
      <link>https://community.hpe.com/t5/disk-enclosures/msa2012i-iscsi/m-p/4403238#M30901</link>
      <description>Don't use iSCSI for VMs. It is initiated at the software level and can be slow. We are switching our 2-node, 6-VMs-per-server setup over to an MSA2312fc.</description>
      <pubDate>Sat, 18 Apr 2009 03:34:36 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/msa2012i-iscsi/m-p/4403238#M30901</guid>
      <dc:creator>WilliamReed</dc:creator>
      <dc:date>2009-04-18T03:34:36Z</dc:date>
    </item>
  </channel>
</rss>

