This Thing Ain't a VNX! (HPE Nimble Storage > Array Setup and Networking)
02-03-2015 10:14 AM
I thought the subject line might pique some interest!
Greetings from a new-to-Nimble dude. We just bought three CS-300 arrays and one CS-500. We are replacing a bunch of EqualLogic arrays in outer offices and moving a bunch of workloads off of a maxed-out VNX 5300 in our main DC. We are running completely ESXi 5.5 workloads on the Nimble through meshed/stacked unrouted private 1GbE at some sites and 10GbE at others. So far the array provisioning and VMware integration has gone easy as pie (unlike anything EMC makes, except maybe Isilon, and that isn't EMC IMHO). I will say that the vSphere plug-in is so far problematic, as too many plug-ins are.
I'm trying to wrap my brain around the move to eager-zeroed-thick provisioning in VMware vs. our traditional thin provisioning for most VMs. I currently make use of datastore clusters housing VNX LUNs that use various disk/RAID types depending on the type of workload we're running. We have SDRS enabled as a failsafe for the over-provisioning that happens in thin-provisioned environments.
Now we have these Nimbles that thin provision by default at the volume level, in addition to compressing inline. So I can expect that if I thick provision in VMware, the LUN cannot be over-provisioned at that level, yet the reality is the LUN is only using 50-60% of that space on the array. Do I care that the number is misrepresented in VMware, or do I just provision more LUNs?
Thanks for any insight in getting on board with the Nimble way.
Ron
02-03-2015 11:14 AM
Re: This Thing Ain't a VNX!
You'll find that thick-provisioned disks compress quite well when you initially create them.
So yes, you'll find a bit of a mismatch in used space when comparing the Datastore to the Volume. I think this is going to come up with any storage array that has some sort of compression or deduplication feature. Obviously you'll want to keep an eye on the Datastore use since vSphere will alert/react once you start hitting certain thresholds.
Are you planning to keep tiering a part of the storage strategy or is the Nimble going to eliminate that for you?
02-03-2015 11:39 AM
Re: This Thing Ain't a VNX!
I'm hoping that the flash magic will eliminate all the tiering stuff.
One new area of concern is Storage DRS and how thin-provisioned volumes don't reclaim used space. It looks like I need to eliminate datastore clusters and SDRS unless I can automate SCSI UNMAP against all my LUNs on a regular basis.
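For anyone wrestling with the same thing: on ESXi 5.5, dead-space reclaim can be scripted per VMFS datastore with `esxcli`. A minimal sketch, assuming you run it from SSH or cron on the host; the datastore labels are placeholders, and you'd want to schedule it off-hours since UNMAP generates I/O:

```shell
# Hypothetical nightly reclaim job for an ESXi 5.5 host.
# Substitute your own datastore labels for these placeholders.
for ds in NimbleDS01 NimbleDS02 NimbleDS03; do
    # Issues VAAI UNMAP in 200-block chunks so the array can reclaim
    # space freed by deleted or Storage-vMotioned VMDKs.
    esxcli storage vmfs unmap --volume-label="$ds" --reclaim-unit=200
done
```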
02-03-2015 11:44 AM
Re: This Thing Ain't a VNX!
Do you think there will be quite a few storage migrations happening in the environment once everything settles into its nice new Nimble datastores? If not, then you probably won't have to worry about this too much. I was really hoping to see this addressed with the announcement of vSphere 6, but I haven't found any information relating to space reclamation. Maybe with VVOLs there will be better communication between vSphere and supported storage arrays?
02-03-2015 12:32 PM
Re: This Thing Ain't a VNX!
In my typical datastore clusters, vCenter moves VMDKs around based on thresholds I set, which in turn keeps LUNs in the cluster fairly evenly balanced in terms of consumption. So SDRS does kick in for space thresholds when datastores start to get full.
I think with Nimble I need to retreat from datastore clustering, at least until some form of automated UNMAP feature is part of the stack. I'm regretting spending the extra money on vSphere Ent+ again (buggy vFlash was my first regret).
02-09-2015 10:18 AM
Accepted Solution
Ron,
First of all, welcome to the Nimble family, and thank you for your support. I am confident you will be pleased with the migration off of your legacy arrays!
The suggestion in our VMware Best Practices guide (which can be found on InfoSight in the Downloads section, under Best Practices) is to match your volume provisioning in VMware and on the Nimble array:
| VMDK Format | Space Dedicated | Zeroed-Out Blocks | Nimble Provisioning |
|---|---|---|---|
| Thin | As needed | As needed | Default (thin) |
| Zeroed Thick | At creation | As needed | Use volume reservation |
| Eager Zeroed Thick | At creation | At creation | Use volume reservation |
So, if you want to use anything other than Thin, use the Volume Properties to set a Volume Reservation to match the volume size.
The BPG also adds this tip: For best performance, use eager zeroed thick VMDK format as the zero blocks are compressed on the array side, and do not take up additional disk space.
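As a concrete illustration of that tip (the datastore path, VM name, and size here are made up), an eager-zeroed thick VMDK can be created from the ESXi shell with `vmkfstools`, or by picking the format in the vSphere client when adding a disk:

```shell
# Hypothetical example: create a 100 GB eager-zeroed thick VMDK on a
# Nimble-backed datastore. The zeroing happens up front at creation,
# and the zeroed blocks compress away on the array side.
vmkfstools -c 100G -d eagerzeroedthick \
    /vmfs/volumes/NimbleDS01/sql01/sql01_data.vmdk
```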
However, I also need to ask: what are these volumes being used for? For application data volumes (such as SQL or Exchange), you might want to consider using direct iSCSI guest-attached volumes, where this becomes irrelevant. There are other considerations here, of course, but this allows you to better match the Nimble performance policy with the application data volume (for example, SQL or Exchange).
John Haines
Nimble SE
02-09-2015 10:41 AM
Re: This Thing Ain't a VNX!
Hey John,
Thanks for the reply. We have a mixed workload typical of virtualized environments. We do have 4 to 5 TB of SQL storage as VMDKs (separated by OS, data, and log; mostly thick provisioned) spread across 20+ SQL servers. We have Exchange 2010, which is friendly with lower-speed spindles, I'm told. We've had decent low-latency performance (1-2 ms outside backup windows) on the VNX with FAST Cache fronting RAID 10 10K SAS for SQL. We've been trying to rid ourselves of direct-attached iSCSI and really don't want to go back to that scenario, especially with these smoking-fast arrays.
02-10-2015 12:52 AM
Re: This Thing Ain't a VNX!
Hi Ron,
I understand where you're coming from with the direct-attached iSCSI; however, it does mean that you'll lose the additional Exchange/SQL integrated snapshot awareness if you use VMFS. You could use VMware's RDM method of storage presentation, but even reps within VMware will tell you this is not the best way to go. If you are set on using VMFS, ensure that you create dedicated VMFS volumes for your Exchange datastores, Exchange logs, SQL databases, etc. That way you can attach the Exchange/SQL performance policy with the associated block size rather than the generic "VMware ESX 5". This will ensure a bit more optimisation in the SSD caching functionality and block sizes.
...of course, all of this discussion disappears with VMware's Virtual Volumes implementation, which drops this year with vSphere 6.
twitter: @nick_dyer_
02-10-2015 09:45 AM
Re: This Thing Ain't a VNX!
All I can say is bring on VVOLs!