Community Home > Storage > HPE Nimble Storage > Application Integration > Oracle Databases
03-12-2014 07:03 AM
Since there is a hard limit of 256 volumes on a single array, and we have 50+ SQL servers, an Exchange system with 27 RDM disks on NetApp (27 in-guest iSCSI on Nimble), an OVM environment, and then the general server fleet, this begs the following question. We are considering having 3-4 dedicated datastores for SQL user DBs, the same number for SQL logs, and the same for tempdb data and logs. This equates to a total of 12 volumes, with no more than 15 VMs per datastore. Ideally I'd like to use in-guest iSCSI, but breaking out each database into 3 logical drives would consume 150 volumes, so it's ruled out. If I use a similar model for Oracle using VM datastores, how would I configure the volumes, i.e. block sizes, to cache or not, compression, etc.? We use the OLTP model and will be looking at ASM in the future. Our current VM environment on NetApp has in excess of 250 VMs. Any steer on what to do here, and the pros and cons?
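The volume arithmetic in the question can be laid out explicitly. A quick sketch, using only the numbers stated in the post (the 256-volume figure is the per-array hard limit mentioned above; everything else is illustrative):

```python
# Compare volume consumption of the two provisioning models from the post
# against the per-array hard limit of 256 volumes.

ARRAY_VOLUME_LIMIT = 256

# Model A: in-guest iSCSI, 3 logical drives (data / log / temp) per SQL server
sql_servers = 50
in_guest_volumes = sql_servers * 3  # 150 volumes for SQL alone

# Model B: shared VMware datastores, capped at 15 VMs each
datastores_user_db = 4
datastores_logs = 4
datastores_tempdb = 4
datastore_volumes = datastores_user_db + datastores_logs + datastores_tempdb  # 12

print(f"In-guest model:  {in_guest_volumes}/{ARRAY_VOLUME_LIMIT} volumes")
print(f"Datastore model: {datastore_volumes}/{ARRAY_VOLUME_LIMIT} volumes")
```

The in-guest model alone consumes well over half the array's volume limit before Exchange, OVM, and the general fleet are counted, which is why the post rules it out.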
Solved!
03-24-2014 02:26 AM
Solution

Mark,
Firstly apologies for the delay in responding - my first opportunity to catch up!
I think there are a couple of ways to approach this, and I also think it's a generic approach that works for all datasets, not just OLTP as per your example above.
One approach is to do what you state above and have a volume for each application (data and logs). The pro of this approach is that you get a lot of granularity to treat each application independently with regards to its backup and recovery (based on snapshot schedule), its replication (if the application is replicated), and 'tuning' the volumes with regards to their cache, compression and block size by assigning an appropriate Performance Policy. From what I have read, this method of provisioning is neatly aligned to VMware vVols, so it should dovetail with that technology as and when it's released. The downside of this approach is that you end up with a lot of objects to manage, which is naturally more overhead, and you could start to push the current limits of the array with regards to the total number of volumes/snapshots (we do expect these numbers to increase with future releases of Nimble OS). It's also worth noting that if these volumes are provisioned directly into the guest, then tools like SRM will be unaware of them and unable to manage them, so this should be considered as part of the decision.
The second approach is to think of volumes in terms of service levels, so that volumes/datastores become containers provisioned with the services, or types of I/O, that we are likely to see on that volume. You therefore have a smaller number of generic datastores, each aligned to a service level, for example:
Premium-SQL-LOGS - Replicated, Snapshotted, Compression ON, Caching OFF
Premium-SQL-DATA - Replicated, Snapshotted, Compression ON, Caching ON
Base-SQL-LOGS - Compression ON, Caching OFF
Base-SQL-DATA - Compression ON, Caching ON
The pro of this approach is that you end up with far fewer objects to manage, and you have containers that are aligned to the service levels they provide.
The con is that you need to think about restore scenarios more carefully, as recovering from a snapshot is not as simple as a volume restore (multiple applications share the same volume, so in this case you'd need to clone the snapshot and copy back the individual files).
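The service-level idea above can be sketched as a simple placement rule. The tier names mirror the examples in the post, but the dictionary and function below are purely illustrative assumptions, not a Nimble API:

```python
# Hypothetical model of the "service level container" approach: each
# datastore tier carries a fixed bundle of data services, and a workload
# is placed by picking the tier whose services it needs.

TIERS = {
    "Premium-SQL-LOGS": dict(replicated=True,  snapshots=True,  compression=True, caching=False),
    "Premium-SQL-DATA": dict(replicated=True,  snapshots=True,  compression=True, caching=True),
    "Base-SQL-LOGS":    dict(replicated=False, snapshots=False, compression=True, caching=False),
    "Base-SQL-DATA":    dict(replicated=False, snapshots=False, compression=True, caching=True),
}

def pick_tier(is_log: bool, needs_dr: bool) -> str:
    """Choose a datastore tier: Premium tiers are replicated/snapshotted,
    and log tiers run with caching off."""
    level = "Premium" if needs_dr else "Base"
    kind = "LOGS" if is_log else "DATA"
    return f"{level}-SQL-{kind}"

# e.g. a replicated database's data files land on Premium-SQL-DATA
print(pick_tier(is_log=False, needs_dr=True))
```

The point of the sketch is that the placement decision collapses to two questions per workload, instead of a per-application volume design.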
Hope this helps - there is no real right or wrong method here.
03-27-2014 02:37 AM
Re: Oracle Databases
I completely forgot about this rather excellent blog post that discusses this exact topic. It's SQL Server specific, but I think the same methodology applies equally well - please take a look:
http://blog.cosiemodo.com/2013/11/05/sql-server-on-nimble-storage--storage-layout.aspx
03-27-2014 10:06 PM
Re: Oracle Databases
Cheers Richard. I can confirm, after speaking to Support for a sanity check, that if we go down the VMware datastore route it will be a 4K block size, with caching either on or off; the log volume will have caching off. The reason for the 4K block size is that the datastores are naturally formatted as VMFS, so any deviation from this can lead to problems like block misalignment.
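The misalignment problem mentioned above is easy to see with a toy model. This is a purely illustrative sketch (the function is hypothetical, not array-specific): an I/O is aligned to the array's 4 KiB block when its starting offset is a multiple of 4096 bytes, and a misaligned partition offset makes every guest block straddle two array blocks:

```python
BLOCK = 4096  # 4 KiB array block, matching the ESX performance policy

def blocks_touched(offset: int, length: int, block: int = BLOCK) -> int:
    """Number of array blocks a read/write of `length` bytes at `offset` spans."""
    first = offset // block
    last = (offset + length - 1) // block
    return last - first + 1

print(blocks_touched(0, 4096))    # aligned 4 KiB write: 1 block
print(blocks_touched(512, 4096))  # misaligned by 512 bytes: 2 blocks
```

A 512-byte misalignment thus doubles the back-end blocks touched by every 4 KiB guest I/O, which is the kind of problem Support is warning about.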
03-28-2014 12:00 AM
Re: Oracle Databases
There is some contradiction here. I found a previous post as follows:
"Sammy Bogaert replied on Oct 17, 2013
Keep in mind that almost all of your I/O on a VMFS datastore will come from inside the virtual machine (which uses the block size of the OS/app inside the VM).
So the block size of a VMFS volume (which is now 1 MB, with sub-blocks) matters for the files stored directly on the datastore, like the .vmx file and log files, but doesn't really matter much for I/O inside your VM.
Good example: I have a VM with Exchange installed. All disks in the VM are stored as VMDK files on a VMFS-5 datastore. The C:\ drive/VMDK is on a datastore set with a Performance Policy of 'ESX v5' on the Nimble array (= 4 KB). The Exchange database is on a separate VMDK in a dedicated VMFS datastore for Exchange, which has a Performance Policy of 'Exchange 2010' on Nimble (= 32 KB). This setup was validated by Nimble support as well.
The reason, I believe, for ESX having a 4 KB block size on Nimble is the random/mixed nature of the VMs on those datastores. Most of the IOPS are 4 KB, and if you have specific workloads, you should try to separate them."
If the above is indeed a valid configuration, can the same methodology be applied to SQL on VMFS datastores? Digging through VMware posts and the posts on this forum, it seems the main reason for the default 4K block size in the ESX performance profile is largely down to the typical workload occurring on operating system volumes and its random I/O pattern. I'm not looking to be devil's advocate, but I would like some clarity, and I appreciate the "it depends" comments. I like guest-connected volumes, and I love the application-consistent array snapshots they give without having to rely on the vCenter snapshot synchronisation, which is not as efficient in terms of speed.
I'd love some input here, because I want to get it right before the big migration from NetApp commences. Bear in mind that we may look at Zerto or SRM in our environment, so we're kind of stuck with VMDKs and RDMs to avoid any complex scripting.
Kind Regards in advance
Mark.
03-28-2014 09:19 AM
Re: Oracle Databases
Personally, I'd go with whatever Support recommends... always!
Compression and caching settings aside (as those happen regardless, based on the policy used), the two things the block size will impact are block alignment between the hosts and the application data co-hosted on the datastore, and snapshot sizes (how much data is retained when a byte/block is changed).
There's a great blog from Wen Yu that talks about blocks at each level of the stack and dispels some of the myths around differing block sizes:
Myth Buster: 3 Top Myths in block size | super sonic dog (S.S.D)
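The snapshot-size point above can be illustrated with a toy model. Everything here is a hypothetical sketch, not array behaviour measured on Nimble: the assumption is simply that once a snapshot exists, changing even one byte causes the whole containing block to be retained, so larger blocks inflate snapshot deltas for small random writes:

```python
def snapshot_delta(changed_offsets, block_size):
    """Bytes retained in a snapshot after 1-byte writes at the given offsets,
    assuming whole-block retention on first change."""
    touched = {off // block_size for off in changed_offsets}
    return len(touched) * block_size

# 1000 scattered 1-byte writes, 8 KiB apart
writes = [i * 8192 for i in range(1000)]
print(snapshot_delta(writes, 4096))   # 4 KiB blocks  -> ~4.1 MB retained
print(snapshot_delta(writes, 32768))  # 32 KiB blocks -> ~8.2 MB retained
```

Under this model, the same scattered small writes retain twice as much snapshot data at a 32 KiB block size as at 4 KiB, which is the trade-off to weigh against a workload's dominant I/O size.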
08-04-2014 04:20 PM
Re: Oracle Databases
Commvault and Nimble are working well together, but it is really complicated because of Commvault's amazing granularity. We tested Unitrends and Arkeia, but they didn't perform well in our setup. The support from all of these third-party vendors is blown away by Nimble Support too!