Sybase on Linux on VMware Performance Questions
09-24-2014 10:33 AM
Hello all,
I am a recent LeftHand SAN convert, and so far I am super happy with the move to Nimble. I have a situation that I was hoping to get some guidance on. We are currently running Sybase 15.7 on RHEL 6.1 on top of VMware ESXi 4.1. In the past with LeftHand, the best performance came from presenting the volumes to the Sybase server as RDMs and letting VMware handle the iSCSI connections. However, I have read that I will get better performance out of the Nimble SAN if I mount the volumes through iSCSI directly inside the Sybase server. So my first question is whether the Nimble world agrees that this is the best way to handle mounting the volumes.
Second, my Sybase server is set up to use a 2 KB block size, and I notice that the performance policy will only let me go down to 4 KB. I have tried using the ESX performance policy (4 KB) and found that it seems to work OK, but the transfer rates are really inconsistent and don't come close to maxing out the source SAN's capabilities. I have also tried the Oracle OLTP profile and found it painfully slow in the virtual machines. However, in my DEV environment I have a couple of physical Sybase servers running the same OS and the same version of Sybase, and when I use the Oracle profile my data transfers are rock solid and max out the source SAN's capabilities. So my second question is: in the virtual environment, what would be the best way to get maximum performance out of the volumes?
09-30-2014 05:05 AM
Hi Michael,
Great questions. My assumption is that you are already splitting the database and logs onto separate RDMs today, but for completeness I have covered the other benefits of this approach for other readers.
Question #1 - iSCSI to the guest or iSCSI to VMware
iSCSI Stack - If you are still running ESXi 4.1, then you will almost certainly get better performance by passing the iSCSI VLAN through to RHEL and mounting the devices there (see the connection sketch just after the answer below). Later versions of ESXi have a vastly improved iSCSI stack that performs very well, but in most cases the best performance comes from direct iSCSI connections.
Cache-worthy data vs. non-cache-worthy data - Make sure that the Sybase database and logs are placed on separate volumes (use a cache-disabled performance policy for the Sybase logs). This ensures we aren't wasting cache-accelerated read capacity on low-value data.
Data protection - By splitting out the volumes via iSCSI to the guest, snapshots can be created independently of VMware and can easily be cloned and mounted for recovery too.
Answer to Question #1 - iSCSI to the Guest generally offers the best performance and opportunity for tuning, while also providing excellent data protection options.
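For reference, the guest-side mount mentioned above is the standard Linux open-iscsi workflow. A minimal sketch, assuming the iscsi-initiator-utils package is installed on the RHEL guest; the discovery IP and IQN below are placeholders, so substitute your array's values:
# Discover targets via the array's iSCSI discovery IP (placeholder address)
iscsiadm -m discovery -t sendtargets -p 192.168.1.100:3260
# Log in to the discovered target (placeholder IQN)
iscsiadm -m node -T iqn.2007-11.com.nimblestorage:sybase-data-vol -p 192.168.1.100:3260 --login
# Restore the session automatically at boot
iscsiadm -m node -T iqn.2007-11.com.nimblestorage:sybase-data-vol --op update -n node.startup -v automatic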
Answer to Question #2
The second part needs a little more detail, since you have virtual machines that aren't performing as well as some of your physical servers. That isn't uncommon, given the physical servers' dedicated resources, but there are some things to look at to tighten the screws a little.
Again - if you are still running the ESXi 4.1 iSCSI stack, I wouldn't have high expectations of blinding performance. Go with the iSCSI-to-the-guest option and run some performance tests there first (a quick test sketch follows below). Your physical hosts won't be using the ESX iSCSI stack, so they won't be subject to its limitations in ESXi 4.1.
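For a quick, repeatable baseline from inside the guest, something like fio works well. A minimal sketch, assuming fio is installed (e.g. from EPEL on RHEL 6) and that /dev/sdc is a test volume; both are my assumptions, not part of the original setup:
# 60-second random-read test at a 4 KB block size; adjust --bs to match
# the volume's performance-policy block size (reads only, data is untouched)
fio --name=randread --filename=/dev/sdc --ioengine=libaio --direct=1 \
    --rw=randread --bs=4k --iodepth=32 --runtime=60 --time_based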
If you are going to stick with Native ESX iSCSI, there are some tuning parameters you can set to ensure that MPIO is utilising all paths. You can give Nimble support a call on that one... or you can run this set of commands via SSH to the ESXi host.
# ONLY FOR ESXi 4.1: set the multipathing policy to round robin for all
# Nimble devices, with IOPS per path set to 1.
for i in $(esxcli nmp device list | awk '/Nimble iSCSI Disk/{print $7}' | sed -e 's/(//' -e 's/)//'); do
    esxcli nmp device setpolicy --device $i --psp VMW_PSP_RR
    esxcli nmp roundrobin setconfig --device=$i --iops 1 --type iops
done
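To verify the change took, re-listing the devices should show VMW_PSP_RR as the path selection policy on each Nimble device; this check is my addition rather than part of the original steps, and the exact output format varies by build:
# Confirm 'Path Selection Policy: VMW_PSP_RR' on each Nimble device
esxcli nmp device list | grep -A 3 'Nimble iSCSI Disk'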
The next thing to check is block alignment (worth doing a comparison between your virtual and physical machines). This document discusses the important aspects and what you can do about it.
It is important to look at the Sybase block size, the file system block size, and the Nimble performance policy (block size, cache, compression) and make sure they all agree. In general, the best performance comes from setting all of these to the same value where possible. As you mention, the default ESX performance policy uses a 4 KB block size; Oracle OLTP uses 8 KB. Both are OK, but not optimal.
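For the file-system piece, you can check what the guest is actually using. A minimal sketch for an ext-family file system on RHEL, with /dev/sdb and /dev/sdb1 as placeholder devices:
# Report the file system block size (ext2/3/4)
tune2fs -l /dev/sdb1 | grep 'Block size'
# Check partition alignment in sectors: the start sector should divide
# evenly by 8 (8 x 512-byte sectors = 4 KB)
fdisk -lu /dev/sdb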
Give those couple of things a shot, and let us know how you go.
Cheers,
John
11-12-2014 05:01 PM
Re: Sybase on Linux on VMware Performance Questions
John,
I just realized that I never replied to thank you for your thorough response. I thought I would give you an update on how things have progressed with my setup.
iSCSI Direct from the VM vs. VMware VMFS
So this was a head scratcher for a while. I tried mounting the volumes every which way and kept getting mixed performance figures, and then I ran across a document from VMware and Sybase on running ASE in a virtual environment (http://www.vmware.com/files/pdf/SAP-Sybase-Adaptive-Server-Enterprise-on-VMware-vSphere.pdf). This turned out to be a game changer for me, as it states that running my database on a VMFS volume is an accepted best practice. The paper goes on to recommend several things that I ended up testing and then changing in my production environment, including the use of the Paravirtual SCSI adapter and putting the log and database volumes on separate adapters as well.
I also went through my VMware host iSCSI configuration per Adam Herbert's article on the importance of path change settings in VMware (Re: Importance of path change settings in VMware), which made an immediate improvement in the throughput of my whole virtual environment.
The end result is that a database dump process that used to take roughly 3 hours to complete now finishes in 15 minutes, and my users are actually complaining: they used to be able to start a report and go get a cup of coffee while it ran, but now the reports come up within a few seconds. Needless to say, coffee consumption has decreased in the office....
Thank you again for your response and your product!
Mike