
Solid State Disk - anyone have experience?

 
SOLVED
Stan PIetkiewicz_1
Occasional Advisor

Solid State Disk - anyone have experience?

Looking at buying an SSD - wondering if anyone has seen real performance improvements beyond vendor claims. We have a large SAS application group who are hitting workspace hard - very large amounts of I/O, large files, etc. Nothing left to try as a solution, so we're looking at a 100 GB unit - expensive, but if the claims of 300-500 MB/sec are correct we may go for it. Anyone seen one in action?
It is statistically possible that my opinion is the same as someone else's, but it is still my opinion.
A. Clay Stephenson
Acclaimed Contributor

Re: Solid State Disk - anyone have experience?

Here is one discussion:

http://forums1.itrc.hp.com/service/forums/questionanswer.do?admit=716493758+1103577549963+28353475&threadId=134174

Improvements of 25X are not uncommon, BUT nothing is a real substitute for fixing bad code.
If it ain't broke, I can fix that.
Tim D Fulford
Honored Contributor

Re: Solid State Disk - anyone have experience?

Hi

Texas Memory Systems RamSan .. the dog's b******s ...

We use a RamSan-320 for a high-I/O DB. A newer version, the RamSan-330, is on the market - more scalable, etc.

http://www.e-business.com/products/ramsan330.htm

The other thing you can do with the 330 (as in the article) is use it as a front-end caching system - a performance accelerator (I've not tried it, but it certainly sounds like a good idea). We use ours as a 100% storage device.

Regards

Tim

-
Stan PIetkiewicz_1
Occasional Advisor

Re: Solid State Disk - anyone have experience?

Thanks for the leads, guys .. we are looking at the RamSan as well as a Bitmicro flash SSD. Demos coming in January ... hoping to be pleasantly surprised by the performance. The price point on flash is supposed to be considerably lower than the RamSan, but there may be reliability issues. We're not really that concerned there, though ... it's all temp files in workspace - not a problem if they are lost ... just restart the processes.
It is statistically possible that my opinion is the same as someone else's, but it is still my opinion.
Zinky
Honored Contributor

Re: Solid State Disk - anyone have experience?

Stan,

Instead of an SSD solution -- why not invest in a high-end "cache-centric" array like the XP, with boatloads of cache and a "proper" I/O layout?

Most of the I/O-demanding environments I have built and managed saw impressive increases in performance simply from getting the I/O layout right -- that means having enough fibre channel connections to your arrays, and laying out your storage units to match how the server/application uses them. On EVAs, XPs, Hitachis and certain FC JBODs, my dual 2 Gb FC servers can sometimes see sustained I/O throughput close to the capacity of the links -- 180 MB/s one way or 360 MB/s both ways. On my 4- and 8-way FC-connected servers, the throughput scales accordingly.
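If you want to put rough numbers on that scaling, here is a quick sketch (Python; the ~180 MB/s usable payload per 2 Gb link is an assumption - adjust for your own gear):

# Aggregate one-way FC bandwidth: assumed per-link payload x link count.
PER_LINK_MBS = 180  # usable payload of a 2 Gb/s FC link (assumed)

for links in (2, 4, 8):
    print(f"{links} x 2Gb FC: ~{links * PER_LINK_MBS} MB/s aggregate, each way")
# -> 2 links ~360 MB/s, 4 links ~720 MB/s, 8 links ~1440 MB/s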

And if you are really starved for I/O, you can utilize what's called a "Cache LUN". The XP series, for one, has the ability to set aside LUNs that are purely cache memory. These LUNs do a great job as very fast storage and as DB accelerators.

Hakuna Matata

Favourite Toy:
AMD Athlon II X6 1090T 6-core, 16GB RAM, 12TB ZFS RAIDZ-2 Storage. Linux Centos 5.6 running KVM Hypervisor. Virtual Machines: Ubuntu, Mint, Solaris 10, Windows 7 Professional, Windows XP Pro, Windows Server 2008R2, DOS 6.22, OpenFiler
Tim D Fulford
Honored Contributor
Solution

Re: Solid State Disk - anyone have experience?

cost of a 128 GB RamSan: ~US$200,000
cost of an XP with 4 disks + 128 GB RAM: ~US$600,000

Tim
-
Hein van den Heuvel
Honored Contributor

Re: Solid State Disk - anyone have experience?


Hmmm,

SSD devices are great for random reads, to obtain high IO/sec rates.
(Near-)random writes can often be handled more economically through write-back caches (which is essentially what a RAM-based SSD is).

High MB/sec, as you seem to require, usually comes with large IOs and near-sequential access: several MB from one disk area. Many RAID controllers can achieve this with striping and read-ahead, and with those same write-back caches.

If you need 300-500 MB/sec, your first concern should probably be the connection infrastructure. With 2 Gb fibre you'll need at the very least 3 of them going concurrently. How many HBAs? Can you keep them busy (multiple users? software (LVM) striping? ...)?
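A back-of-the-envelope check on that link count (a sketch; the ~200 MB/s raw rate and ~90% payload efficiency are my assumptions):

import math

PER_LINK_MBS = 200 * 0.9  # 2 Gb/s FC: ~200 MB/s raw, ~90% usable payload (assumed)

for target_mbs in (300, 500):
    links = math.ceil(target_mbs / PER_LINK_MBS)
    print(f"{target_mbs} MB/s target -> at least {links} x 2Gb fibres")
# -> 300 MB/s needs 2 fibres; 500 MB/s needs 3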

We routinely test EVAs with 4 fibres connected at over 300 MB/sec (read).
Write speed depends heavily on the redundancy levels selected, and may be limited to 100 MB/sec for certain setups.

Please clarify further: how many IO/sec?
Read/write ratio? Average IO size? Reading large files, or random IOs?

For your entertainment: my buddies in the now-defunct AdvFS-for-HPUX project were testing their stuff on a large Superdome setup in Nashua, NH. They achieved well over 10 GB/sec to a single file. Yes, more than 10 gigabytes per second. Of course this was with the wind at their backs, in a carefully controlled setup with near-unlimited resources: dozens of fibres, hundreds of disks, many EVAs.

hth,
Hein.
Vincent Fleming
Honored Contributor

Re: Solid State Disk - anyone have experience?

Your mileage may vary - significantly.

There are several factors to keep in mind.

1. Your workload. Assuming it's a database, most workloads are rather random and small-block. Don't forget that when a storage vendor quotes MB/s numbers, it's ALWAYS large-block sequential, not small-block random. You probably won't come anywhere near 300-500 MB/s (see the sketch after this list)...

2. Your host system. Don't forget that your host system must be able to drive the device (SSD, disk array, etc.) effectively. You're going to need 4-6 FC adapters, and enough spare CPU and memory to support the data rates/IOPS you're hoping for.

3. Mr. Stephenson makes a very good point. I've also seen SSDs deployed to good effect, but there's no substitute for fixing the application. If it's beating the snot out of some temp tables, you need to look at WHY, and find a better way to make it work. There's always a better way, and it's generally MUCH cheaper to fix the application than to buy enough hardware to make the problem not matter - and buying hardware is hardly ever as effective at increasing performance as fixing the application.
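To put numbers on the workload point in (1), a minimal sketch - the IOPS and block sizes below are illustrative, not vendor figures:

# Throughput = IOPS x transfer size; the same device looks very
# different depending on the I/O profile. Illustrative numbers only.
profiles = [
    ("8 KB random reads, 5000 IOPS", 5000, 8),
    ("256 KB sequential reads, 2000 IOPS", 2000, 256),
]
for name, iops, kb in profiles:
    print(f"{name}: ~{iops * kb / 1024:.0f} MB/s")
# -> ~39 MB/s for the random profile vs ~500 MB/s for the sequential one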

Good luck!

Vince
No matter where you go, there you are.
Zinky
Honored Contributor

Re: Solid State Disk - anyone have experience?

Usually, we only consider scaling our I/O subsystem (storage and paths) IF the applications folks have proven that tuning has already been done, the application is working optimally, and the problem really is a simple resource limitation. Today's database-backed applications are now so easy to tune and optimize that sysadmins are often put on the spot and can no longer fall back on the "Fix your SQL/code!" approach.

SSD systems are merely caches and disk accelerators, which are now features of high-end disk arrays. If your high-speed disk requirements can be satisfied by cache systems (i.e., a Cache LUN), then go for a high-end array, or beef up the high-end array you have.

However, if you really intend to store everything in your SSD storage - say a 100 GB DB in a 128 GB SSD - then your I/O expectations will probably be better realized. With such a configuration, I think it is best to go raw ... and to have a very good backup system, and a good power supply as well.


Hakuna Matata

Favourite Toy:
AMD Athlon II X6 1090T 6-core, 16GB RAM, 12TB ZFS RAIDZ-2 Storage. Linux Centos 5.6 running KVM Hypervisor. Virtual Machines: Ubuntu, Mint, Solaris 10, Windows 7 Professional, Windows XP Pro, Windows Server 2008R2, DOS 6.22, OpenFiler
Stan PIetkiewicz_1
Occasional Advisor

Re: Solid State Disk - anyone have experience?

Thanks for all the info .. will be evaluating SSDs in Jan '05
It is statistically possible that my opinion is the same as someone else's, but it is still my opinion.