05-22-2010 12:41 PM
EVA: no read-ahead caching using 256k blocksize?
I have an EVA4400 with 96 x 1 TB FATA drives.
During some tests (VTL) I noticed something strange:
whenever I use a 256k blocksize for sequential operations (write xx GB, then read it back), I never see any read-ahead caching activity in EVAperf (only read misses).
Doing exactly the same thing with a 128k or 64k blocksize results in 100% read-hit operations and of course much higher throughput.
Is this a known issue/bug/feature?
I wanted to use the biggest possible blocksize to transfer as much data as possible per I/O and get high MB/s (copying from disk to tape), but getting only read-miss I/O at the large blocksize means poor MB/s :-(
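For anyone wanting to reproduce this pattern from a Linux host, a small-scale sketch of the write-then-read test with plain `dd` (file path and sizes here are placeholders; the real test wrote tens of GB, and against a real LUN you would add `oflag=direct`/`iflag=direct` so the host page cache doesn't mask the array's behaviour):

```shell
# Hypothetical miniature of the write-then-read verify test.
TESTFILE=/tmp/seqtest.bin
BS=256k          # the blocksize under suspicion; also try 128k and 64k

# Write phase: one purely sequential stream.
dd if=/dev/zero of="$TESTFILE" bs=$BS count=16 2>/dev/null

# Read phase: read the same data back sequentially.
dd if="$TESTFILE" of=/dev/null bs=$BS 2>/dev/null

stat -c %s "$TESTFILE"   # 16 x 256 KiB = 4194304 bytes
```

Run once per blocksize while watching EVAperf to compare read-hit vs. read-miss counters.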
05-24-2010 03:49 AM
Re: EVA: no read-ahead caching using 256k blocksize?
If the EVA sees too many cache misses, it disables the read cache internally. Changing the blocksize may be triggering this behaviour.
Some tips:
- If you are reading big blocks, RAID5 is faster than RAID10.
- If you are using VMware or Linux, you can experiment with the I/O scheduler so the stream arrives more sequentially than randomly (search for "io scheduler vmware", or for noop, deadline, anticipatory, and cfq).
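For the Linux scheduler tip, a minimal sketch (the device name `sda` is a placeholder for your EVA LUN or multipath device; switching schedulers requires root):

```shell
DEV=sda                                    # placeholder: your EVA LUN device
cat /sys/block/$DEV/queue/scheduler        # active scheduler shown in [brackets]
# echo noop > /sys/block/$DEV/queue/scheduler      # switch to noop (as root)
# echo deadline > /sys/block/$DEV/queue/scheduler  # or deadline
```

noop and deadline avoid cfq's per-process reordering, so a single sequential stream reaches the array in order.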
05-24-2010 04:03 AM
Re: EVA: no read-ahead caching using 256k blocksize?
It's a single-stream operation: one host (the VTL, verifying a virtual tape drive) writes one stream of data purely sequentially to the EVA, then reads that stream back to verify it. So there is no further "optimization" potential on the host side.
I know the EVA stops reading ahead if it can't "detect" sequential I/O. But why can't it detect any sequential I/O at a 256k blocksize, yet detects 100% sequential at 128k or 64k? Everything else in the test stays the same, and by its nature the test is pure sequential I/O.
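One speculative explanation (not confirmed EVA internals): at a larger blocksize the controller sees far fewer requests per amount of data, so its sequential-detection logic gets fewer consecutive samples to work with. The request counts themselves are simple arithmetic:

```shell
# Requests the array sees per 1 MiB of sequential data at each blocksize.
for BS in 64 128 256; do
  echo "${BS}k: $((1024 / BS)) IOs per MiB"
done
```

At 256k the array gets only a quarter of the "evidence" per MiB that it gets at 64k, which might keep it below whatever detection threshold the firmware uses.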
05-24-2010 04:21 AM
Re: EVA: no read-ahead caching using 256k blocksize?
If the host is Linux, try changing the scheduler from cfq to noop or deadline, which *might* help a bit (the stream will then surely be completely sequential, since other processes can't interleave requests into it), even though it is only a single host.
I know there were plans to change the caching mechanism (the part that disables the read cache) in future firmware, but that is of course something for the future.
By the way, which tool do you use to analyze the EVAperf files?
05-24-2010 05:34 AM
Re: EVA: no read-ahead caching using 256k blocksize?
EVAperf: for this simple test it's just my eyes. I have `evaperf vdg -cont 2` running, then start the VTL test through the backup application and watch the read-miss I/O and read-hit I/O. The numbers (IO/s, MB/s) are cross-checked against the performance output of the VTL software (FalconStor) and match.
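If eyeballing the `evaperf vdg -cont 2` samples gets tedious, the counters can be summed with a one-liner. The two-column "read-hit read-miss" log format below is hypothetical; adapt the field numbers to the actual evaperf columns you export:

```shell
# Hypothetical per-sample log, one line per interval: "<read-hit IOs> <read-miss IOs>"
awk '{hit += $1; miss += $2}
     END {printf "read hit %%: %.1f\n", 100 * hit / (hit + miss)}' <<'EOF'
1200 30
1100 25
EOF
```

In a healthy sequential run you'd expect that percentage near 100; in the 256k case described here it would sit near 0.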
05-25-2010 12:19 AM
Re: EVA: no read-ahead caching using 256k blocksize?
05-25-2010 04:57 AM
Re: EVA: no read-ahead caching using 256k blocksize?
Well, I see values far below those IOPS/MB figures. I assume the reason is the single stream (the more tapes I validate at the same time, the higher the total throughput goes; I've often read that SAN arrays do not perform best for a single host with a single stream).
But it's still strange to me that I never see read hits with 256k. Your answers seem to indicate this is by design.
And because I see better throughput with read-ahead caching than straight from the spindles (even though they should be capable of delivering more), I assume I have to change my settings to never go above a 128k blocksize.
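The penalty follows directly from MB/s = IOPS x blocksize. With illustrative figures only (not measurements from this array), in the ballpark of the numbers mentioned in this thread:

```shell
awk 'BEGIN {
  # Illustrative only: cached 128k reads vs uncached 256k reads.
  printf "128k @ 800 IOPS (read hit):  %.0f MB/s\n", 800 * 128 / 1024
  printf "256k @ 140 IOPS (read miss): %.0f MB/s\n", 140 * 256 / 1024
}'
```

So even though each 256k I/O moves twice the data, losing read-ahead more than cancels the gain.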
I have similar issues with an EVA4000 with 32 FATA spindles, from which I read database dumps (backup). Those are also highly sequential (if not purely so), yet I never get more than 35 MByte/s single-stream read. When caching sometimes kicks in (I never found out why it sometimes works and sometimes not; it's the identical file being read each time), throughput skyrockets to 90-200 MB/s.
So, as Eric wrote: the EVA's read-ahead caching doesn't seem to work really well (when it DOES work it's super fast, but often it doesn't work at all).
Does anybody know a roadmap for if/when this will be improved?
05-25-2010 05:21 AM
Re: EVA: no read-ahead caching using 256k blocksize?
See the attached picture.
05-25-2010 05:48 AM
Re: EVA: no read-ahead caching using 256k blocksize?
I checked the HBA settings (QLogic): on the VTL the Execution Throttle is set to 255 per HBA, and on the Windows media agents the HBAs toward the VTL are even at 65535, so that's not the issue.
Well, I think I have to accept that the EVA doesn't like a 256k blocksize.