HPE EVA Storage
S. Boetticher
Regular Advisor

EVA: no read-ahead caching using 256k blocksize?

Hello everyone,

I have an EVA4400 with 96x1TB FATAs.

During some tests (VTL) I noticed a strange thing:
whenever I use a 256k block size for sequential operations (write xx GB, then read it back), I never see any read-ahead caching activity in EVAperf (only read misses).
Doing exactly the same test, only with a 128k or 64k block size, results in 100% read-hit operations and, of course, much higher throughput...

Is this a known issue/bug/feature?

I wanted to use the biggest block size to transfer as much data as possible per I/O and get high MB/s values (copying from disk to tape), but getting only read-miss I/O with the large block size results in poor MB/s :-(

8 REPLIES
Eric Delaet
Occasional Advisor

Re: EVA: no read-ahead caching using 256k blocksize?

Hi,

If the EVA sees too many cache misses, it disables the read cache internally. Maybe by changing the block size you are triggering this behaviour.

Some tips:

- If you are reading big blocks, Raid5 is faster than Raid10.
- If you are using VMware or Linux, you can try to play with the I/O scheduler so that the stream stays sequential rather than random (search for "io scheduler vmware" or for noop, deadline, anticipatory and cfq on Google); see the sketch after this list.
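
On a Linux host, checking and switching the scheduler is just a sysfs write. A minimal sketch, assuming root privileges and an older kernel that offers the noop/deadline/cfq schedulers; the device name is a placeholder:

#!/usr/bin/env python3
"""Minimal sketch: inspect and switch the Linux I/O scheduler for a
block device via sysfs. Assumes a Linux host, root privileges, and a
kernel offering noop/deadline/cfq; 'sda' is a placeholder device."""

DEVICE = "sda"  # hypothetical -- use the device backing your LUN
SYSFS = f"/sys/block/{DEVICE}/queue/scheduler"

# The active scheduler is shown in brackets, e.g. "noop deadline [cfq]".
with open(SYSFS) as f:
    print("before:", f.read().strip())

# noop passes requests through in arrival order, so a single
# sequential stream stays sequential on the wire.
with open(SYSFS, "w") as f:
    f.write("noop")

with open(SYSFS) as f:
    print("after:", f.read().strip())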
S. Boetticher
Regular Advisor

Re: EVA: no read-ahead caching using 256k blocksize?

It's VRAID5.
It's a single-stream operation: one host (the VTL, verifying a virtual tape drive) writes one stream of data purely sequentially to the EVA, then reads that stream back (to verify it). So there is no further "optimization" potential on the host side.

I know that the EVA stops reading ahead if it can't "detect" sequential I/O. But why can't it detect any sequential I/O with a 256k block size, while it detects 100% sequential with 128k or 64k? The rest of the test stays the same, and it's the nature of this test that it is purely sequential I/O.
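
For illustration, the access pattern is essentially this (a rough Python sketch, not the actual VTL code; the path, block size and total size are placeholders):

#!/usr/bin/env python3
"""Rough sketch of the test's I/O pattern: write xx GB in one
sequential stream, then read it back with the same block size.
Path and sizes are placeholders, not the real VTL workload."""
import os

PATH = "/mnt/eva_vdisk/testfile"  # hypothetical path on the EVA Vdisk
BLOCK_SIZE = 256 * 1024           # 256k: read misses; 128k/64k: read hits
TOTAL_BYTES = 4 * 1024**3         # 4 GB stream

buf = os.urandom(BLOCK_SIZE)

# Phase 1: one purely sequential write stream.
with open(PATH, "wb") as f:
    for _ in range(TOTAL_BYTES // BLOCK_SIZE):
        f.write(buf)
    f.flush()
    os.fsync(f.fileno())

# Phase 2: sequential read-back (the verify pass). In a real test the
# host page cache must be bypassed (e.g. O_DIRECT / raw device) so the
# reads actually reach the array.
with open(PATH, "rb") as f:
    while f.read(BLOCK_SIZE):
        pass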
Eric Delaet
Occasional Advisor

Re: EVA: no read-ahead caching using 256k blocksize?

That's a good question. The cache on the EVA (I have the 4400, but the 8400 uses the same algorithms) works very badly for me as well :-(

If the host is Linux, try changing the I/O scheduler from cfq to noop or deadline, which *might* help a bit (the stream will then surely be completely sequential, since other processes won't be able to interleave requests into it), even if it is only a single host.

I know there were plans to change the caching mechanism (the part that disables the read cache) in future firmware, but that is of course something for the future.

Btw, which tool do you use to analyze the EVAperf files?
S. Boetticher
Regular Advisor

Re: EVA: no read-ahead caching using 256k blocksize?

As I wrote, it's purely sequential by design of this test (only one VTL host is connected to the EVA; the test verifies one virtual drive by writing xx GB in one stream (that's the ONLY I/O occurring) and then reading it back). And it works perfectly for block sizes below 256k.

EVAperf: for this simple test it's just my eyes: I have "evaperf vdg -cont 2" running, then start the VTL test operation through the backup application and watch the read-miss and read-hit I/O counters. The numbers (IO/s, MB/s) are cross-checked against the performance output of the VTL software (FalconStor) and match it.
Víctor Cespón
Honored Contributor

Re: EVA: no read-ahead caching using 256k blocksize?

The EVA firmware is optimized to deal with blocks of 128 KB or less. With 96 1 TB disks you can get about 7600 IOPS, even when doing random reads. If each read is 128 KB, that's almost 1 GB/s. Not that you're going to reach that on an EVA4400; its limit is a little lower.
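
The arithmetic behind that estimate, as a quick check (the 7600-IOPS figure works out to roughly 79 IOPS per spindle across the 96 disks):

# Quick check of the figures quoted above.
iops = 7600               # ~79 IOPS per spindle across 96 FATA disks
block = 128 * 1024        # 128 KB per read
print(f"{iops * block / 1e9:.2f} GB/s")  # -> 1.00 GB/s, "almost 1 GB/s"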
S. Boetticher
Regular Advisor

Re: EVA: no read-ahead caching using 256k blocksize?

Hi vcespon,
Well, I see values way below those IOPS/MB/s figures. I assume the reason is the single stream (the more tapes I validate at the same time, the higher the total throughput goes; I've often read that SAN arrays do not perform best for a single host with a single stream).

But to me it's strange that I never see read hits with 256k. Your answer seems to indicate that this is by design.

And because I see better throughput with read-ahead caching than from the spindles (even if they should be capable of delivering more), I assume I'll have to change my settings to never go above a 128k block size.

I have had similar issues with an EVA4000 with 32 FATA spindles, from which I read database dumps (backup). Those are also highly sequential (if not purely so), yet I never get more than 35 MByte/s on a single-stream read. When caching sometimes kicks in (I never found out why it sometimes works and sometimes doesn't; it's the identical file being read each time), the throughput skyrockets to 90-200 MB/s.

So, as Eric wrote: the read-ahead caching of the EVA doesn't seem to work really well (IF it works, it's super fast, but often it doesn't work at all).

Does anybody know of a roadmap for if/when this will be improved?
Víctor Cespón
Honored Contributor

Re: EVA: no read-ahead caching using 256k blocksize?

You may be limited by the HBA maximum queue depth. The EVA does not process more I/Os because the HBA does not request them.

See the attached picture.
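
On a Linux host, the effective per-device queue depth can be read straight from sysfs; a small sketch (standard sysfs location for SCSI disks; on Windows, check the HBA's Execution Throttle setting instead):

#!/usr/bin/env python3
"""Sketch: list the effective SCSI queue depth per block device on a
Linux host (standard sysfs location for SCSI disks)."""
import glob

for path in sorted(glob.glob("/sys/block/sd*/device/queue_depth")):
    dev = path.split("/")[3]  # e.g. 'sda'
    with open(path) as f:
        print(dev, f.read().strip())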

S. Boetticher
Regular Advisor

Re: EVA: no read-ahead caching using 256k blocksize?

Good point.
I checked the HBA settings (QLogic): on the VTL the Execution Throttle is set to 255 per HBA, and on the Windows media agents the HBAs towards the VTL are even set to 65535...
So that's not the issue.

Well, I think I have to accept the fact that the EVA doesn't like a 256k block size.