Operating System - HP-UX

Increase Physical IO size

 
Fred Ruffet
Honored Contributor

Increase Physical IO size

Hi !

My server runs 11i and is connected to a SAN. To increase performance, I would like to increase the size of IOs sent to the SAN. Whatever I do, it seems to be limited to 256KB, but I don't know where. Can't it be changed ?
--

"Reality is just a point of view." (P. K. D.)
9 REPLIES
Hein van den Heuvel
Honored Contributor

Re: Increase Physical IO size


You'll need to specify more information to get useful help. For example, if the disk involved is striped with LVM using a 256KB chunk size, then clearly that will be the max you'll see on the wire.
Filesystem (which one)? Raw? LVM? ...

The SAN is... EVA? XP? EMC?

Also, in my experience you generally see a nice increase in throughput as you increase the IO size from, say, 8KB to 16KB to 64KB and more. But once you go beyond 256KB the improvements become marginal: a few percent better, if that.

Hein.

Fred Ruffet
Honored Contributor

Re: Increase Physical IO size

The LV is striped with a stripe size of 512KB. This is why I want 512KB IOs.

SAN is a STK D280.
--

"Reality is just a point of view." (P. K. D.)
Hein van den Heuvel
Honored Contributor

Re: Increase Physical IO size



How are you ensuring that the initial IO starts at a chunk boundary? If you start anywhere but on an exact 512KB boundary, then the system will always need some data from one disk and some more from the next to build the whole buffer, averaging out to 256KB IOs.

Now if you request 1024KB (or more), does the average creep up to over 300KB/IO? Such a 1MB IO might be executed as 128KB from the first drive, 512KB from the next, and 384KB from the last, for an average of 1024/3, or roughly 341KB.
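The split described above can be sketched with a little shell arithmetic (a generic sketch; the 384KB starting offset is just an assumed misalignment, not a value from this thread):

```shell
# Split a large read across 512KB LVM stripe chunks to see the
# per-disk transfer sizes. All sizes in KB; POSIX shell arithmetic.
chunk=512       # LVM stripe (chunk) size
start=384       # assumed misaligned starting offset
size=1024       # requested IO size
remaining=$size; offset=$start; ios=""
while [ "$remaining" -gt 0 ]; do
  room=$(( chunk - offset % chunk ))   # KB left in the current chunk
  if [ "$remaining" -lt "$room" ]; then io=$remaining; else io=$room; fi
  ios="$ios $io"
  offset=$(( offset + io )); remaining=$(( remaining - io ))
done
echo "per-disk IOs (KB):$ios"   # -> 128 512 384, averaging ~341KB
```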

Hein.
Fred Ruffet
Honored Contributor

Re: Increase Physical IO size

It's probably true that the first IO doesn't start on a 512KB boundary. So if I read 512KB, I get the end of one block and the start of the next, and I end up with twice as many physical IOs as logical ones. But if I read bigger blocks, this ratio should decrease. I tried making 8192KB IOs, but I kept seeing those 256KB physical reads...
--

"Reality is just a point of view." (P. K. D.)
Bill Hassell
Honored Contributor

Re: Increase Physical IO size

Physical IOs are a very complex topic. There is a long discussion of how IOs are blocked in the kernel as part of the HP-UX internals course, but generally speaking, there is no way to tune this. Changing how the kernel handles IOs doesn't really improve things. For instance, if you run vi to edit the /etc/issue file, the file is usually a few hundred bytes, so forcing a 256KB read would be a waste. Similarly with a database: the record size is defined by the database, and the IO size requested by the database manager is very dependent on the query or update being done.

With true serial reads or writes, a larger blocksize could be useful, but HP-UX will concatenate adjacent blocks only up to the 256KB size. Serial reads in a database usually indicate the need for another index.


Bill Hassell, sysadmin
Fred Ruffet
Honored Contributor

Re: Increase Physical IO size

The server is dedicated to data warehouse Oracle databases. Reads and writes are big and sequential, so I really need to increase the IO size (for every one vi on /etc/issue, I make thousands of reads on my terabyte-sized databases). I'm already bypassing the HP-UX cache and making direct IOs, but those IOs don't seem to be sizeable beyond 256KB.
--

"Reality is just a point of view." (P. K. D.)
Hein van den Heuvel
Honored Contributor

Re: Increase Physical IO size

Are you sure this observed 256KB is 'a bad thing'?
It might just be helping! With the right infrastructure, a single 1MB read will finish slower than 4 (or 5) concurrent 256KB IOs filling the target buffer in parallel: more cables, more controllers, more spindles to help out!

You still have not mentioned the file system (or I overlooked it).
For your test, are you using dd + raw?
Oracle will use DIRECT_IO, unbuffered.
Is your test using that (mount option 'direct'), or could it be buffering?

Check out: vxtunefs
Check out kernel param: scsi_maxphys - maximum allowed length of an I/O on all SCSI devices (default 1048576)
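A test along the lines suggested above might look like the following sketch. On the real system you would read the raw LV device (e.g. /dev/vg01/rlvol1, a hypothetical name) and watch the resulting IO sizes with your monitoring tool; /dev/zero is used here only so the sketch runs anywhere:

```shell
# Time sequential reads at increasing block sizes. Substitute the
# raw LV device for /dev/zero on a real system (hypothetical name).
tested=""
for bs in 64k 256k 1024k; do
  dd if=/dev/zero of=/dev/null bs="$bs" count=64 2>/dev/null
  tested="$tested $bs"
done
echo "tested block sizes:$tested"
```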

Search the hpux forum for 'io' in the subject:

http://forums1.itrc.hp.com/service/forums/questionanswer.do?threadId=440871
http://forums1.itrc.hp.com/service/forums/questionanswer.do?threadId=372950

Hein.
Bill Hassell
Honored Contributor

Re: Increase Physical IO size

And to amplify about bypassing the buffer cache, make sure your Oracle data volumes are mounted with: convosync=direct,mincache=direct,nodatainlog
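As an illustration, an /etc/fstab entry along these lines (the device and mount-point names are hypothetical, only the options come from the advice above):

```
# Hypothetical VxFS entry for an Oracle data volume, bypassing the
# buffer cache with the options described above:
/dev/vg01/lvol1 /oradata vxfs rw,convosync=direct,mincache=direct,nodatainlog 0 2
```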


Bill Hassell, sysadmin
Fred Ruffet
Honored Contributor

Re: Increase Physical IO size

More information:

. My test consists of a dd on the raw LV. The FS is VxFS, but I don't use it for the test. I get approximately the same results when using the FS (access to datafiles) without using vxtunefs (the FS is mounted with log,nodatainlog,largefiles,mincache=direct,convosync=direct).

. The SAN block size is 64KB. Each LUN is a RAID 5 over 9 disks. My VG is striped over 11 of these LUNs. I wanted an elementary IO of 64KB*8 (one disk block read times the number of data disks) striped over the 11 PVs. Doesn't that sound good?
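The arithmetic in that last point can be sketched as follows (a generic check of the numbers given above):

```shell
# Check the geometry described above (all sizes in KB).
san_block=64                          # SAN block size
disks_per_lun=9                       # RAID 5 set: 9 disks per LUN
data_disks=$(( disks_per_lun - 1 ))   # one disk's worth is parity
full_stripe=$(( san_block * data_disks ))   # full stripe per LUN
luns=11
vg_stripe=$(( full_stripe * luns ))   # one full pass over all PVs
echo "per-LUN full stripe: ${full_stripe}KB; across the VG: ${vg_stripe}KB"
```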
--

"Reality is just a point of view." (P. K. D.)