07-10-2004 12:54 PM
redo writes and lv_write_rate
At the same time, the Oracle statistic for bytes of redo produced per second (statspack redo_size + redo_wastage) matches the number of bytes per second written to the redo log logical volume as reported by the HP Glance advisor (lv_write_byte_rate): ~6.5 MB per second.
Since a single write would be small (6.5 MB / 500 ≈ 13 KB), it does not look like we'd be hitting max_phys_io or anything like that.
Redo and everything else is striped across two EVA 5000s, each configured with 16 LUNs, with a stripe size of 1 MB.
I think I remember something about the EVA mirroring all writes, but that probably would not be visible at the logical volume level.
Does anybody have any idea why lv_write_rate is about four times the LGWR redo write rate in this scenario?
Thank you.
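The arithmetic in the question can be sketched as follows; the ~6.5 MB/s redo rate and ~500 LGWR writes/s are the figures assumed from the post, not measured values:

```shell
# Sanity check of the figures in the question (assumed, not measured).
redo_bytes_per_sec=$((65 * 1024 * 1024 / 10))   # ~6.5 MB/s of redo
lgwr_writes_per_sec=500                          # ~500 LGWR writes/s

# Average size of one LGWR write: ~13 KB, well under typical IO limits.
avg_write=$((redo_bytes_per_sec / lgwr_writes_per_sec))
echo "avg LGWR write: ${avg_write} bytes"

# If lv_write_rate is ~4x the LGWR write count, the implied
# per-IO size at the LV layer would be a quarter of that.
echo "implied per-IO size at LV level: $((avg_write / 4)) bytes"
```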
07-11-2004 06:57 AM
Re: redo writes and lv_write_rate
When you say everything is striped across 2 EVAs, I assume it is LVM striping. If so, how is each LV striped? An 'lvdisplay /dev/vgxx/lvolx' will show you the number of stripes that LV is using. I believe your LVs are striped across 4 LUNs, hence each IO generated by Oracle is getting split into 4 pieces.
Though striping looks like IO overhead on the system, the responses come back from the four LUNs almost simultaneously (and each request takes less time to process). So LVM striping can offer better performance if done well.
-Sri
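The check Sri suggests could look like this; the LV name and the sample output below are invented for illustration (on the real system you would pipe the actual 'lvdisplay' output instead of the here-string):

```shell
# Hypothetical stripe-count check: parse sample 'lvdisplay' output
# (figures below are made up; substitute the real command's output).
lvdisplay_output='--- Logical volumes ---
LV Name                     /dev/vg02/redo01
LV Size (Mbytes)            256
Stripes                     16
Stripe Size (Kbytes)        1024'

# Extract the stripe count from the "Stripes" line.
stripes=$(printf '%s\n' "$lvdisplay_output" | awk '/^Stripes/ {print $2}')
echo "LV is striped across ${stripes} PVs"
```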
07-11-2004 07:00 AM
Re: redo writes and lv_write_rate
Is the DB in archive log mode?
If so, is the archive destination on the same FS as the redo logs?
If they are indeed on the same FS, then after the DB finishes writing a redo log file it will create an archive copy, which causes another write to the FS.
I am not a DBA, but I do know that in my organization the redo logs are saved twice, on different FSs. Are you saving another copy of each redo log file? Is it on another volume?
BTW, I have noticed that the I/O statistics from sar are much more accurate than those from Glance, but they report at the raw device level rather than the FS level. If you are measuring the same device, you can use "sar -d X Y", where X is the interval in seconds and Y is the number of samples (just in case you don't know it :-)).
I hope it helps,
Oved
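Deriving the average IO size from the 'sar -d' fields Oved mentions could be sketched like this; the device figures are invented, and the arithmetic assumes sar's blks/s counts 512-byte blocks:

```shell
# Hypothetical: average IO size from 'sar -d' output fields
# (figures are assumed; on HP-UX, sar -d reports r+w/s and blks/s,
# where a block is 512 bytes).
rw_per_sec=2000      # assumed r+w/s for the device
blks_per_sec=52000   # assumed blks/s (512-byte blocks)

avg_io_bytes=$((blks_per_sec * 512 / rw_per_sec))
echo "average IO size: ${avg_io_bytes} bytes"   # ~13 KB with these figures
```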
07-14-2004 05:21 AM
Re: redo writes and lv_write_rate
For example:
"lvcreate -n redo01 -i 16 -I 1024 /dev/vg02"
"lvextend -L 256 /dev/vg02/redo01 \
/dev/dsk/c63t0d1 /dev/dsk/c70t0d1 \
/dev/dsk/c63t0d2 /dev/dsk/c70t0d2 \
/dev/dsk/c63t0d3 /dev/dsk/c70t0d3 \
/dev/dsk/c63t0d4 /dev/dsk/c70t0d4 \
/dev/dsk/c63t0d5 /dev/dsk/c70t0d5 \
/dev/dsk/c63t0d6 /dev/dsk/c70t0d6 \
/dev/dsk/c63t0d7 /dev/dsk/c70t0d7 \
/dev/dsk/c63t1d0 /dev/dsk/c70t1d0"
Thank you both for the replies, but they do not seem to help:
Sri's assumption about 4 LUNs as stripe members is incorrect. Also, since each write is only 13 KB and the stripe size is 1 MB, why would LVM write to all stripe members?
And since we use the "stripe and mirror everything" methodology, I cannot use sar to measure the write size for redo alone, because each LUN contains redo as well as the datafiles.
Please:
There must be someone in this forum with enough knowledge of LVM to have at least a theory that fits this scenario?
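The poster's point about stripe members can be illustrated with a small sketch; the stripe geometry comes from the lvcreate example above, while the 13 KB write size and the example offset are assumptions:

```shell
# Which stripe members does a small write touch? With a 1 MB stripe
# size, a ~13 KB write fits within one member unless it happens to
# straddle a 1 MB boundary (geometry from the lvcreate example;
# write size and offset are assumed).
stripe_size=$((1024 * 1024))               # 1 MB stripe size (-I 1024)
stripes=16                                 # 16 stripe members (-i 16)
write_size=$((13 * 1024))                  # ~13 KB per LGWR write
offset=$((5 * 1024 * 1024 + 200 * 1024))   # arbitrary example offset

first_member=$(( (offset / stripe_size) % stripes ))
last_member=$(( ((offset + write_size - 1) / stripe_size) % stripes ))
echo "write touches stripe members ${first_member}..${last_member}"
```

With these numbers the write lands entirely on one member, so striping alone should not multiply the LV write count by four.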
07-14-2004 08:27 AM
Re: redo writes and lv_write_rate
I misread your message, thinking that the striping was on the EVA side with a 1 MB stripe size, hence my general assumption.
I know a bit about LVM but very little about the database. How LVM handles the requests is pretty straightforward, but I wonder if Oracle itself is further splitting each write request. I would check db_block_size and see if it is playing any part in this.
-Sri
07-14-2004 09:57 AM
Re: redo writes and lv_write_rate
Interesting question. Good initial analysis.
You are sure you are using the raw device, right? That would be /dev/vg02/rredo01 in Oracle.
Does iostat match Glance for the IO rates?
I guess that would be hard to correlate as it is all spread out, huh? Maybe verify at the aggregate level?
Are you using SecurePath? I seem to recall some trouble with Glance and EVAs due to multiple alternate SCSI routes (often 4 per HBA per unit).
I like SAME (Stripe And Mirror Everything) in general, but I feel it is NOT appropriate for redo. There is no benefit in your case, only cost.
Any single 'disk' can do 10+ MB/sec, especially if that 'disk' is behind a write-back cache and spread over a group of real disks.
I would just carve that 250 MB from a single PV. Why maximize head movement for what is a sequential write workload? (I would also use a small (8-member) disk group in the EVA for redo (and the like) to further minimize the entropy.)
Also... just 250 MB per redo log? At 6.5 MB/sec it will fill up in less than a minute?! Do you need the short logs for log shipping? If not, why accept those frequent log switches and the implied checkpoints? Why not create 1 GB or 5 GB log files and have minutes rather than seconds between checkpoints? Much easier on the system! (It often drastically reduces undo writes.)
fwiw,
Hein.
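Hein's fill-time arithmetic can be checked directly; the 250 MB log size and ~6.5 MB/s redo rate come from the thread, while the larger log sizes are his suggested alternatives:

```shell
# Log fill time at the thread's ~6.5 MB/s redo rate, for the current
# 250 MB logs and the suggested 1 GB / 5 GB alternatives.
redo_rate_kb=6656                # ~6.5 MB/s, in KB/s

for log_mb in 250 1024 5120; do
    secs=$((log_mb * 1024 / redo_rate_kb))
    echo "${log_mb} MB log fills in ~${secs} s"
done
```

The 250 MB case works out to roughly 38 seconds per log switch, which matches the "less than a minute" observation.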