06-22-2009 04:33 AM
Using dd
I've read some of the forum topics regarding this subject but would appreciate some guidance on the subject.
I am investigating copying Oracle .dbf files from one filesystem to another using 'dd' for example:
dd if=/fs1/file1.dbf of=/fs2/file1.dbf &
dd if=/fs1/file2.dbf of=/fs2/file2.dbf &
dd if=/fs1/file3.dbf of=/fs2/file3.dbf &
I have not included the 'bs=' option above, as I am currently testing different sizes from 128k to 4096k (the trade-off being system time).
The reason for this approach is:
a) The files are greater than 2GB in size
b) It appears to be relatively fast.
The Database is offline and as such the files are static.
What I am asking is:
A) Is this likely the best method?
B) Can anyone highlight any potential issues with this approach?
Thanks in anticipation.
Paul
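The block-size testing described above can be sketched as a small benchmark loop. This is a hedged illustration only: the paths and the 64 MB sample size are placeholders, not the original /fs1 datafiles, and absolute timings will depend entirely on your system.

```shell
# Rough benchmark sketch: time dd over a range of block sizes on a
# generated sample file (placeholder for the real .dbf files).
set -e
src=$(mktemp); dst=$(mktemp)
dd if=/dev/zero of="$src" bs=1024k count=64 2>/dev/null   # 64 MB sample
for bs in 128k 512k 1024k 4096k; do
    t0=$(date +%s)
    dd if="$src" of="$dst" bs="$bs" 2>/dev/null
    t1=$(date +%s)
    echo "bs=$bs: $((t1 - t0))s"
done
verified=no
cmp -s "$src" "$dst" && verified=yes || true   # byte-for-byte check
rm -f "$src" "$dst"
```

Timing each size against the same source file makes the throughput/system-time trade-off mentioned above directly comparable.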
- Mark as New
- Bookmark
- Subscribe
- Mute
- Subscribe to RSS Feed
- Permalink
- Report Inappropriate Content
06-22-2009 04:43 AM
06-22-2009 04:43 AM
Re: Using dd
a) Probably not the best method. A plain OS copy will work very well with the database offline.
b) Potential issues are database corruption and high I/O, though the method should pass rigorous testing.
SEP
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
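The plain OS copy suggested above can be as simple as the sketch below. The paths are placeholders, not the real /fs1 datafiles; with the database offline and the files static, a straight cp produces an exact copy that can be checked byte-for-byte.

```shell
# Minimal sketch of the OS-copy alternative (placeholder sample files
# stand in for the real .dbf paths).
set -e
src=$(mktemp); dst=$(mktemp)
printf 'sample datafile contents\n' > "$src"
cp "$src" "$dst"
copied=no
cmp -s "$src" "$dst" && copied=yes || true   # verify the copy is exact
rm -f "$src" "$dst"
```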
06-22-2009 04:47 AM
Re: Using dd
I think some simple timings with and without 'bs' will convince you that blocking reduces the overall copy time.
The file size has nothing to do with using or not using 'bs'. Using 'bs' means that no in-core buffer-to-buffer copy occurs, which speeds things up even more.
Regards!
...JRF...
06-22-2009 04:52 AM
Re: Using dd
You did not ask this in your question, I know, but my suggestion is to use the vxdump|vxrestore commands to copy huge chunks of data from one place to another on the same system. In our SAN migration, we found this to be the fastest way to replicate data from a filesystem on the old storage array to another filesystem on the new array.
Hope this helps.
UNIX because I majored in cryptology...
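The vxdump|vxrestore pairing above works at filesystem level. A hedged sketch of the pattern is shown as a comment (the mount points are hypothetical and exact flags vary, so verify against vxdump(1M)/vxrestore(1M) on your system); the same dump-and-restore pipe idea is then demonstrated with portable tar so it runs anywhere.

```shell
# Filesystem-level copy pattern described above (hypothetical mount
# points; check vxdump(1M)/vxrestore(1M) for the flags on your release):
#   vxdump -0 -f - /fs1 | (cd /fs2 && vxrestore -rf -)
# The same pipe-to-restore idea, shown with portable tar:
set -e
src=$(mktemp -d); dst=$(mktemp -d)
echo 'datafile' > "$src/file1.dbf"
(cd "$src" && tar cf - .) | (cd "$dst" && tar xf -)
piped=no
cmp -s "$src/file1.dbf" "$dst/file1.dbf" && piped=yes || true
rm -rf "$src" "$dst"
```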
06-22-2009 05:31 AM
Re: Using dd
Many thanks for the information thus far:
I'll investigate the database-corruption risk, though since the files are static I'm hoping that won't be an issue.
The best 'bs=' size at the moment is looking like 512k; when timed, it gives the best throughput for the amount of system time utilised.
I tried vxdump|vxrestore, but it doesn't appear to work at file level (unless I'm missing something). The information was useful for filesystems, though, so thanks for that.
Paul
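Since the files are static, the corruption concern raised in this thread can be checked directly: checksum the source and destination after the dd copy. This is a sketch with generated sample files, not the real .dbf paths.

```shell
# Verify a dd copy of a static file by comparing checksums
# (sample files stand in for the real .dbf paths).
set -e
src=$(mktemp); dst=$(mktemp)
printf 'oracle datafile block\n' > "$src"
dd if="$src" of="$dst" bs=512k 2>/dev/null
a=$(cksum < "$src"); b=$(cksum < "$dst")
match=no
[ "$a" = "$b" ] && match=yes || true
rm -f "$src" "$dst"
```

Matching checksums on every file after the copy gives a concrete pass/fail signal before bringing the database back up against the new filesystem.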