problems disk io and testing internal vs san
05-04-2004 03:49 AM
time dd if=/dev/rdsk/cxtxdx of=/dev/null bs=1024 count=500 (used count=5000 for the internal disk)
yields:
SAN (Xiotech): 1.5 to 2.64 seconds (500 KB read)
Internal Seagate: .5 to .6 seconds (5000 KB read)
This doesn't seem right: the 2 Gb FC SAN is reading at 189-319 KB/s while the internal SCSI disk is reading at about 8333 KB/s.
Is the SAN working as expected, and what can I check to validate or test further what is going on?
Regards, Doug
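Note that dd transfers bs * count bytes, so these runs moved 500 KB and 5000 KB (not MB); the throughput figures are therefore in KB/s. A quick check of the arithmetic from the quoted times:

```shell
# dd moves bs*count bytes: bs=1024 * count=500  = 512000 bytes  (500 KB)
#                          bs=1024 * count=5000 = 5120000 bytes (5000 KB)
awk 'BEGIN {
  kb = 512000 / 1024                              # 500 KB on the SAN run
  printf "SAN: %.0f to %.0f KB/s\n", kb / 2.64, kb / 1.5
  printf "internal: %.0f KB/s\n", (5120000 / 1024) / 0.6
}'
```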
05-04-2004 05:10 AM
Solution
Your block size for this test is unrealistically small. Well, unless the targeted application is indeed doing 1 KB I/O.
Currently you are really measuring I/Os per second, not MB/s. Do the same experiment with bs=1024k.
Granted, the SAN still seems slow.
The internal disk will be doing read-ahead, so it can respond right away from its cache.
Is your SAN controller doing read-ahead? (It probably should; buy a better one if it does not.)
How many I/Os per second is that SAN appliance rated at? What latency? Any latency will kill the I/O rate. What is behind the controller?
You may find that if you start a few concurrent streams (preferably with an offset, e.g. dd's skip= option, so each starts at a different place), the SAN solution scales with the number of streams, whereas the direct-connect disk is limited to what you see now and is even likely to drop in performance as you increase streams (thrashing).
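A minimal sketch of the two suggestions above. A 64 MB scratch file in /tmp stands in here for the raw device path (substitute your real /dev/rdsk/cxtxdx on the actual system):

```shell
# Create a 64 MB scratch file as a stand-in for the raw device.
dd if=/dev/zero of=/tmp/ddtest.img bs=1024k count=64 2>/dev/null

# 1) Repeat the read test with a realistic 1 MB block size:
time dd if=/tmp/ddtest.img of=/dev/null bs=1024k

# 2) Four concurrent streams, each starting at a different offset
#    (skip= counts bs-sized input blocks, so readers start 16 MB apart):
for i in 0 1 2 3; do
  dd if=/tmp/ddtest.img of=/dev/null bs=1024k skip=$((i * 16)) count=16 2>/dev/null &
done
wait
```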
Hein.
05-04-2004 08:20 AM
Re: problems disk io and testing internal vs san
05-04-2004 08:27 AM
Re: problems disk io and testing internal vs san
This kind of test is very simple and doesn't really represent a real-world workload. That's not to say the test isn't valid or correct, only that I wouldn't put too much faith in a simple single-threaded sequential read test. A better approach would be to use a more advanced testing tool such as iozone, available here:
http://www.iozone.org/
HTH
Duncan
I am an HPE Employee
05-04-2004 08:35 AM
Re: problems disk io and testing internal vs san
That doesn't seem correct to me. What kind of performance are you interested in, IOPS or throughput?
I have used "Postmark" for disk/filesystem benchmarks.
http://www.netapp.com/tech_library/3022.html
05-04-2004 08:43 AM
Re: problems disk io and testing internal vs san
05-04-2004 08:54 AM
Re: problems disk io and testing internal vs san
Stripe size may be an issue: the sysadmin took the defaults, I think, when the VG was created, which is a 1 MB stripe. The RAID level is 10. It was not created as a PVG. We are only using one of the two available FC cards.
05-04-2004 09:07 AM
Re: problems disk io and testing internal vs san
05-05-2004 02:56 AM
Re: problems disk io and testing internal vs san
05-05-2004 03:44 AM
Re: problems disk io and testing internal vs san
The dd you did was over Fibre Channel, I believe. The thing with FC is that it really likes a large block size; there is something like a 100-byte overhead on each fibre "frame".
Secondly, I do not know how the Xiotech works: is the LUN /dev/rdsk/cxtxdx physically a single disk, or is it really a hardware RAID 0 or RAID 1+0 over a number of disks? If so, the hardware stripe width is an issue here, as it is probably greater than your block/frame size. If the stripe size were, say, 64 KB, the dd would read 64 of its 1 KB blocks before moving on to the next physical disk, so you would be stressing a single disk at a time. If there are, say, 4 disks in the RAID 0 stripe (or 8 in a RAID 1+0 stripe) with a 64 KB stripe size, then the best read performance comes from a block size of 4 x 64 KB = 256 KB, so:
dd if=/dev/rdsk/cxtxdx bs=256k count=2000 of=/dev/null
This is a 500 MB read and should "spin up" all the physical disks in the LUN.
The above is not really such a neat idea as a benchmark, because:
o You probably access data through a filesystem and not straight from the disk, so any "tests" should be run on a filesystem, not the raw device.
o Unless your application reads masses of data sequentially, dd is probably not the best benchmark tool. Unfortunately, you are the only one who can judge that! Others above have offered pointers to other benchmark tools.
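A filesystem-level version of the same 256 KB read test, along the lines of the first point above. /tmp stands in here for a mount point on the filesystem under test; on the real system you would create the file on the SAN-backed filesystem instead:

```shell
# Create a 16 MB test file through the filesystem, then read it back.
dd if=/dev/zero of=/tmp/fstest.dat bs=256k count=64 2>/dev/null
time dd if=/tmp/fstest.dat of=/dev/null bs=256k
```

(Beware that a second read may be served from the buffer cache rather than the disk, so use a file larger than memory, or unmount/remount between runs, for honest numbers.)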
Good luck
Tim