Expected speeds for MSA on Linux
11-12-2010 08:25 AM
Hello,

I have an MSA 2000sa G2 with three volumes (RAID 10, six 73 GB SAS drives). I am using the following multipath.conf:
### The defaults section
defaults {
        udev_dir                /dev
        polling_interval        20
        selector                "round-robin 0"
        path_grouping_policy    group_by_prio
        getuid_callout          "/lib/udev/scsi_id -g -u -s /block/%n"
        prio_callout            "/sbin/mpath_prio_alua /dev/%n"
        rr_min_io               120
        path_checker            tur
        rr_weight               uniform
        failback                immediate
        no_path_retry           12
}

### For MSA2xxxsa arrays - the device is MSA2324sa
### (device entries must sit inside a devices section)
devices {
        device {
                vendor                  "HP"
                product                 "MSA2324sa"
                path_grouping_policy    group_by_prio
                getuid_callout          "/lib/udev/scsi_id -g -u -s /block/%n"
                prio_callout            "/sbin/mpath_prio_alua /dev/%n"
                rr_min_io               120
                path_checker            tur
                path_selector           "round-robin 0"
                rr_weight               uniform
                failback                immediate
                hardware_handler        "0"
                no_path_retry           18
        }
}

blacklist {
        wwid 3600508b1001039565952315156500200
        devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
        devnode "^hd[a-z][0-9]*"
        devnode "^cciss!c[0-9]d[0-9]*"
}
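As a quick sanity check (a generic sketch, not part of the original post), you can confirm that multipathd has applied the device entry and that each LUN shows the expected paths and ALUA path groups:

# Show the multipath topology: each MSA LUN should list its paths
# grouped by priority, with the preferred (owning-controller) group active.
multipath -ll

# After editing /etc/multipath.conf on RHEL-style systems, have
# multipathd re-read the configuration.
service multipathd reload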
On the server the volumes are mapped to /dev/mapper/lvtest1 .. 2 .. 3,
so I mount one as /test and run a dd command:

dd if=/dev/zero bs=64k count=2500000 of=/test/dd && sync

The rate is between 200 and 210 MB/s. Is this speed good or bad?

Thank you very much.
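One caveat when reading that number: dd on its own times writes into the Linux page cache, and the trailing sync is not included in the rate dd prints. A minimal sketch (assuming GNU coreutils dd; the count is shortened here only to keep the example small) that forces the data to disk before the rate is reported:

# conv=fdatasync makes dd flush the file to disk before printing the
# rate; oflag=direct bypasses the page cache entirely.
dd if=/dev/zero of=/test/dd bs=64k count=160000 conv=fdatasync
dd if=/dev/zero of=/test/dd bs=64k count=160000 oflag=direct

As a rough yardstick, RAID 10 across six drives is three mirrored pairs striped together, so sequential writes can approach three times a single drive's streaming rate; with 15k SAS drives at roughly 70-100 MB/s each (typical figures for drives of that era, not from the post), 200-210 MB/s is in the plausible range.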