MSA Storage
01-05-2021 05:58 AM
MSA2050 MPIO Configuration
Hi,
I was just looking for some advice. We currently have an MSA2050 (all SSD) connected over 10Gb Ethernet through the FlexConnect switches. Connected to it is a VM host server (Server 2019), with one dual-port 10Gb Ethernet adapter dedicated to the storage and another dual-port 10Gb Ethernet adapter dedicated to the VMs.
I have the LUNs presented as iSCSI to the host over the storage 10Gb connections (teamed), but I was wondering whether it would be better to split the team and move these to MPIO, and whether that would give me better throughput. At the moment I present a few LUNs to the host server and then pass them through to my file server VM as additional hard drives (I have forgotten the terminology); these are basically the shared drives for the company, but I experience slower speeds when moving data and accessing the drives.
When backing up from the storage itself (Veeam), the process can give me speeds of over 2GB/s, so the storage does seem fast there...
I have also been looking at MPIO and am not sure how to configure it on Server 2019. Do I need a DSM file? When I launch MPIO I can see "MSFT2005iSCSIBusType_0x9" under device hardware IDs, but I'm not sure whether I'm meant to claim the device as some guides have suggested, e.g. "mpclaim -n -i -d "HPE MSA 2050 SAN"".
Any help would be appreciated.
paddi
01-05-2021 07:26 AM
Re: MSA2050 MPIO Configuration
As per my understanding, multipathing primarily gives you redundant paths: if one path fails, another path keeps the storage available. With the Round Robin multipath policy, I/O is distributed across the paths, and because all paths are active and can handle I/O in parallel, overall throughput also increases. There is more to it than that, but it is difficult to cover fully here.
For the MPIO configuration, a separate DSM file is no longer required; it is now handled by the operating system's native multipathing only. You can check the guide below and it will help you:
https://support.hpe.com/hpesc/public/docDisplay?docLocale=en_US&docId=emr_na-a00038738en_us
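For reference, a minimal sketch of the native MPIO setup in PowerShell on Server 2019 could look like the following (please verify the exact steps against the guide above; a reboot is typically required after installing the feature, and the Round Robin policy shown is just an example):
Install-WindowsFeature -Name Multipath-IO              # install the MPIO feature, then reboot
Enable-MSDSMAutomaticClaim -BusType iSCSI              # let the Microsoft DSM claim iSCSI-attached devices
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR     # default newly claimed devices to Round Robin
Get-MSDSMSupportedHW                                   # confirm MSFT2005iSCSIBusType_0x9 is listed
mpclaim -s -d                                          # show claimed MPIO disks and their paths
With automatic claiming enabled you should not need the manual mpclaim -n -i -d command from your original post.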
Hope this helps!
Regards
Subhajit
I am an HPE employee
If you feel this was helpful please click the KUDOS! thumb below!
*************************************************************************
I work for HPE
01-06-2021 01:23 AM
Re: MSA2050 MPIO Configuration
Hi,
Thank you for the reply, I will look into this. I think my issue may actually be with Hyper-V and may be a limitation within the hypervisor itself... but I will look into the MPIO setup and get it configured.
Thanks
paddi