MSA500 G2 and multipath
01-03-2007 04:08 AM
I have an MSA500 G2 with a 4-port SCSI I/O module and two ProLiant DL385 servers, each with a Smart Array 642 controller.
I am trying to use md multipath in order to install Serviceguard, but I have a problem when I try to mount the device /dev/md0.
If I use the physical device /dev/cciss/c1d1p1 or /dev/cciss/c2d1p1 directly, it works fine and I can write my data.
When I try to mount /dev/md0, I get a lot of errors in /var/log/messages about the I/O paths, and the mount command freezes.
I use this command to create the md multipath device: mdadm -C /dev/md0 --level=multipath --raid-disks=2 /dev/cciss/c1d1p1 /dev/cciss/c2d1p1. Afterwards I create the filesystem (ReiserFS or ext3) without problems, but the mount fails.
Can anyone help me?
Regards, Cristian Mazza
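For anyone hitting the same symptoms, a few diagnostic commands can show what md thinks of the two paths before mount is attempted. This is only a sketch, reusing the device names from this thread; these commands need root and the actual hardware:

```shell
# Show the multipath array state: both paths should be listed as active, e.g. [UU]
cat /proc/mdstat
mdadm --detail /dev/md0

# Read through each physical path directly to confirm both controllers respond
dd if=/dev/cciss/c1d1p1 of=/dev/null bs=1M count=10
dd if=/dev/cciss/c2d1p1 of=/dev/null bs=1M count=10
```

If one of the dd reads stalls or errors, the problem is below md, in the path itself rather than in the multipath setup.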
01-03-2007 05:02 AM
Re: MSA500 G2 and multipath
01-03-2007 02:24 PM
Re: MSA500 G2 and multipath
01-03-2007 06:51 PM
Re: MSA500 G2 and multipath
Dec 29 15:39:55 cssrvfe01 kernel: Operation continuing on 1 IO paths.
Dec 29 15:39:55 cssrvfe01 kernel: multipath: cciss/c1d2p1: rescheduling sector 66104
Dec 29 15:39:55 cssrvfe01 kernel: MULTIPATH conf printout:
Dec 29 15:39:55 cssrvfe01 kernel: --- wd:1 rd:2
Dec 29 15:39:55 cssrvfe01 kernel: disk0, o:0, dev:cciss/c1d2p1
Dec 29 15:39:55 cssrvfe01 kernel: disk1, o:1, dev:cciss/c2d2p1
Dec 29 15:39:55 cssrvfe01 kernel: MULTIPATH conf printout:
Dec 29 15:39:55 cssrvfe01 kernel: --- wd:1 rd:2
Dec 29 15:39:55 cssrvfe01 kernel: disk1, o:1, dev:cciss/c2d2p1
Dec 29 15:39:55 cssrvfe01 kernel: multipath: cciss/c1d2: redirecting sector 66064 to another IO path
Dec 29 15:39:55 cssrvfe01 kernel: multipath: only one IO path left and IO error.
Dec 29 15:39:55 cssrvfe01 kernel: multipath: cciss/c2d2p1: rescheduling sector 66104
Dec 29 15:39:55 cssrvfe01 kernel: multipath: cciss/c2d2: redirecting sector 66064 to another IO path
Dec 29 15:39:55 cssrvfe01 kernel: multipath: only one IO path left and IO error.
Dec 29 15:39:55 cssrvfe01 kernel: multipath: cciss/c2d2p1: rescheduling sector 66104
Dec 29 15:39:55 cssrvfe01 kernel: multipath: cciss/c2d2: redirecting sector 66064 to another IO path
Dec 29 15:39:55 cssrvfe01 kernel: multipath: only one IO path left and IO error.
Dec 29 15:39:55 cssrvfe01 kernel: multipath: cciss/c2d2p1: rescheduling sector 66104
Dec 29 15:39:55 cssrvfe01 kernel: multipath: cciss/c2d2: redirecting sector 66064 to another IO path
Dec 29 15:39:55 cssrvfe01 kernel: multipath: only one IO path left and IO error.
Dec 29 15:39:55 cssrvfe01 kernel: multipath: cciss/c2d2p1: rescheduling sector 66104
Dec 29 15:39:55 cssrvfe01 kernel: multipath: cciss/c2d2: redirecting sector 66064 to another IO path
Dec 29 15:39:55 cssrvfe01 kernel: multipath: only one IO path left and IO error.
Dec 29 15:39:55 cssrvfe01 kernel: multipath: cciss/c2d2p1: rescheduling sector 66104
Dec 29 15:39:55 cssrvfe01 kernel: multipath: cciss/c2d2: redirecting sector 66064 to another IO path
Dec 29 15:39:55 cssrvfe01 kernel: multipath: only one IO path left and IO error.
Dec 29 15:39:55 cssrvfe01 kernel: multipath: cciss/c2d2p1: rescheduling sector 66104
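The repeated lines boil down to md bouncing the same sector between paths until none is left. A quick way to summarize such a log per device (a sketch; the log file path is the usual syslog location and can be overridden via LOGFILE):

```shell
# Count multipath kernel messages per device/message type, most frequent first
LOGFILE="${LOGFILE:-/var/log/messages}"
grep 'multipath:' "$LOGFILE" \
  | sed 's/.*multipath: //' \
  | awk -F: '{print $1}' \
  | sort | uniq -c | sort -rn
```

A device that dominates this count is the path md keeps failing over from.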
01-03-2007 06:54 PM
Re: MSA500 G2 and multipath
It is just a log.
Regards, Cristian
01-03-2007 07:00 PM
Re: MSA500 G2 and multipath
mdadm -C /dev/md0 --level=multipath --raid-disks=2 /dev/cciss/c1d1 /dev/cciss/c2d1
Then run fdisk to partition /dev/md0.
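Put together, the suggested sequence looks like the following. This is only a sketch: whether the kernel exposes partitions of an md device (e.g. as /dev/md0p1) depends on the kernel and mdadm versions in use, and the mount point is a placeholder:

```shell
# Build the multipath device over the whole LUNs, not over partitions
mdadm -C /dev/md0 --level=multipath --raid-disks=2 /dev/cciss/c1d1 /dev/cciss/c2d1

# Partition the md device itself, then create the filesystem on the partition
fdisk /dev/md0
mkfs.ext3 /dev/md0p1        # partition name is an assumption; check with ls /dev/md0*
mount /dev/md0p1 /mnt/data  # /mnt/data is a placeholder mount point
```

The point of this ordering is that md sees the same whole device on both paths, rather than two independently partitioned views of it.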
01-03-2007 09:14 PM
Re: MSA500 G2 and multipath
I tried with /dev/cciss/c1d3 and /dev/cciss/c2d1 to create /dev/md3,
and it still fails when I try to mount /dev/md3.
However, if I use the dd command to write to /dev/md3, it works without errors.
Regards, Cristian
01-03-2007 09:50 PM
Re: MSA500 G2 and multipath
01-04-2007 02:33 AM
Re: MSA500 G2 and multipath
The last remaining problem is with the lock LUN on the md device.
Any ideas?
01-04-2007 08:27 AM
Solution
This does not significantly affect overall availability. Assume you have a cluster using md multipath for other devices and a single path for the LockLUN. If the path to the LockLUN fails from one of the servers (call it ServerA), you will get messages in the system log, but there is no other effect on the cluster. Only if the other server (ServerB) fails before the repair is made to ServerA will you have a problem. There is no guarantee of surviving dual failures, although we try; this is a dual-failure case.
More explanation may be available in this white paper: http://docs.hp.com/en/B3936-90078/B3936-90078.pdf
01-04-2007 09:30 AM
Re: MSA500 G2 and multipath
Do you think this may be a good solution, or not?
Do you think a quorum server may be the preferred solution?
Regards, Cristian
01-04-2007 10:41 AM
Re: MSA500 G2 and multipath
On Quorum Service vs. LockLUN: the Quorum Service is easier to manage in some ways, but it must reside on a computer outside of the cluster, which some customers do not like. In some cases, failover can be a little faster with the Quorum Service. If the server running the Quorum Service fails, the cluster keeps running (as long as there are no other failures within the cluster), and that computer can be repaired without any impact at all to the cluster.
Since the LockLUN is entirely within the cluster, there is no extra hardware. If there is a failure on the single path to the LockLUN because of an HBA failure, the repair should be scheduled. In order to do that repair, the node needs to come down, so there is some impact, since the packages need to be moved over to the other node.
With that information, you should be able to make a decision appropriate for your environment.
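For reference, the two quorum options map to different parameters in the Serviceguard cluster configuration file. The parameter names below are from memory and should be verified against the output of cmquerycl for your Serviceguard version; the device path and hostname are placeholders:

```
# Option 1: cluster lock LUN (per node, inside the cluster)
CLUSTER_LOCK_LUN        /dev/cciss/c1d0p1

# Option 2: quorum server (runs on a host outside the cluster)
QS_HOST                 qshost.example.com
QS_POLLING_INTERVAL     300000000
QS_TIMEOUT_EXTENSION    2000000
```

Only one of the two tie-breaker mechanisms is configured for a given cluster.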
Regards