New mount point

 
Occasional Contributor

New mount point

Hi all,

I am using HP DL580 (x2) + EMC CX3-40 + Red Hat Enterprise Linux 4 + Veritas Cluster Server 5 +
VxVM + Navisphere 6.
A RAID group with a LUN was configured by an engineer earlier, and there are still
free SAN disks in the EMC array.
How can I use these free SAN disks to create a new mount point in VCS?
E.g. do I need to create another RAID group (how?) and a new LUN (how?)
How does Linux recognize the new RAID group/LUNs?
How do I add them to the cluster with VxVM and VCS?

Thanks for your kind help!

5 REPLIES
Honored Contributor

Re: New mount point

Hi,

Whilst someone here might be able to help, you'll probably be better off asking that question in an EMC forum, rather than HP.

Cheers,

Rob
Honored Contributor

Re: New mount point

Which EMC array are you using here? Which software are you using to manage the LUNs?

The free SAN disks you see at the storage level can be assigned to the required host. Then you can manage that disk in VCS/Linux just as you would a normal internal disk.

How you create a RAID group or a new LUN depends on the array.

How Linux recognizes the new LUNs depends on which software and which FC card you use.
Linux only has access at the LUN level; the RAID group (storage level) is not visible to the host.
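
For example, once the host sees the new LUN, a rough sketch of bringing it under VxVM and VCS could look like the sequence below. The device name sdc, disk group newdg, volume newvol, its size, service group appsg and mount point /newmount are only placeholders for your own names (with PowerPath installed the device may be an emcpower pseudo device instead):

# vxdctl enable
(make VxVM rescan and pick up the new device)
# vxdisksetup -i sdc
# vxdg init newdg newdg01=sdc
# vxassist -g newdg make newvol 50g
# mkfs -t vxfs /dev/vx/rdsk/newdg/newvol
(initialize the disk, create a disk group and volume, and put VxFS on it)
# haconf -makerw
# hares -add newdg_res DiskGroup appsg
# hares -modify newdg_res DiskGroup newdg
# hares -modify newdg_res Enabled 1
# hares -add newmnt_res Mount appsg
# hares -modify newmnt_res MountPoint /newmount
# hares -modify newmnt_res BlockDevice /dev/vx/dsk/newdg/newvol
# hares -modify newmnt_res FSType vxfs
# hares -modify newmnt_res FsckOpt %-y
# hares -modify newmnt_res Enabled 1
# hares -link newmnt_res newdg_res
# haconf -dump -makero
(add DiskGroup and Mount resources to the service group, with the Mount depending on the DiskGroup)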
Occasional Contributor

Re: New mount point

Hi all,

Sorry, I could not find an official EMC forum and got no response on the unofficial EMC forums, so I am hoping an expert here can help, as it is urgent. Sorry for any inconvenience.

- Using an EMC CX3-40
- Using EMC Navisphere 6 to create the RAID group and LUN
- After the LUN is created, is any configuration/setup needed for Linux to recognize the new LUN?
- Can one RAID group have more than one disk group?

Thanks again.

Exalted Contributor

Re: New mount point

Shalom,

Since HP does not sell EMC, there is no EMC forum here. There is, however, a disk and storage category that might be more helpful.

First make sure that, on the EMC side, the LUN is presented to the World Wide Name (WWN) of the fibre card in the host.

Usually storage recognition occurs after a system reboot.
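
To double-check the masking, it can help to compare the host's HBA port WWNs with what Navisphere shows in the storage group. A rough example, assuming the fc_host sysfs class is available (the exact paths vary with the driver and kernel version):

# cat /sys/class/fc_host/host*/port_name
(port WWNs of the FC HBAs as seen by Linux)
# cat /proc/scsi/scsi
(the SCSI devices the OS currently knows about)

With the rescan procedures for QLogic and Emulex HBAs shown in the next reply, a reboot can often be avoided.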

Regards,

Shmuel
Steven E Protter
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
Honored Contributor

Re: New mount point

After the LUN is created, is any configuration/setup needed for Linux to recognize the new LUN?

There are different procedures depending on the type of FC HBA you use for the interconnect. For example, QLogic and Emulex HBAs have different procedures; see the two appendices below.
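
If you are not sure which HBA is in the server, one quick way to check which procedure applies is:

# lspci | grep -i fibre
(shows the Fibre Channel HBA hardware)
# lsmod | egrep 'qla|lpfc'
(shows whether the QLogic qla2xxx or Emulex lpfc driver is loaded)
# ls /proc/scsi/
(the driver directory present here tells you which appendix to follow)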

Emulex

APPENDIX B - How to add SAN disk(s) manually to a Linux server having Emulex HBA(s)

1. Assign LUN(s) to the server (Storage Team)

2. (Re)Scan SCSI devices
# /usr/sbin/lpfc/lun_scan all

3. Create PowerPath pseudo device
# powermt config

4. Create partitions / filesystems on the device
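
As a rough check afterwards (device names will differ on your system), the new LUN should show up as a PowerPath pseudo device:

# powermt display dev=all
(lists all PowerPath pseudo devices and their native paths)
# grep emcpower /proc/partitions
(the new emcpower device should appear here once the scan and powermt config are done)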

QLogic

APPENDIX A - How to add SAN disk(s) manually to a Linux server having QLogic HBA(s)
1. Assign LUN(s) to the server (Storage Team)

2. (Re)Scan SCSI devices
# echo "scsi-qlascan" > /proc/scsi/<driver>/<adapter>
(the qlogic driver will re-scan)
• <driver> can be one of: qla2100 / qla2200 / qla2300
• <adapter> is the instance number of the HBA.
e.g.:
# ll /proc/scsi/qla2300/

total 0

-rw-r--r-- 1 root root 0 Jul 13 08:43 1
-rw-r--r-- 1 root root 0 Jul 13 08:43 2
crwxrwxrwx 1 root root 253, 0 Jul 13 08:43 HbaApiNode

# echo "scsi-qlascan" > /proc/scsi/qla2300/1
# echo "scsi-qlascan" > /proc/scsi/qla2300/2
3. Build the device table entry for the new device
# echo "scsi add-single-device 0 1 2 3" >/proc/scsi/scsi
(scsi mid layer will re-scan)
Where "0 1 2 3" is replaced by your "Host Channel Id Lun".
e.g.:
In a CLARiiON configuration there are 4 paths to a single LUN (with 2 cards):
• Hosts: 1 and 2 (with a 2-card configuration)
• Channel: 0
• Id/Lun: you can check these in /proc/scsi/qla2300/1 and 2
# cat /proc/scsi/qla2300/1 (and check 2 too)
...
SCSI LUN Information:
(Id:Lun) * - indicates lun is not registered with the OS.
( 0: 0): Total reqs 5908651, Pending reqs 0, flags 0x0, 0:0:81,
( 0: 1): Total reqs 10635, Pending reqs 0, flags 0x0, 0:0:81,
( 0: 2): Total reqs 1788498, Pending reqs 0, flags 0x0, 0:0:81,
( 0: 3): Total reqs 10638, Pending reqs 0, flags 0x0, 0:0:81,
( 0: 4): Total reqs 525958, Pending reqs 0, flags 0x0, 0:0:81,
( 0: 5): Total reqs 10640, Pending reqs 0, flags 0x0, 0:0:81,
( 0: 6): Total reqs 0, Pending reqs 0, flags 0x0*, 0:0:81,
( 1: 0): Total reqs 9367, Pending reqs 0, flags 0x0, 0:0:82,
( 1: 1): Total reqs 7322310, Pending reqs 0, flags 0x0, 0:0:82,
( 1: 2): Total reqs 9351, Pending reqs 0, flags 0x0, 0:0:82,
( 1: 3): Total reqs 698219, Pending reqs 0, flags 0x0, 0:0:82,
( 1: 4): Total reqs 9356, Pending reqs 0, flags 0x0, 0:0:82,
( 1: 5): Total reqs 15001, Pending reqs 0, flags 0x0, 0:0:82,
( 1: 6): Total reqs 0, Pending reqs 0, flags 0x0*, 0:0:82,
Disks marked by * are new to the system.
# echo "scsi add-single-device 1 0 0 6" >/proc/scsi/scsi
# echo "scsi add-single-device 1 0 1 6" >/proc/scsi/scsi
# echo "scsi add-single-device 2 0 0 6" >/proc/scsi/scsi
# echo "scsi add-single-device 2 0 1 6" >/proc/scsi/scsi
4. Create PowerPath pseudo device
# powermt config
5. Create partitions / filesystems on the device
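
As a rough illustration of this last step (emcpowera is only a placeholder; check powermt display dev=all for the real pseudo device name), you could either put a filesystem straight on the device or hand it to VxVM as sketched earlier in the thread:

# fdisk /dev/emcpowera
(create a partition, e.g. emcpowera1)
# mkfs -t ext3 /dev/emcpowera1
# mkdir /newmount
# mount /dev/emcpowera1 /newmount
(plain ext3 example; for the VCS setup, initialize the device for VxVM instead and create the DiskGroup/Mount resources as shown above)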