Array Setup and Networking

how to mount new created volume to Linux Server

 
SOLVED
awais290125
New Member

how to mount new created volume to Linux Server

Hi,

We have a nimble storage in our company. They created a volume for a Linux Server. Please suggest how to mount new created volume to Linux Server.

galkun27
Occasional Advisor

Re: how to mount new created volume to Linux Server

Thank you for your question! Are you running a Fibre Channel or iSCSI array? There are best practices for both on InfoSight, but I would be happy to forward the appropriate one to you based on your environment.

Not applicable

Re: how to mount new created volume to Linux Server

It's iSCSI...

ggawrych84
New Member
Solution

Re: how to mount new created volume to Linux Server

One of our very Linux-savvy SEs put together a collection of install guides for various versions of Linux. Below is the content for CentOS / RHEL 6. I used these steps during my last Linux installation and they worked nicely.

1) Configure Ethernet Interfaces to be used for iSCSI data

     Set MTU to 9000 if jumbo frames are desired

          edit /etc/sysconfig/network-scripts/ifcfg-ethX (X is interface number)

          DEVICE=eth1

          BOOTPROTO=static

          BROADCAST=10.10.50.255

          IPADDR=10.10.50.101

          NETMASK=255.255.255.0

          NETWORK=10.10.50.0

          ONBOOT=yes

          MTU=9000

     Restart networking after changes

          #/etc/init.d/network restart

     Run ifconfig and make sure newly configured interfaces are visible and MTU is 9000

          #ifconfig
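     If your install also has the iproute2 tools (most CentOS/RHEL 6 systems do), the same MTU check can be done per interface; eth1 below is just the example interface from above:

          #ip link show eth1   ##look for "mtu 9000" in the output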

2) Tune kernel parameters to resolve IP ARP flux

     edit /etc/sysctl.conf and add:

     #IP ARP flux (make sure to change eth1 and eth2 to the adapters dedicated to iSCSI)

     net.ipv4.conf.eth1.arp_ignore = 1

     net.ipv4.conf.eth2.arp_ignore = 1

     net.ipv4.conf.eth1.arp_announce = 2

     net.ipv4.conf.eth2.arp_announce = 2

     net.ipv4.conf.eth1.rp_filter=0

     net.ipv4.conf.eth2.rp_filter=0

     Reload kernel param file

          #sysctl -p

     Test Jumbo and IP ARP (8972 data bytes + 20-byte IP header + 8-byte ICMP header = 9000-byte MTU)

          #ping -s 8972 -M do -I eth1 {iSCSI discovery IP}

          #ping -s 8972 -M do -I eth2 {iSCSI discovery IP}

3) Install Required Software Packages

     sg3_utils - this package contains utilities that send SCSI commands to devices

     device-mapper-multipath - provides I/O fail-over and load-balancing within Linux for block devices

     iscsi-initiator-utils - iSCSI daemon and utility programs

          #yum install sg3_utils device-mapper-multipath iscsi-initiator-utils

     Set the iscsi and multipathd services to start at boot

          #chkconfig --level 345 iscsi on

          #chkconfig --level 345 multipathd on

4) iSCSI timeouts

     edit /etc/iscsi/iscsid.conf

     node.session.timeo.replacement_timeout = 10

     node.conn[0].timeo.noop_out_timeout = 10
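     One note from my own habit rather than the original guide: iscsid only picks up this file when sessions are created, so if you edit it after sessions already exist, restart the daemon and re-login those sessions so the new timeouts apply:

          #/etc/init.d/iscsid restart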

5) Create iSCSI ifaces

     #iscsiadm -m iface -I iSCSI1 --op=new

     #iscsiadm -m iface -I iSCSI2 --op=new

     #iscsiadm -m iface -I iSCSI1 --op=update -n iface.net_ifacename -v eth1

     #iscsiadm -m iface -I iSCSI2 --op=update -n iface.net_ifacename -v eth2
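     To double-check the iface bindings before discovery, you can list what iscsiadm has recorded (output format varies slightly by iscsi-initiator-utils version):

     #iscsiadm -m iface

     #iscsiadm -m iface -I iSCSI1   ##shows the full record, including iface.net_ifacename = eth1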

6) MPIO config

It is important to blacklist any disk that you do not intend to multipath (e.g. the host's internal hard disk).

To determine the currently connected disks, run

     #fdisk -l

Edit /etc/multipath.conf (if the multipath.conf file is not located in /etc, copy it from

     /usr/share/doc/device-mapper-multipath-0.X.X/multipath.conf to /etc)

          Insert:

               defaults {

                    user_friendly_names yes

                }

                blacklist {

                devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"

                devnode "^hd[a-z]"

                devnode "sda$"

                 }

                 devices {

                             device {

                                        vendor "Nimble"

                                        product "Server"

                                        path_selector "round-robin 0"

                                        features "1 queue_if_no_path"

                                        path_grouping_policy group_by_serial

                                        path_checker tur

                                        rr_min_io_rq 1

                                        failback immediate

                                        rr_weight priorities

                                        no_path_retry 20

                              }

               }

     Restart multipathd after making changes to multipath.conf

          #/etc/init.d/multipathd restart

7) Collect the host's initiator name

     #cat /etc/iscsi/initiatorname.iscsi

Use the initiator name from the previous step to create an iSCSI initiator group on the Nimble array
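For reference, the file holds a single InitiatorName= line. The IQN below is only a made-up example; use whatever your host actually reports:

          InitiatorName=iqn.1994-05.com.redhat:a1b2c3d4e5f6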

8) Discover iSCSI targets

     #iscsiadm -m discovery -t st -p {iSCSI discovery IP}

     This should discover all paths to the target volumes

     When the iSCSI host connection method is set to manual, expect # of host-side NICs x # of array-side NICs paths per volume (e.g. 2 host NICs x 2 array data NICs = 4 paths)

     When the iSCSI host connection method is set to auto, expect # of host-side NICs paths per volume

9) Login to iSCSI targets

     #iscsiadm -m node -T {insert target iqn from previous step} --login
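     Two standard iscsiadm options that the guide doesn't spell out: leaving out -T logs you into every discovered target at once, and setting node.startup to automatic brings the sessions back on reboot (which the _netdev fstab entries below rely on):

     #iscsiadm -m node --login   ##login to all discovered targets

     #iscsiadm -m node --op update -n node.startup -v automatic   ##re-login automatically at boot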

10) Run #multipath -ll

     mpath0 (210d50ed0e73844c96c9ce900c8609e4a)

     [features="1 queue_if_no_path"][hwhandler="0"]

     \_ round-robin 0 [prio=3][active]

     \_ 5:0:0:0 sdc 8:32 [active][ready]

     \_ 4:0:0:0 sdb 8:16 [active][ready]

     \_ 6:0:0:0 sdd 8:48 [active][ready]

     This should show your device/paths as active and ready

     Take note of the mpathX ID. This will be used to format/mount your multipath disk

Steps 11-15 cover global buffer settings and can be applied later if needed for performance tuning

16) Create filesystem and mount disk (ext4, non-LVM) (change mpathX to the proper multipath disk ID)

          #mkfs.ext4 /dev/mapper/mpathX -b 4096 ##Nimble volume with 4k block

          #mkfs.ext4 /dev/mapper/mpathX -b 4096 -E stride=2,stripe-width=2 ##Nimble volume with an 8k block (stride=2 so that 2 x 4k filesystem blocks line up with the 8k volume block)

     Create mount point

          #mkdir /volumeName

     Mount new volume

          #mount /dev/mapper/mpathX /volumeName

     Execute "df -h" to display the newly mounted volume and usable space

     Add new volume to /etc/fstab file so it is mounted on reboot

          edit /etc/fstab

               /dev/mapper/mpath0 /volumeName ext4 _netdev,noatime,nodiratime,barrier=0 0 0
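     To confirm the fstab entry is sane before a reboot, you can unmount and let mount re-read it from fstab (skip this if the volume is already in use):

          #umount /volumeName

          #mount -a

          #df -h /volumeName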

17) Create filesystem and mount disk (ext4 with LVM) (change mpathX to proper multipath disk ID)

     Create your physical volumes. Nimble recommends 1 volume for every 2 CPU cores allocated to your host.

     After the targets have been discovered and logged in, initialize those volumes for use with LVM (the logical volume manager).

          #pvcreate /dev/mapper/mpathb /dev/mapper/mpathc ...

     Create the volume group.

          #vgcreate vg01 /dev/mapper/mpathb /dev/mapper/mpathc ...

     Create the logical volume (default extent size is 4MB).

          #lvcreate -l <number of extents> -i 8 -I 4096 -n vol1 vg01

     Create the filesystem on vol1

          #mkfs.ext4 /dev/vg01/vol1 -b 4096 -E stride=2,stripe-width=16

     To mount at boot edit the /etc/fstab and add the following

          /dev/vg01/vol1 /mountpoint ext4 _netdev,noatime,nodiratime,barrier=0 0 0
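The LVM variant stops at the fstab line, so here is the matching mount step for completeness; /mountpoint is just a placeholder, same as in the fstab entry above:

          #mkdir /mountpoint

          #mount /dev/vg01/vol1 /mountpoint

          #df -h /mountpoint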

awais290125
New Member

Re: how to mount new created volume to Linux Server

Thanks for your time - it helped me to mount the volumes...thanks

ggawrych84
New Member

Re: how to mount new created volume to Linux Server

Glad to help - all credit goes to Matt Campbell and those extremely helpful guides.