System Administration
How to increase the filesystem which is under redhat cluster directly?

 
SOLVED
Super Advisor

How to increase the filesystem which is under redhat cluster directly?

Hello Experts,

I have been given the task of increasing the size of a file system on RHEL 5.2:

/dev/mapper/xyz1 179G 59G 111G 35% /xyz

1) As far as I can see, the server is configured with Red Hat Cluster; if this mount point goes down, it will fail over to the other node.
2) /xyz is not in fstab; I understand that this is controlled by the cluster.
3) The storage is from the centralized HP storage array; I can recognize these LUNs via the command multipath -l (on both nodes).


• If I want to increase the file system size after growing the LUN at the storage end, is it possible to do it online?
• Since it's a cluster with another node, what changes need to be made on the other server?
• Do you have any other method to carry out this activity? Please throw some light on this from your experience.
Honored Contributor

Re: How to increase the filesystem which is under redhat cluster directly?

Your anonymized device name /dev/mapper/xyz1 suggests this might not be an LVM PV, but a traditional PC-style partition. If that's true, you can extend the partition /dev/mapper/xyz1 online only if no other partitions exist after it on the LUN.

If you don't have LVM, you will have to edit the LUN's partition table while the LUN is mounted, which is always a little bit scary to me.

With LVM, it would be possible to add a completely new LUN to the clustered volume group, which would be safer than extending the existing LUN.

Anyway, the procedure would be something like this:

1.) Extend the LUN in the storage system.

2.) Run "partprobe" on both nodes and see if the nodes can detect the new LUN size. If the new size cannot be detected, you may have to reboot the cluster.

3.) Go to the node that has the filesystem mounted, and extend the partition with parted/cfdisk/fdisk/your favorite partitioning tool.
Be *extremely* careful with this step!

4.) Run "partprobe" on both nodes and verify the nodes can see the new partition size (using "cat /proc/partitions" or whatever)

5.) To make the filesystem use the new space, run the filesystem resizer tool appropriate for your filesystem type.

For example, if it's an ext3 filesystem, run:

resize2fs /dev/mapper/xyz1

6.) Done!
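Condensed into commands, the steps above might look like the sketch below. It assumes an ext3 filesystem on /dev/mapper/xyz1 (the anonymized names from this thread: "xyz" for the whole-LUN device, "xyz1" for its first partition), and it only *prints* each command so the sequence can be reviewed before anything touches the disks:

```shell
# Print-only sketch of steps 1-5; device names are the thread's
# anonymized examples. Replace 'echo "$*"' with '"$@"' to actually
# execute each command (and be very careful with the partition edit).
plan() { echo "$*"; }

plan partprobe                        # step 2: on BOTH nodes, after the LUN grows
plan parted /dev/mapper/xyz print     # step 3: inspect before editing the partition
plan partprobe                        # step 4: on BOTH nodes, after the partition grows
plan cat /proc/partitions             # step 4: verify the new partition size
plan resize2fs /dev/mapper/xyz1       # step 5: grow ext3 into the new space, online
```

The print-only wrapper is deliberate: on clustered storage you want to review (and ideally peer-review) the exact command sequence before running it on the active node.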

MK
Exalted Contributor

Re: How to increase the filesystem which is under redhat cluster directly?

Shalom,

I recommend adding a new LUN to expand file systems. It avoids reboots, as Linux sometimes has trouble recognizing a size change on an existing LUN.

SEP
Steven E Protter
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
Super Advisor

Re: How to increase the filesystem which is under redhat cluster directly?

But how can that help in this case?

The existing mount point is not under LVM :(

Experts,

I have one strong doubt:

multipath -l shows me the WWN number, but was it converted to /dev/mapper/xyz1 by LVM?

Could it be from CLVM (cluster LVM)?
I can see that a clvm RPM is installed, but I am not aware of its configuration and commands.

* It's a Red Hat cluster on both nodes.
Honored Contributor
Solution

Re: How to increase the filesystem which is under redhat cluster directly?

> multipath -l shows me the WWN number, but was it converted to /dev/mapper/xyz1 by LVM?

/dev/mapper/xyz1 does not look like an LVM device name. An LVM device name usually has two parts: the first is the Volume Group name, the second is the Logical Volume name. They combine like this:

/dev/mapper/<vgname>-<lvname>

A traditional PC-style partition on a multipathed device is normally named like this:

/dev/mapper/<multipath device name>p<partition number>

Is "xyz1" the real name of the device, or have you censored some information? I understand the requirement to keep the actual filesystem names secret, but the name format includes some important details about how the filesystem is set up.

It might be that the system has been configured to add a custom name "xyz1" for the multipath device - if this is the case, the association between the WWID and the name "xyz1" is configured using either /etc/multipath.conf or /var/lib/multipath/bindings.

If that's true, there might be neither LVM nor a partition table on the LUN, just a filesystem on the whole-LUN device. This is not a very good practice, as it makes storage migration rather more difficult, but the filesystem expansion should be relatively simple:

1.) extend the LUN from the storage end
2.) run partprobe on all nodes, verify that they all see the new size of the LUN
3.) run the filesystem extension tool (resize2fs or whatever is appropriate for the filesystem type) on the node that currently has the filesystem active.
4.) complete!
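As a sketch (again assuming ext3 directly on the whole-LUN device /dev/mapper/xyz1 from this thread, and printing the commands rather than running them):

```shell
# Print-only sketch of the whole-LUN case: with no partition table
# there is nothing to edit on the disk, so only two host-side steps
# remain after the storage-side extension. Replace 'echo "$*"' with
# '"$@"' to execute for real.
plan() { echo "$*"; }

plan partprobe                     # all nodes: detect the new LUN size
plan cat /proc/partitions          # confirm every node sees the change
plan resize2fs /dev/mapper/xyz1    # active node only: grow ext3 online
```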

If you're still uncertain, run "pvs": if it does not list /dev/mapper/xyz1, then xyz1 is not a Linux LVM physical volume (neither plain LVM nor CLVM).

Then run "dmsetup ls --tree".
If /dev/mapper/xyz1 is just a whole-LUN multipath device, it should display as a two-level hierarchy like this:

xyz1 (253:<minor>)
|- (<major>:<minor>)
\- (<major>:<minor>)

The numbers within the parentheses are major:minor device number pairs. If the major number is 8, they are /dev/sd* devices.

If there is a partition table or LVM is used on the LUN, there will be a three-level hierarchy:

xyz1p1 (253:<minor>)
\- xyz1 (253:<minor>)
   |- (<major>:<minor>)
   \- (<major>:<minor>)

i.e. the top level is the partition, the second level is the whole-LUN multipath device, and the third level includes all the individual /dev/sd* paths to the LUN.

MK
Super Advisor

Re: How to increase the filesystem which is under redhat cluster directly?

Hey MK,

Amazing. I wish I could give you more points; unfortunately, there is no option for it. I really admire your expertise.

BTW, let me come to the point.

Storage: HP MSA 2000
OS: RHEL 4.2

I get the output as follows.

xyzp1 (253:20)
\- FERSIT (253:18)
   |- (8:80)
   \- (8:32)
abcp1 (253:19)
\- FERDBS (253:17)
   |- (8:64)
   \- (8:16)

So I understand that the LUNs are under some device-mapper control, but I still cannot figure out how they got converted from LUNs to /dev/mapper/***.

I verified that it's not from local LVM (as far as I can see in the lvs/vgs command output).


Option 1)
Thank you so much for your opinion on expanding the volume at the storage and OS levels. But users feel that the extension process on the MSA is very slow, and there is risk involved.

Option 2)
Present an additional LUN and move some data to the new volume.

In that case, can you explain how we can bring the new disk under cluster control? The only file I can find related to the cluster setup is /etc/cluster/cluster.conf.
Honored Contributor

Re: How to increase the filesystem which is under redhat cluster directly?

xyzp1 (253:20)
\- FERSIT (253:18)
   |- (8:80)
   \- (8:32)

The top level looks like a partition (...p1), then there is a whole-LUN multipath device FERSIT, and finally two /dev/sd* paths to the LUN.

(8:80) is /dev/sdf and (8:32) is /dev/sdc.
Run "ls -l /dev/sd*" to see the major and minor numbers of disk devices. These numbers are systematic and will use the same numbering sequence in most Linux systems.
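To illustrate that systematic numbering: sd minor numbers advance in steps of 16 (each disk reserves slots for 15 partitions), so for the first 26 disks the letter can be computed from minor/16. A small sketch, using the two device numbers quoted above:

```shell
# Map the minor number of a (8:<minor>) pair to its /dev/sd* name.
# Valid for the first 26 disks only (sda..sdz); beyond that the
# kernel switches to two-letter names (sdaa, sdab, ...).
sd_name() {
    idx=$(( $1 / 16 + 1 ))      # minors 0-15 -> sda, 16-31 -> sdb, ...
    echo "sd$(echo abcdefghijklmnopqrstuvwxyz | cut -c $idx)"
}

sd_name 80    # prints: sdf
sd_name 32    # prints: sdc
```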

> but still I am unable to dig it, how it got converted from LUN to /dev/mapper/***

Look into /etc/multipath.conf. Is there something like this?
-----
multipaths {
    multipath {
        wwid <WWID of the LUN>
        alias FERSIT
    }
}
-----
That would assign the custom name /dev/mapper/FERSIT to a multipath device with a given WWID.

The same effect can be achieved by editing the /var/lib/multipath/bindings file, but /etc/multipath.conf is the recommended way to do it.

In /var/lib/multipath/bindings, the name would be specified simply like:

<WWID of the LUN> FERSIT

Now we know that the name of the whole-LUN device is /dev/mapper/FERSIT. The next step is to verify its partition table. Please run this command:

fdisk -l /dev/mapper/FERSIT

I still don't know why the partition-level name in the dmsetup listing was "xyzp1": the default name would be FERSITp1. (Or if you were trying to censor something, I think I just managed to reconstruct the censored information :-)

In general, the system builds up a device like this in the following sequence:

1.) The SAN HBA driver detects the LUNs visible to each HBA and presents them as /dev/sd* devices. Each path to the LUN gets its own device node: this layer does not care about multipathing at all.

2.) The dm-multipath subsystem detects that /dev/sdc and /dev/sdf both have the same WWID, and sets up a multipath device for it. The multipath configuration includes a custom name FERSIT for the multipath device, so the multipath device is named /dev/mapper/FERSIT.

3.) As the new multipath device is created, it causes an udev event. The udev/hotplug subsystem will detect that /dev/mapper/FERSIT seems to contain a partition table, and automatically runs "kpartx -a /dev/mapper/FERSIT" to create corresponding multipath devices for each partition found.

The default name for the first partition on /dev/mapper/FERSIT would be /dev/mapper/FERSITp1; if this partition device has a different name, there is probably an udev rule somewhere in /etc/udev/rules.d/ that assigns the custom name to the partition. Run "grep FERSIT /etc/udev/rules.d/*" to find it.
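Each of the three layers above can be inspected read-only. A sketch (FERSIT is the alias from this thread; the wrapper only prints the commands, none of which would change anything anyway):

```shell
# Print-only checklist for the three layers described above.
plan() { echo "$*"; }

plan multipath -l                         # layer 2: WWID -> FERSIT mapping
plan kpartx -l /dev/mapper/FERSIT         # layer 3: partitions kpartx would map
plan grep FERSIT /etc/udev/rules.d/*      # any custom partition-name rule
```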

> But users feel that extend Fs process on the MSA is very very slowly, and risk involved.

You probably mean the LUN extension process? The MSA does not care about the filesystem: it extends only the raw LUN. The filesystem extension is done by the "resize2fs" (or whatever) command on the host.

Nevertheless, your users may be right. A more common strategy would be to present a new LUN from the MSA. If LVM was used, it would have been easy to add the new LUN to the existing VG, and then extend the existing LVs as necessary.

> I am able find only file /etc/cluster/cluster.conf which is related to cluster setup.

Yes, that is the only configuration file for the RedHat Cluster Suite (at least for RHEL 5.x; the Cluster Suite for RHEL 4.x might have been different). See the RedHat Cluster Suite documentation.

--------

I would recommend RedHat's "RedHat Enterprise Clustering and Storage Management" course for you. The course code is RH436. It includes 4 days of lectures and labs, and optionally a certification exam (code EX436) on the 5th day.

I sat the course last November, and it covered much of the information required to answer your questions in this thread. But if you want to take the EX436 exam, you must get the basic RHCE certification first: you cannot sit the exam unless you already have an RHCE certification.

MK
Super Advisor

Re: How to increase the filesystem which is under redhat cluster directly?

Perfect, MK. Thanks for your inputs, as always.

I have a few things that need to be clarified.

Current setting:

Two nodes are in a Red Hat cluster; each node has one mount point, and each is present in the cluster configuration.

Node1: /XYZ
Node2: /ABC

New mountpoint: /ABC1

There is a request for one more mount point, which needs to be added as a cluster resource.

1) Can I edit /etc/cluster/cluster.conf while freezing the cluster service?
2) I have attached a txt file with the new configuration added to /etc/cluster/cluster.conf; please see if the changes are OK.
3) Do I need to restart the cluster service after editing the configuration file?


Note: I can get a time window for this activity.
Super Advisor

Re: How to increase the filesystem which is under redhat cluster directly?

A minor correction:

Can I edit /etc/cluster/cluster.conf without freezing the cluster service?
Honored Contributor

Re: How to increase the filesystem which is under redhat cluster directly?

1.) You should never edit the cluster configuration file directly when the cluster is running.

Instead, make a copy of the cluster configuration file:

cp /etc/cluster/cluster.conf /tmp/cluster.conf.new

Make your changes to the new copy:

vi /tmp/cluster.conf.new

Increment the configuration version number (on the 2nd line of the cluster.conf file):

... config_version="18" ...
=>
... config_version="19" ...

Then run this command to apply the update to all cluster members simultaneously:

ccs_tool update /tmp/cluster.conf.new

2.) You should not re-use the same fsid number with two different filesystems.

(The fsid is essentially a random number that is important if the filesystem is exported over NFS. If two exported filesystems have the same fsid, the NFS clients might get confused. If you aren't exporting the filesystem over NFS, it's not such a big deal, but it's bad practice to create duplicate fsids.)

Other than that, it looks OK to me.

3.) The safest choice would be to stop the service before running the "ccs_tool update" command, then restart the service and do a test failover, so that you have positive proof that the new configuration works.

It *may* be possible to add the filesystem without stopping & restarting the service, but I would not recommend it.
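The copy/edit/increment/update sequence in point 1 can be sketched as a script. The awk helper below does the config_version increment automatically, so the number cannot be forgotten during hand-editing (file names and version values are just examples):

```shell
# Print a copy of cluster.conf with config_version incremented by one.
# Review the output, then feed it to "ccs_tool update".
bump_version() {
    awk '{
        if (match($0, /config_version="[0-9]+"/)) {
            # skip the 16 chars of: config_version="  then read the digits
            v = substr($0, RSTART + 16, RLENGTH - 17) + 1
            sub(/config_version="[0-9]+"/, "config_version=\"" v "\"")
        }
        print
    }' "$1"
}

# Example usage (commented out: run on a real cluster node):
# bump_version /etc/cluster/cluster.conf > /tmp/cluster.conf.new
# vi /tmp/cluster.conf.new                 # add the new fs resource
# ccs_tool update /tmp/cluster.conf.new
```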

MK