
lvchange -r N

 
Prashant Zanwar_4
Respected Contributor

lvchange -r N

Can I do this online for an LV under a cluster? Will it cause any problems for the cluster? I guess not!
Thx
Prashant
"Intellect distinguishes between the possible and the impossible; reason distinguishes between the sensible and the senseless. Even the possible can be senseless."
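For reference, a typical sequence for this change might look like the following. This is a sketch only: the volume group and logical volume names (`/dev/vg01/lvol1`) are placeholders, and the exact `lvdisplay` output wording may vary between HP-UX releases.

```shell
# Check the current bad block relocation setting of the LV
# (look for the "Bad block" field in the output):
lvdisplay /dev/vg01/lvol1 | grep -i "bad block"

# Disable bad block relocation; the LV can remain mounted and in use:
lvchange -r n /dev/vg01/lvol1

# Verify the change took effect:
lvdisplay /dev/vg01/lvol1 | grep -i "bad block"
```

As the replies below note, this only changes the relocation policy going forward; it does not touch blocks that have already been remapped.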
3 REPLIES 3
TwoProc
Honored Contributor

Re: lvchange -r N

I don't think it will cause you a problem. I was able to change it on a test machine's /tmp drive with no problem while it was still mounted. So, I can't see that it's going to do anything to the cluster in any way. I guess I could get spooked by what it will do with bad blocks that it may have "already reallocated". Will it go back to using the original bad blocks if it has already remapped some? I kind of doubt it - but I can't tell by reading the man pages on it. Maybe someone else has more info on that.
We are the people our parents warned us about --Jimmy Buffett
Florian Heigl (new acc)
Honored Contributor

Re: lvchange -r N

First off: it works online without problems.
regarding John's post:

Yes, this is right: the block list will NOT be reset at this point. I luckily haven't had to go through this in real life yet, but there are a few threads on this behaviour and its solutions here on ITRC, though they might date back a year or two.

What I remember:
If it turns out you have inaccessible blocks after having used -r y/n, call in HP's competency center.
They can reset the block maps and verify that everything is where it belongs and that you can reach it.
Rumor also has it that there is a tool/script to handle these things.
I do not know if it's one of the CCS tools in /usr/contrib/bin or something else. These tools are intended for people who could also get the job done without them, and are too arcane/risky for beginners like me to play with :)
yesterday I stood at the edge. Today I'm one step ahead.
Silver_1
Regular Advisor

Re: lvchange -r N

Hi,

I have done this on 8 cluster nodes without any problems.
TX