StoreVirtual Storage
Moving CSV between nodes unreliable



I'm having a lot of problems with a 4-node P4300 (2x 7.2 Starter SAN). If I try to move a CSV between nodes, it sits at 'Offline Pending' for about 5 minutes, then changes status to 'Failed'. If I just leave it, sometimes it does nothing and stays in the failed state; other times it changes the owner, sits at 'Online Pending' for a while, then goes back to 'Failed'. On the third attempt it usually comes online. Sometimes it requires a full reboot of the cluster.


This whole process causes all the Hyper-V VMs to freeze, and it creates a lot of disruption. I attached a second LUN as a test and it moved back and forth with no problem.


The volume is 5+ TB with around 30 VMs on it. I was advised that this used to be a problem in the past but isn't any more. Is that true? Would it be a better idea to split it into smaller volumes?


Any tips on what to do would be great. It's a two-node 2008 R2 cluster that passes all validation tests.





Honored Contributor

Re: Moving CSV between nodes unreliable

I'd guess it has something to do with the size of the LUN; 5+ TB is rather large. Why do you want all your VMs in one LUN? Unless there's a need for it, I personally try to keep each of my LUNs at 1 TB or smaller. Think about the rebuild time from a DR standpoint if you have to recover that complete LUN!
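To put a rough number on that rebuild-time point, here is a minimal back-of-the-envelope sketch. The 100 MB/s sustained restore throughput is purely an assumed figure for illustration (real throughput depends on your backup target, network, and SAN load); the point is how linearly restore time scales with LUN size.

```python
def restore_hours(lun_tb, mb_per_s=100):
    """Estimate hours to restore a LUN of lun_tb terabytes
    at an assumed sustained rate of mb_per_s MB/s."""
    mb = lun_tb * 1024 * 1024  # TB -> MB
    return mb / mb_per_s / 3600  # seconds -> hours

for size_tb in (1, 5):
    print(f"{size_tb} TB LUN: ~{restore_hours(size_tb):.1f} h to restore")
```

At the assumed rate, a 1 TB LUN restores in roughly 3 hours while a single 5 TB LUN takes close to 15, and with everything on one volume all 30 VMs stay down for that entire window instead of coming back in stages.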


Re: Moving CSV between nodes unreliable

When I explained to the engineer who installed it that all the storage was going to be used purely for VMs, he just said I may as well use it all as one big volume. He said there used to be issues doing that in the past but they had all been resolved... it made me a little concerned, but I trusted his judgement.


The LUN restore time isn't something I'd given much thought to, but it makes a lot of sense. I'll migrate all my VMs to an alternate SAN, split the LUNs up a bit, and migrate back.


Thanks for the reply.