
Horcm madness

Olivier Masse
Honored Contributor

Horcm madness

Hi,

We've been using HORCM for the last two years to manage our BCs and CAs, but keeping it up to date wasn't my job until recently. Now that I'm starting to work with it the hard way, I feel disgusted. Those who know this tool might understand me. We have a bunch of HORCM instances spread out over at least two dozen servers, and it's a nightmare to keep them updated correctly.

Does anybody out there have best practices for HORCM that they would like to share?

For example, is it possible to centralize all the HORCM instances on as few servers as possible, without having to zone all the disks to them?

And apparently, it is possible to do the pairing in Command View, which might be easier. Is there an API I could use to talk to the XP directly to manage such pairs, bypassing HORCM?

Thanks
2 REPLIES
Devender Khatana
Honored Contributor

Re: Horcm madness

Hi,

We also have three XPs at my site, and each of them has two BCs. The only thing we need to do is keep the HORCM manager running on all the hosts where these BCs are mounted. It doesn't really take much effort; we only need to start it once after a server reboot.

What problems are you having in achieving this? We also have HORCM running on at least 6 servers and have had no problems so far (at least for the last 3 years).

You cannot centralize all HORCM instances. They have to run on the servers where the BCs are required to be mounted.
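For reference, each such host carries its own horcm<N>.conf describing its instance. A minimal sketch of the usual four sections follows; the hostnames, command-device path, group name, port, and IDs below are hypothetical placeholders, not values from a real array (in practice you would confirm port/TargetID/LU# with raidscan):

```
HORCM_MON
#ip_address   service  poll(10ms)  timeout(10ms)
hostA         horcm0   1000        3000

HORCM_CMD
#command device (site-specific raw device path)
/dev/rdsk/c4t0d0

HORCM_DEV
#dev_group  dev_name  port#   TargetID  LU#  MU#
VG_PROD     disk01    CL1-A   0         1    0

HORCM_INST
#dev_group  ip_address  service
VG_PROD     hostB       horcm1
```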

HTH,
Devender
Impossible itself mentions "I m possible"
Olivier Masse
Honored Contributor

Re: Horcm madness

Hi Devender,

Our environment is big and changes constantly. The problem is that as soon as we add just one disk to one of our main production VGs, we need to keep the CA, the backup BC, and all the pairs used for various refreshes that ultimately depend on it up to date. This requires a lot of editing of horcm tables, and running raidscan quite a lot.

We have a lot of pairs, each with at least 40 disks. We have 90 HORCM instances on 31 production and development servers.

It's becoming hard to manage, as it requires a lot of manual input and is prone to errors.
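One way to cut down on the hand-editing described above is to generate the repetitive parts of horcm.conf from a single device list. A minimal sketch (the group, device, and port names are hypothetical placeholders, not from a real configuration) that renders a HORCM_DEV section, so one list can drive every instance's config:

```python
# Sketch: render a HORCM_DEV section from a device list instead of
# editing each horcm.conf by hand. All names below are made-up examples.

# Each tuple: (dev_group, dev_name, port, target_id, lun, mu)
DEVICES = [
    ("VG_PROD", "prod_d001", "CL1-A", 0, 1, 0),
    ("VG_PROD", "prod_d002", "CL1-A", 0, 2, 0),
]

def horcm_dev_section(devices):
    """Render a HORCM_DEV section in horcm.conf column layout."""
    lines = [
        "HORCM_DEV",
        "#dev_group  dev_name   port#  TargetID  LU#  MU#",
    ]
    for group, name, port, tid, lun, mu in devices:
        # Fixed-width columns keep the file readable and diff-friendly.
        lines.append(f"{group:<12}{name:<11}{port:<7}{tid:<10}{lun:<5}{mu}")
    return "\n".join(lines) + "\n"

if __name__ == "__main__":
    print(horcm_dev_section(DEVICES))
```

The same device list could then feed the other instances' files (BC, CA, refresh pairs), so adding one disk means updating one list and regenerating, rather than touching dozens of files.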

Thanks