Make the move to SGLX

 
Brem Belguebli
Regular Advisor

Make the move to SGLX

Hi,

We are studying a move to SGLX for some of our SG/HP-UX packages.

I got the documentation for SGLX 11.18, and I'm a bit surprised by some of the unsupported configurations.

1) SGLX doesn't support the native Linux DM multipath driver.

2) SGLX doesn't support MD for FC storage (except for the MSA500)!

Does anyone know why?

We have been testing Linux LVM2 mirroring, and quickly ran into the lack of exact (deterministic) mirror mapping when more than two PVs are part of a VG, so we switched back to MD to mirror the low-level disks and then created the PVs, VGs and LVs on top of these mirrored devices to stay deterministic.
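Roughly, the resulting stack looks like the sketch below (device names, sizes and volume names are only examples, not our real configuration):

# Mirror two low-level disks with MD (RAID 1)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdc /dev/sdd

# Build LVM on top of the deterministic mirrored device
pvcreate /dev/md0
vgcreate vg_app /dev/md0
lvcreate -L 20G -n lv_data vg_app
mkfs -t ext3 /dev/vg_app/lv_data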

If we used VxVM combined with SGLX, would there be any support issues?

Brem
8 REPLIES
Steven E. Protter
Exalted Contributor

Re: Make the move to SGLX

Shalom Brem,

The support issues are probably due to lack of testing. What is supported is a small universe compared to what will actually work on SGLX.

That being said, HP wants large enterprise environments running on HP-UX. It's much more reliable than Linux, and HP has really good HP-UX support.

Speculation of course.

SEP
Steven E Protter
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
Brem Belguebli
Regular Advisor

Re: Make the move to SGLX

Salam Stephen,

Thanks for your reply.

I understand what you are assuming, but as an IT manager I'm supposed to provide the best solution at the best price.

We have a lot of SG/HP-UX machines (maybe around 200 nodes, from RX4640s to a few dozen RX8640s), but for some needs, when the price for one machine comes to something like a few hundred thousand euros (without the database licences), there is always this little voice that teases me with the Linux alternative.

There will always be some cases where large Integrity machines (more than 32 cores) running HP-UX are the only viable solution.

I don't know if anyone from the SGLX development team participates in this forum, but it would be great to have their input.

Has anyone already implemented SGLX on top of VxVM?

Brem
Brem Belguebli
Regular Advisor

Re: Make the move to SGLX

Hi,

Sorry Steven for mistyping your name.

I got a few updates on this topic, concerning XDC (and therefore MD).

Reading the various docs, it is mentioned that the only SGLX-supported solution for software mirroring is XDC, and that XDC is based on Linux MD.

What is the difference between the MD driver shipped with the OS and the one provided by XDC? We do not expect to use the MD multipath feature anyway!

Regards

Brem
Serviceguard for Linux
Honored Contributor

Re: Make the move to SGLX

Actually, we do support DM-MPIO. The reason you do not find complete support in the documentation is that HP storage did not have DM-MPIO support until relatively recently. Some details are in the SGLX certification matrix (available at www.hp.com/info/sglx). Be aware that a new version of the certification matrix is about to be released that will cover the case of persistent LUN naming.

There is also more information in the new Deployment Guide: http://docs.hp.com/en/14117/sglx.deployment.guide.pdf. An update of this with more MPIO information is coming soon.

There are two functions within MD: multipath and SW RAID. DM-MPIO and the multipath support within the FC drivers eliminate the need for MD multipath with current arrays.
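For anyone who wants to check whether DM-MPIO is already handling their LUNs, a quick look is something like the commands below (illustrative only; option availability and the right multipath.conf settings depend on the distro release and the array, so check the certification matrix):

# Verify the multipath daemon is running and list the multipathed LUNs
/etc/init.d/multipathd status
multipath -ll

# /etc/multipath.conf - minimal illustrative settings
defaults {
        user_friendly_names yes
}

The idea is that volume groups (and therefore packages) sit on the /dev/mapper devices rather than on the individual /dev/sd paths.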

There is support for MD SW RAID with the product "HP Serviceguard Extended Distance Cluster for Linux" (AKA XDC) (http://docs.hp.com/en/T2808-90008/T2808-90008.pdf). Neither MD nor the SW RAID within Device Mapper (DM), which is used with LVM2, is "cluster safe" on its own (at least this was true the last time we checked). With XDC we have put cluster-safe features on top of MD.

If you really need SW RAID, you can contact your sales person to find out more about XDC, or even get an evaluation copy.
Brem Belguebli
Regular Advisor

Re: Make the move to SGLX

Hello,

Thanks for your reply.

Correct me if I'm wrong:

1) SGLX supports DM-MPIO (the native driver in the latest Linux releases) without any add-on (docs will be updated soon).

2) The native Linux MD driver is not supported for either RAID or multipath (though we did not intend to use the latter) because it is not cluster-safe. HP recommends using XDC instead of plain MD, since it adds cluster-safe support on top of it.

My last question remains: would it be supported if we used VxVM (with its multipath and mirroring features)?

Brem

Serviceguard for Linux
Honored Contributor
Solution

Re: Make the move to SGLX

1 - Yes, but to clarify for others who may read this later: this applies to multipath only.

2 - True. But we will still support MD multipath for the MSA500, as documented.

VxVM support - We don't have support now. We will look at it sometime in the future, but there are no commitments at this time. VxVM may require some changes in the package code.

One reason for this lack of VxVM support is that we have not had a huge number of customers using SW RAID. Most customers are just using RAID storage systems. A few are using SW RAID with XDC across HW RAID systems.
Brem Belguebli
Regular Advisor

Re: Make the move to SGLX

Thanks again for the answer.

It is clear now.

As mentioned in my first post, a few standalone nodes (>= RHEL4U5) are now configured with the native Linux DM multipath driver for multipathing, MD for RAID 1, and LVM on top of that.

1 - The solution to make this stack cluster-safe will be to replace MD with XDC.

2 - Our storage is XP12K/XP24K based!

Concerning VxVM, we will certainly only use VxFS (ext3 being more general-purpose, VxFS providing better performance for small files), since volume management is addressed in point 1.
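So the filesystems would simply be created with VxFS on the existing LVM volumes, along these lines (assuming the Veritas VRTSvxfs package is installed; names are just examples):

# Create and mount a VxFS filesystem on an existing LVM logical volume
mkfs -t vxfs /dev/vg_app/lv_data
mount -t vxfs /dev/vg_app/lv_data /data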

Brem
Brem Belguebli
Regular Advisor

Re: Make the move to SGLX

Thread closed