HPE EVA Storage

SOLVED
Kevin_31
Regular Advisor

running os (hpux11i) on SAN disk

I saw an argument against doing this on EMC SAN at this thread:

http://forums.itrc.hp.com/cm/QuestionAnswer/1,,0xddfba14d9abcd4118fef0090279cd0f9,00.html

And see that many of the principles apply to ANY manufacturer's SAN disks (I have HP's XP48).

Still, the article was written at the end of 2000, and I wonder if improving technology or growing experience/confidence has changed anyone's opinions.

I'd like to boot my boxes from my SAN disks, but I admit I've not been through a disaster scenario with my SAN.

Okay, if I have problems with my SAN then I can't boot ALL my vPars, but each server has two internal disks, so even without the SAN I could boot ONE vPar on each box using those disks (for diagnostic purposes, booting single-user mode, etc., as mentioned in the thread I linked to above).

So, please help me to understand: what's the big "NO-NO" about using my increased speed (100 MB/s over fibre vs. 40 MB/s over SCSI) and improved resilience (RAID-5 on the SAN, etc.)?

If the SAN's down and I can't get to my databases/applications (nor my backup devices, come to think of it!), then does it really matter if I can't get to all my vPar HP-UX instances? It seems like I could make sure the resiliency of my SAN (redundant switches, redundant FC controllers going into each server/backup device, etc.) is as strong as it should be and then carry on using this technology. I reckon my company has spent enough money to buy and support it, so I want to get the most use from it that I can.

What's your experience please?
6 REPLIES
Vincent Fleming
Honored Contributor

Re: running os (hpux11i) on SAN disk

If properly set up, booting from an XP can work very well. I know of several sites in production today that have been booting this way for about a year with no problems. One has some 100 A-class machines booting off of it.

Here's my take on it: IF you boot from your XP, be sure to swap somewhere else. ALWAYS use PV-Links and a redundant SAN architecture.

Keeping all that in mind, your I/O traffic to vg00 should be minimal (particularly if you're running a database, where it should be nearly all logging).
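Vincent's PV-Links advice boils down to presenting the same boot LUN to vg00 over a second path. A minimal sketch of what that looks like on HP-UX, assuming hypothetical device files and boot path (shown in dry-run form, so no command actually runs):

```shell
# Dry-run helper: print each command instead of executing it.
run() { echo "+ $*"; }

# The primary path is already in vg00; adding an alternate device
# file that points at the SAME LUN makes LVM record it as a
# PV-Link (standby path). c9t0d1 is a hypothetical second path.
run vgextend /dev/vg00 /dev/dsk/c9t0d1

# Verify that both links now show against the physical volume.
run vgdisplay -v /dev/vg00

# Register the alternate path as the alternate boot device too
# (hardware path is made up for illustration).
run setboot -a 0/4/0/0.8.0.255.0.0.1
```

With PV-Links, only the first listed path carries I/O; LVM fails over to the standby link automatically if the primary path drops.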

A word of caution... it's best done when there is some significant value-add in doing so - for example, the site I mentioned above with all the A-class machines - they image ALL the machines to be exactly the same - hardware, software, etc. If one dies, they swap it out and boot the new one off the array - very nice. They keep a "golden image" of the OS on the array, and use Business Copy to add new servers. They can also test-install patches, and propagate them using Business Copy.
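The golden-image workflow described above could be scripted against the XP's RAID Manager CLI roughly as below. This is a hedged sketch: the device group name `golden` is hypothetical, and exact pair options vary by array and firmware (dry-run form, nothing executes):

```shell
# Dry-run helper: print each command instead of executing it.
run() { echo "+ $*"; }

# Create a Business Copy pair: the golden OS image is the P-VOL,
# the new server's boot LUN is the S-VOL.
run paircreate -g golden -vl

# Wait until the pair reaches PAIR state, then split it so the
# S-VOL becomes an independent, bootable copy of the image.
run pairevtwait -g golden -s pair -t 600
run pairsplit -g golden
```

The same pattern covers their patch testing: patch a copy, and if it checks out, propagate it by re-pairing against the patched volume.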

So, they added significant value because of their operating environment. How much value will your booting off the array give you?

Good luck!
No matter where you go, there you are.
harry d brown jr
Honored Contributor

Re: running os (hpux11i) on SAN disk

Kevin,

You are right that SAN disk via fibre will be much faster than SCSI disks, and with replication you can further reduce your downtime if you are replicating to another SAN with like servers on the other side.

And you are right; it's exactly what I've been saying: when the SAN is down, my servers are useless anyway!

As you already know, I'm using vPars in my lab, and I'm working with my SA group to implement booting 100+ servers from our EMC SAN (Connectrix + Symms). I'm also going to SRDF (replicate) the root disks to the remote SAN, and I will have vPar servers there waiting to be used for disaster recovery if necessary!
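That SRDF plan for the root disks maps onto the Symmetrix SYMCLI along these lines. A hedged sketch, assuming a hypothetical device group named `rootdg` (dry-run form, so nothing executes):

```shell
# Dry-run helper: print each command instead of executing it.
run() { echo "+ $*"; }

# Start replicating the R1 (local) root devices to their R2
# counterparts on the remote Symmetrix.
run symrdf -g rootdg establish

# In a disaster, make the remote R2 copies read/write so the
# standby vPars on the far side can boot from them.
run symrdf -g rootdg failover
```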

I've talked to EMC and HP, and their concern is the amount of I/O to the root disk. My argument is that I have databases with 100 times more I/O activity than any of my root volumes, so what's their beef? In my opinion, their lack of enthusiasm for booting off the SAN is based purely on ignorance. In EMC's labs they boot their HP, Sun, and NT servers from their SAN, so why the roadblocks, I ask them; they respond, "well, you can if you submit an RPQ and visit our labs".

So I will be doing so soon!

live free or die
harry
Live Free or Die
Ajay Sishodia
Frequent Advisor

Re: running os (hpux11i) on SAN disk

Kevin,

I read almost all the threads on this forum when I was considering booting from EMC, and they all gave a big NO-NO on this. But I agree with Harry: when the SAN is down my servers are not doing much anyway, so it doesn't matter whether they were up or not. And as you said, each machine has an internal primary boot disk. Anyway, I now have 4 vPar machines booting from EMC. I do not see any big issues with this, except that if you do have a problem with external boot, support from HP is not so great if you don't have a supported configuration. (In my case the fibre switch is not supported by HP.)

HTH
Ajay
MAHMOUD
Occasional Contributor
Solution

Re: running os (hpux11i) on SAN disk

The HP support you pay for includes internal disk replacement.

The SAN costs money, so put the maximum of valuable data on it.
Kevin_31
Regular Advisor

Re: running os (hpux11i) on SAN disk

A follow-up to this thread, in case anyone else is considering SAN boot (diskless boot) of their HP-UX servers.

We've got these running on our vPars/production servers for every department now.

Like anything, it's slightly more complicated, meaning you need to be confident and competent, but the pay-off is worthwhile.
The only outage on any of my servers since posting the original message above was down to an internal disk failure. If that server had been booted off its SAN disk, it wouldn't have failed.

Anyway, after running this setup for a while, I can say I'm glad we took the risk and I'd suggest the same to anyone else.

(Next the Windows guys here will look to get their servers booting from SAN)
Mike Naime
Honored Contributor

Re: running os (hpux11i) on SAN disk

I have about 150 VMS servers that use all SAN drives, over identical, redundant fabrics, with no internal disk drives at all. We are running the Oracle DB with our app. We started booting from the SAN sometime in '99 (I do not remember exactly when).

One big difference is that now your system lives in your SAN storage, and not in your server! This is a very different way of thinking about your systems from the past.

A big plus from this is that I can minimize client downtime in case of a hardware failure. I move the functionality to another server, and diagnose/fix the problem on my own or HP's time.

Using the SAN drives allows me to perform what I call a "hardware move". When a system dies for some reason (a failed CPU, memory problems, fried power supplies, etc.), I can move the functionality of that system to another server in about one hour, or less if I can prep for it.

I have to change zoning, identify the new connections (WWIDs), enable the connections to the appropriate LUNs, set the boot LUN on the server, move the network cabling (the one physical connection that I have to change), and boot!

In one instance, we pre-zoned, identified, and enabled everything. I was then able to shut down one server, move (re-patch) the network cable, and be booting within 2 minutes of the old server shutting down.
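Those steps are on VMS, but the same hardware move translates to the HP-UX servers this thread is about. A hedged sketch of the final steps after the fabric has been re-zoned and the boot LUN presented to the replacement server (the hardware path is made up for illustration; dry-run form, so nothing executes):

```shell
# Dry-run helper: print each command instead of executing it.
run() { echo "+ $*"; }

# Point the server's firmware at the new primary boot path
# (the SAN boot LUN as seen from the replacement box).
run setboot -p 0/4/0/0.8.0.255.0.0.1

# Confirm what the server will boot from, then restart.
run setboot
run shutdown -r -y 0
```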

Whoever is recommending that you not boot from the SAN is probably the same kind of person that tells you not to get out of bed today because you might stub your toe, or that Skylab could fall on your head! Or, you could be struck by lightning!

In one HSG80 storage controller, I have 10 non-prod systems all booting from that same storage controller.

In my data center, with redundant power feeds from 2 power grids, a backup generator, and redundant power conditioners feeding redundant PDUs in the rack to redundant internal power supplies: if my SAN is down, then most likely we have other, larger problems to worry about, and my servers and network gear will not boot anyway!

With over 3,000 SAN spindles, we have had fewer than 100 drive failures in the last 3 years. With hardware mirroring/RAID in the storage controllers, this is a non-event: a spare-set drive swaps in, and I can deal with it the next day.

VMS SAN mechanic