12-14-2006 06:37 AM
Lessons learned in implementing a Windows NFS server for HP-UX client usage
Lessons learned about HP-UX and using Windows to provide NFS storage for HP-UX clients. (Or, how to spend over 2 years implementing a working NFS storage solution for your network ;-) )
Warning, long message, but hopefully helpful for others that might hit similar issues.
--
First important point:
======================
If you are going to implement a Windows solution for your NFS storage needs, do not go beyond Microsoft's Services for Unix version 3.5 (SFU 3.5), regardless of which server O/S you run it on top of. Going to Windows Server 2003 R2 (W2K3r2) and using its 'new' Services for NFS components (SfNFS) is asking for trouble. You'll likely never get file locking working, and if you do, you'll need assistance from Microsoft Support that is not commonly available at this time (meaning paid support calls and time spent being bounced from support group to support group before you reach someone who can actually help).
At the time of this writing (12/14/2006), there's no documented solution to the lack of file-locking support for HP-UX clients, while documentation does exist to guide you through making it work on the older SFU product line (versions 3.5, 3.0, and apparently further back).
Dave Olker previously provided me the links to the notes on the solution, but you can find them with a little searching through Microsoft's support area (online knowledge base). Here are the important passages:
Microsoft Knowledge Base article 328858:
http://support.microsoft.com/kb/328858/en-us
Lock Requests With No Authentication Credentials
Server for NFS does not honor lock requests that do not have authentication credentials.
To allow NFS clients to lock files without providing any authentication credentials, Server for NFS has been modified to support advisory locks.
To support advisory locks, set the following registry value to zero to disable mandatory locks:
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\NfsSvr\NlmNsm\EnableSMBLocking
If you change this registry value to "0" and then stop/start the NFS server, or reboot the Windows system (whichever is easier), then you should be able to lock files on the Windows system.
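For reference, the change from KB 328858 can be written as a .reg file (the key path is taken from the article above; apply it with regedit or `reg import`, then restart the NFS server, e.g. with `nfsadmin server stop` and `nfsadmin server start` on SFU):

```reg
Windows Registry Editor Version 5.00

; Disable mandatory locks so Server for NFS grants advisory locks to
; clients (such as HP-UX) that send lock requests without credentials.
[HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\NfsSvr\NlmNsm]
"EnableSMBLocking"=dword:00000000
```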
Second important point:
=======================
Again, stick with SFU 3.5 or older. It should still be available for download from Microsoft even though support for the product has reportedly ended, as Microsoft has rolled the product up into Windows Server 2003 R2 and updated (and slightly renamed) the components as Services for NFS (SfNFS).
Another point:
==============
Don't be afraid to run SFU 3.5 (or older) on Windows Server 2003 R2. R2 is basically W2K3 'patched' and slightly improved. The improvements are very slight, and the behind the scenes stuff on R2 doesn't keep SFU from running on it. Feel free to run SFU 3.5 and use it as you wait for Microsoft to (hopefully) eventually document the proper configuration of the newer components so that they'll properly support HP-UX and other client systems.
Moving on:
==========
Don't try to use NIS on SFU or SfNFS. We tried here (sorry, I can't get too detailed about where 'here' is, but some folks might figure it out; either way, this general tip applies to anyone), and while it 'mostly works', we had no end of grief with maps. The simple username-mapping function works fine, but the services map, protocol map, and others never seemed to propagate correctly. We gave up on using SFU or SfNFS to provide NIS functionality and stuck with NIS running on the HP-UX side. Much easier, more reliable, and better understood.
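If you do experiment with NIS anyway, a few quick checks from an HP-UX client will tell you whether the maps are actually propagating (the map and user names below are only examples; these need a live NIS domain to run):

```shell
ypwhich                              # which NIS server is this client bound to?
ypcat -k services.byname | head -5   # is the services map populated at all?
ypmatch someuser passwd.byname       # can a user entry be resolved?
```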
Next:
=====
If you are having problems with NFS performance on your HP-UX clients, don't overlook the obvious stuff, and don't assume your systems are really telling you the truth. We had a very long period where two HP-UX clients performed horribly slowly when reading from and writing to the Windows-based NFS server. All along, those same clients would use space served on each other and run at normal speeds. Simple network tests (ping, etc.) showed no problems.
landiag showed proper configurations.
sam showed proper configurations.
And yet things weren't working right.
It turned out, for us, that the two problematic HP-UX clients had been *manually locked* at 100Base-TX full duplex. Our network support people had instructed us to manually lock the port speed/duplex when running on their managed switches, but we had since moved to smaller unmanaged switches (8-port Gigabit Cisco units) behind a firewall device we manage. Once we moved to those ports we should have reconfigured the HP-UX clients for auto-negotiation, but we didn't realize it would even be an issue.
Flipping the interface to auto-negotiation was like flipping on a light switch: we went from 4 KB/sec to 4-6 MB/sec. An incredible boost (back to expected speeds).
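On HP-UX 11.x the check and the fix look roughly like this (PPA number 0 is an assumption here; use `lanscan` to find your own, and note that the persistent setting lives in a driver-specific file under /etc/rc.config.d, so re-check it after reboot):

```shell
lanscan                    # list interfaces and their PPA numbers
lanadmin -x 0              # show current speed/duplex for PPA 0
lanadmin -X auto_on 0      # switch PPA 0 back to auto-negotiation
```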
Use your available resources:
=============================
This is a thank you to the great folks at HP for providing resources like these forums (where notes like this get archived and can be searched in the future for solutions for other folks having similar problems). The same for Microsoft and their online knowledge base. And of course the great people that do speak up in the forums and offer assistance for people that may need more than just a little hand-holding to get through tests, reconfigurations, and more.
Don't buy the marketing hype:
=============================
Getting back to the W2K3r2 and SfNFS issue: one of the reasons we updated systems here to R2 and went to SfNFS was that there were notes suggesting R2 would provide significant performance improvements. Hah! Marketing hype, at least judging by the real-world evidence here. We found no significant improvement, and in fact saw some decrease in performance with R2 and the SfNFS product. Not far out of norms, but enough to call b.s. on the claims that R2 and SfNFS were any better than SFU 3.5.
Hardware performance warning:
=============================
One of the things that caught us earlier, performance-wise (before we apparently killed ourselves with the network issue that affected the two problem systems), was a vast difference in disk I/O speed on our Windows NFS server versus local disk I/O and versus HP-UX-to-HP-UX NFS I/O.
There's likely always going to be a performance hit for NFS I/O speed versus local disk I/O. (Remember network overhead that comes into play). But there shouldn't have been such a huge difference in HP-UX NFS server versus Windows NFS server performance.
For us there was a huge performance difference though. On the order of 10 times slower than local disk I/O speed, and roughly 6 times slower when writing to the Windows NFS server. We struggled for a good while on the issue before realizing where the problem was stemming from.
We are using Dell hardware and had originally been working on implementing a Microsoft clustered solution. We started back with Windows 2000 Server, moved to Windows Server 2003 along the way (once we decided we wanted clustering), then moved to Windows Server 2003 Enterprise once we hit the roadblock of finding that Microsoft wanted more money to really include those features in the O/S.
Once we moved from the single server we'd done our initial SFU testing on to the clustered solution and started running on it, the users began complaining about disk I/O performance. Simple test measurements (time cp /path1/file1 /remotepath/remotefilename or similar) confirmed the differences in speed. That's part of what pushed us to pursue R2 and SfNFS.
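A quick way to reproduce that kind of measurement on any NFS mount is to time a fixed-size write with `dd` and let it report the rate (the path here is a placeholder; point it at your own mount point):

```shell
# Write 10 MB through the mount; conv=fsync makes sure the data actually
# reaches the server before dd reports elapsed time and throughput.
# /tmp stands in for your NFS mount point in this sketch.
dd if=/dev/zero of=/tmp/nfs_speed_test bs=1M count=10 conv=fsync
rm -f /tmp/nfs_speed_test
```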
Along the way we realized that using clustering on Dell's hardware meant losing Write-Back caching on the logical disk that was being served from. Once you flip a Dell PERC controller to Cluster mode, you find you can't use Write-Back caching at all. In fact, if the logical drives that are running from the PERC controller were set for Write-Back caching, you can't set the PERC controller to Cluster mode. You must go to Write-Through caching before you can do that.
The difference in performance between Write-Back caching on a typical PERC card versus Write-Through caching is huge. It is literally 6 - 10 times longer for disk I/O speeds with Write-Through versus using Write-Back caching.
If you don't understand the difference between the two: with Write-Through caching, disk I/O isn't complete until the data is physically written to disk, so there should (in theory) be almost no potential for data loss. With Write-Back caching, data loss or corruption is possible if your system loses power during I/O operations. It's incredibly important to have your system on uninterruptible power if you'll be using Write-Back caching, and equally important that the users of the system are aware of the potential for data corruption if they choose performance over reliability.
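You can get a feel for the same trade-off from the client side with `dd` on a local disk: forcing every write to stable storage approximates the write-through guarantee, while letting the OS cache absorb the writes behaves like write-back. Paths and sizes here are arbitrary examples; actual numbers will vary with your hardware.

```shell
# Write-through-like: dd does not exit until the data is on stable storage.
time dd if=/dev/zero of=/tmp/wt_test bs=64k count=160 conv=fsync
# Write-back-like: the OS page cache (standing in for a controller cache)
# absorbs the writes, so the reported time is usually much shorter.
time dd if=/dev/zero of=/tmp/wb_test bs=64k count=160
rm -f /tmp/wt_test /tmp/wb_test
```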
In our case, our users do want performance over reliability and high availability, so we've ditched clustering and are using a single server directly connected to an external RAID. If that server fails, we have another server ready to take the connection to the external RAID, and with a little system admin work we can bring the system back online in a very reasonable period of time.
Be secure:
==========
Something that set us back severely was a security incident that occurred because we were mistakenly under the impression that the network we'd be using was firewalled or otherwise restricted/protected in some way.
We had no end of grief trying to use ZoneAlarm or the Integrity Desktop client software on our servers; even Windows Firewall on the Windows server caused us grief. All of them would interfere in some way with NIS, NFS, or our backup system, no matter what we did to allow the various programs that needed access to other systems through.
With all of those problems, and users of the system screaming at us to turn off the d@@@ed firewalls, and the system admins on the HP-UX side telling us to do the same, we did and we got burned.
There wasn't a firewall in place, and almost no other restrictions on the network the resources were on. We went through an IT incident, lost physical resources and waited to get them back after they'd passed through the proper hands, and then had to start all over again.
It was in the starting all over again that we introduced our own hardware firewall device and reconfigured our network (which led to the later performance problems).
But we did make the systems secure, and we learned the hard way that we shouldn't have relied on someone else to provide security (the first layer of defense) for our systems. Not that we'd done that on our regular systems, but we had made the mistake of ass-u-me-ing on the network these systems were on. Never again.
I think that about covers things that others would learn from. I hope someone finds the info useful in the future. I can't promise to keep checking back in the ITRC forums, but I should get notification if this topic gets any replies. I'd love to try to help others if they are in similar positions, and of course really appreciate the help that others lent on our problems here. Either way, I wish everyone the best of luck and happy computing!
--
(Special thanks to Dave Olker for the reminder to put up this information for others to potentially benefit from here).
3 REPLIES
12-14-2006 07:41 AM
Re: Lessons learned in implementing a Windows NFS server for HP-UX client usage
Excellent Write-Up Barry!
So does this apply as well to NAS Heads running Microsoft OS or NAS Servers based on Windows Server 2003 File Server Edition?
I was briefly running SFU 3.5 for a client serving a very heavy 100 users (a mix of HP-UX, Solaris, and WinTel/Lintel workstations in the CAD/CAM geo/oil industry), and the issue we had then was frequent lockups and crashes of our NAS head (a built-up ProLiant hooked up by FC to an EVA 5K); performance back then seemed acceptable. Last I heard, they simply migrated to a Solaris NAS head serving multiple protocols: NFS, CIFS, and IPX/SPX.
Hakuna Matata.
12-14-2006 06:12 PM
Re: Lessons learned in implementing a Windows NFS server for HP-UX client usage
Kudos for getting it to work, but the question remains: why? It looks like NFS is a bolted-on, broken implementation on Windows, and you'd use that as a server?
I'd rather use MS's own file-sharing protocol (SMB) and have the HP-UX client connect to it.
12-15-2006 03:38 PM
Re: Lessons learned in implementing a Windows NFS server for HP-UX client usage
In answer to Nelson: sorry, I have no info on NAS heads or Windows Server 2003 File Server Edition, though I suspect those configurations will eventually lead to a solution to the file-locking problem (probably just a configuration issue, but one that Bill Gates and his evil minions haven't yet documented) that I had here with R2 and the SfNFS components.
In answer to Dirk: why? Because I was told to.
Seriously, because it was the cheapest solution (outside of the horrendous amount of time we spent getting it working completely) that we could implement for the usage we'll have here.
We could have added a few drives to the few NFS clients we have and then done as used to be done here: a big mesh of drives cross-mounted all over the place. Several problems would have remained, though, including a mix of older hardware that is somewhat limited in what drives can be used (size, etc.). And of course, drives cross-mounted all over the place are terribly inefficient.
As is we have a fairly inexpensive disk farm that is running on a stable O/S (for our purposes) with NFS bolted on.
We paid nothing for SFU. On the other hand if we used SAMBA or something similar the HP-UX admins would be bugging the bejeezus outta me over all sorts of permissions issues and other things. This way the worst case for them is one where someone hasn't done the mapping in the SFU name mapping service that is necessary to get the accounts to agree on both sides.
Sadly (for HP) in the next 10 years (give or take) these systems and the other bunch of them that run the "production" environment here will likely get replaced with Windows systems, or perhaps with Linux systems. Hard to say for sure, but most likely the winner will be Windows. It's generally cheaper, is more prevalent, and runs what is needed to run without too much difficulty.
HP-UX will likely fade to the back, and HP will likely wind up following IBM's lead in going Linux, or offering support for whatever Microsoft's 64-bit (or other server-level hardware) server O/S turns out to be.
(And understand please, I'm not trying to knock HP, I expect they'll be around for a long time, but I also expect HP-UX will eventually be retired in favor of going with some flavor(s) of Linux and having the former HP-UX development and support work become more of a drivers and support work for Linux instead.)