10-27-2013 08:35 AM
sFTP server setup (Active/Active mode)
Hi All,
I am looking to set up two sFTP servers on Red Hat Enterprise Linux 6.4. The requirements are as follows:
1) sFTP users will be authenticated via Active Directory (AD).
2) sFTP chroot (jail) must be enabled and functional.
3) We have to set up two sFTP servers. They will be load balanced (round robin) via load balancers, and the requirement is to have BOTH nodes active in the pool.
4) sFTP users will have their own filesystems / mount points where files will be uploaded.
The challenge is that these filesystems are on SAN storage, and we need to present the same SAN filesystems (same mount points) on BOTH nodes, since the two nodes will be active/active behind the load balancer.
Please advise on the best approach to accomplish this.
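For reference, requirements 1 and 2 are commonly handled with OpenSSH's internal-sftp subsystem plus sssd (or winbind) for the AD side. A minimal sshd_config sketch, where "sftpusers" is a placeholder for an AD group made visible to the OS via sssd/winbind:

```
# /etc/ssh/sshd_config -- sketch only; "sftpusers" is a placeholder AD group.
# AD authentication itself is handled by PAM + sssd (or winbind), not by sshd.
Subsystem sftp internal-sftp

Match Group sftpusers
    ChrootDirectory /sftp/%u      # must be root-owned, not group/world-writable
    ForceCommand internal-sftp
    AllowTcpForwarding no
    X11Forwarding no
```

Note that sshd enforces strict ownership/permissions on the ChrootDirectory path, and users need a writable subdirectory inside the jail for uploads.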
Appreciate your help.
Regards,
Raja
- Tags:
- sftp
10-28-2013 01:16 AM
Re: sFTP server setup (Active/Active mode)
One possible solution to the filesystem issue is to make the users' filesystems available on some other host (or a NAS device) as NFS shares, which are then mounted on both sFTP servers (optionally using the automounter).
If there will be a lot of users, this is probably the easiest way to implement this.
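As a sketch, assuming the NFS server is reachable as nfs-server (a placeholder name) and exports one directory per user, the automounter configuration on each sFTP node could look like:

```
# /etc/auto.master -- mount user directories on demand under /sftp
/sftp   /etc/auto.sftp  --timeout=300

# /etc/auto.sftp -- wildcard map: /sftp/<user> -> nfs-server:/export/<user>
*   -rw,hard,intr   nfs-server:/export/&
```

The wildcard map means you don't have to maintain a per-user entry on either node; adding a user only requires a new export on the NFS server.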
---
If the sFTP servers must not depend on any other hosts/NAS devices, then the simplest implementation would be to use a cluster filesystem (GFS2), which requires setting up a RedHat Cluster for the coordination of cluster filesystem locks. But this will probably be inconvenient if you have more than just a few users. Having a large number of GFS filesystems may be a waste of system resources.
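For illustration only, on RHEL 6 a GFS2 filesystem shared by the two nodes is created roughly like this (the cluster name, volume group, and device paths below are placeholders, and a working cman/clvmd cluster must already be configured):

```
# Run on one node, with the cluster (cman) and clustered LVM (clvmd) already up.
# -t <clustername>:<fsname> must match the cluster name in /etc/cluster/cluster.conf;
# -j 2 creates one journal per node.
mkfs.gfs2 -p lock_dlm -t sftpcluster:sftpdata -j 2 /dev/vg_san/lv_sftp

# Then mount it on BOTH nodes (typically via /etc/fstab):
mount -t gfs2 /dev/vg_san/lv_sftp /sftp
```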
---
An alternative solution would be to set up RedHat Cluster on the sFTP servers and configure a highly available NFS service on the cluster. The cluster node that is currently running the NFS service could be called the "master" node: only it will mount and access the SAN disks directly, and present a floating IP address for NFS access. Both nodes will then use an automounter to mount the users' filesystems using the floating IP to access the NFS shares.
This is not a particularly elegant solution, but it should be workable: the master role should be switchable from one server to another without terminating any existing sFTP connections, although the switch will cause a delay on NFS file operations. If the switch happens because the previous master node crashed, the new master node will have to run filesystem checks before it can mount the filesystems, which will cause a larger delay.
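Sketching this layout, with 192.0.2.10 as a placeholder for the cluster-managed floating IP: the current master exports the SAN filesystems over NFS, and both nodes automount them through that IP:

```
# /etc/exports on the node currently holding the NFS service (the "master"):
/export/sftp    192.0.2.0/24(rw,sync,no_root_squash)

# /etc/auto.master on BOTH nodes:
/sftp   /etc/auto.sftp  --timeout=300

# /etc/auto.sftp -- 192.0.2.10 is the floating IP (placeholder address)
*   -rw,hard,intr   192.0.2.10:/export/sftp/&
```

Using hard NFS mounts here matters: during a failover, file operations stall until the floating IP moves, rather than returning errors to the sFTP clients.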
[If you attempt to mount a regular non-cluster filesystem (ext2/ext3/ext4, etc.) on two or more servers simultaneously, each server will assume it is the only one accessing the filesystem and will cache the filesystem metadata to speed up access. Because of this caching, their views of the filesystem state will diverge as soon as there are any write operations, and the filesystem is then guaranteed to become corrupted.]