<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Redhat clustering for squid in Operating System - Linux</title>
    <link>https://community.hpe.com/t5/operating-system-linux/redhat-clustering-for-squid/m-p/4362056#M35305</link>
    <description>Shalom,&lt;BR /&gt;&lt;BR /&gt;All you need to do is set up a resource as an IP address and put it into a failover domain that includes both nodes, or:&lt;BR /&gt;&lt;BR /&gt;You can include the IP address with the squid package to ensure the IP address and squid are always running on the same node.&lt;BR /&gt;&lt;BR /&gt;SEP</description>
    <pubDate>Fri, 20 Feb 2009 13:10:08 GMT</pubDate>
    <dc:creator>Steven E. Protter</dc:creator>
    <dc:date>2009-02-20T13:10:08Z</dc:date>
    <item>
      <title>Redhat clustering for squid</title>
      <link>https://community.hpe.com/t5/operating-system-linux/redhat-clustering-for-squid/m-p/4362050#M35299</link>
      <description>Dear friends,&lt;BR /&gt;&lt;BR /&gt;I am trying to create a Red Hat cluster with squid as a cluster service. My requirement is as follows.&lt;BR /&gt;&lt;BR /&gt;I would have 2 servers, both running squid locally, and the cache data would also be independent on each node; only the virtual IP is common to both nodes. Only one node will serve requests at a time.&lt;BR /&gt;&lt;BR /&gt;In case one of the nodes goes down, the virtual IP should fail over to the other node.&lt;BR /&gt;&lt;BR /&gt;Please let me know whether it is possible to set this up, and if so, could anyone share the steps with me?&lt;BR /&gt;&lt;BR /&gt;I have configured a basic cluster and my cman daemon is running.&lt;BR /&gt;&lt;BR /&gt;clustat&lt;BR /&gt;&lt;BR /&gt;msg_open: No such file or directory&lt;BR /&gt;&lt;BR /&gt;Member Status: Quorate&lt;BR /&gt;&lt;BR /&gt; &lt;BR /&gt;&lt;BR /&gt;  Member Name                        ID   Status&lt;BR /&gt;&lt;BR /&gt;  ------ ----                        ---- ------&lt;BR /&gt;&lt;BR /&gt;  MYUSVWSHQLABHA2                       1 Online&lt;BR /&gt;&lt;BR /&gt;  MYUSVWSHQLABHA1                       2 Online, Local&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Thanks,&lt;BR /&gt;Adithyan</description>
      <pubDate>Thu, 19 Feb 2009 08:28:05 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/redhat-clustering-for-squid/m-p/4362050#M35299</guid>
      <dc:creator>Adithyan</dc:creator>
      <dc:date>2009-02-19T08:28:05Z</dc:date>
    </item>
    <item>
      <title>Re: Redhat clustering for squid</title>
      <link>https://community.hpe.com/t5/operating-system-linux/redhat-clustering-for-squid/m-p/4362051#M35300</link>
      <description>I think a better approach is to have both squid servers "serving" the clients. You can do that with Piranha, part of the Red Hat Cluster Suite. Basically, you must configure a Linux Virtual Server. The only problem is that this configuration needs at least 3 servers.&lt;BR /&gt;&lt;BR /&gt;A failover cluster is simpler: just create an IP address resource and attach a script resource to it that points to the /etc/rc.d/init.d/squid script.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Thu, 19 Feb 2009 10:51:37 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/redhat-clustering-for-squid/m-p/4362051#M35300</guid>
      <dc:creator>Ivan Ferreira</dc:creator>
      <dc:date>2009-02-19T10:51:37Z</dc:date>
    </item>
    <item>
      <title>Re: Redhat clustering for squid</title>
      <link>https://community.hpe.com/t5/operating-system-linux/redhat-clustering-for-squid/m-p/4362052#M35301</link>
      <description>Hi Ivan,&lt;BR /&gt;&lt;BR /&gt;Thanks for your quick response. I have a question here: can I configure the failover cluster without shared storage? If yes, do you have any sample config file?&lt;BR /&gt;&lt;BR /&gt;And do you know how to check the heartbeat status in a Red Hat cluster, and how would I dedicate an interface for heartbeat?&lt;BR /&gt;&lt;BR /&gt;Thanks again for your support.&lt;BR /&gt;Regards, Adi.</description>
      <pubDate>Thu, 19 Feb 2009 11:05:53 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/redhat-clustering-for-squid/m-p/4362052#M35301</guid>
      <dc:creator>Adithyan</dc:creator>
      <dc:date>2009-02-19T11:05:53Z</dc:date>
    </item>
    <item>
      <title>Re: Redhat clustering for squid</title>
      <link>https://community.hpe.com/t5/operating-system-linux/redhat-clustering-for-squid/m-p/4362053#M35302</link>
      <description>&amp;gt;&amp;gt;&amp;gt; can I configure the failover cluster without shared storage? If yes, do you have any sample config file?&lt;BR /&gt;&lt;BR /&gt;Yes, you can, but I don't have a sample config file. Just use system-config-cluster.&lt;BR /&gt;&lt;BR /&gt;&amp;gt;&amp;gt;&amp;gt; And do you know how to check the heartbeat status in a Red Hat cluster, and how would I dedicate an interface for heartbeat?&lt;BR /&gt;&lt;BR /&gt;I don't understand this question. You should take a look at the Red Hat Cluster Suite documentation available on the Red Hat site.&lt;BR /&gt;&lt;BR /&gt;One more thing: if your service will be started by Red Hat Cluster, make sure to disable it from starting at boot with the chkconfig command.</description>
      <pubDate>Thu, 19 Feb 2009 11:22:32 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/redhat-clustering-for-squid/m-p/4362053#M35302</guid>
      <dc:creator>Ivan Ferreira</dc:creator>
      <dc:date>2009-02-19T11:22:32Z</dc:date>
    </item>
    <item>
      <title>Re: Redhat clustering for squid</title>
      <link>https://community.hpe.com/t5/operating-system-linux/redhat-clustering-for-squid/m-p/4362054#M35303</link>
      <description>Shalom,&lt;BR /&gt;&lt;BR /&gt;With Red Hat Clustering this is possible.&lt;BR /&gt;&lt;BR /&gt;You build the two-node cluster as you have done.&lt;BR /&gt;&lt;BR /&gt;What you need to add is a sync script that copies the squid cache data from the active node to the passive node:&lt;BR /&gt;&lt;BR /&gt;   rsync -avH --stats --delete -e ssh /var/httpd/ $othernode:/var/httpd/&lt;BR /&gt;&lt;BR /&gt;The variable othernode is set in a configuration file. The rsync command, which saves a lot of time by skipping files that have not changed, runs over ssh (to keep the data stream encrypted) from the active node to the passive node.&lt;BR /&gt;&lt;BR /&gt;SEP</description>
      <pubDate>Thu, 19 Feb 2009 15:02:16 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/redhat-clustering-for-squid/m-p/4362054#M35303</guid>
      <dc:creator>Steven E. Protter</dc:creator>
      <dc:date>2009-02-19T15:02:16Z</dc:date>
    </item>
    <item>
      <title>Re: Redhat clustering for squid</title>
      <link>https://community.hpe.com/t5/operating-system-linux/redhat-clustering-for-squid/m-p/4362055#M35304</link>
      <description>Hi SEP,&lt;BR /&gt;&lt;BR /&gt;Thanks a ton for your advice.&lt;BR /&gt;&lt;BR /&gt;Here I am not looking to copy the cache data between servers; I just need the IP failover. My confusion is: how do I assign the virtual IP address to the proxy?&lt;BR /&gt;&lt;BR /&gt;I have squid running on both servers. Normally the clients access the proxy using either its name or IP address; how would I assign a virtual IP to it so that both servers can serve clients (one at a time) through this VIP? This may be very simple in a cluster, but honestly I have no idea how to do it. If you could give me the steps I would be really grateful!&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;Adi.&lt;BR /&gt;</description>
      <pubDate>Fri, 20 Feb 2009 12:14:58 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/redhat-clustering-for-squid/m-p/4362055#M35304</guid>
      <dc:creator>Adithyan</dc:creator>
      <dc:date>2009-02-20T12:14:58Z</dc:date>
    </item>
    <item>
      <title>Re: Redhat clustering for squid</title>
      <link>https://community.hpe.com/t5/operating-system-linux/redhat-clustering-for-squid/m-p/4362056#M35305</link>
      <description>Shalom,&lt;BR /&gt;&lt;BR /&gt;All you need to do is set up a resource as an IP address and put it into a failover domain that includes both nodes, or:&lt;BR /&gt;&lt;BR /&gt;You can include the IP address with the squid package to ensure the IP address and squid are always running on the same node.&lt;BR /&gt;&lt;BR /&gt;SEP</description>
      <pubDate>Fri, 20 Feb 2009 13:10:08 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/redhat-clustering-for-squid/m-p/4362056#M35305</guid>
      <dc:creator>Steven E. Protter</dc:creator>
      <dc:date>2009-02-20T13:10:08Z</dc:date>
    </item>
  </channel>
</rss>