<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: Retrieve a webpage from a shell script in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/retrieve-a-webpage-from-a-shell-script/m-p/3418880#M707083</link>
    <description>You can't install, but you can compile... just get the sources (for either wget or the Perl modules), compile them, and run from your own directory.&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;&lt;BR /&gt;Fred&lt;BR /&gt;</description>
    <pubDate>Wed, 10 Nov 2004 06:18:49 GMT</pubDate>
    <dc:creator>Fred Ruffet</dc:creator>
    <dc:date>2004-11-10T06:18:49Z</dc:date>
    <item>
      <title>Retrieve a webpage from a shell script</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/retrieve-a-webpage-from-a-shell-script/m-p/3418876#M707079</link>
      <description>O wise ones,&lt;BR /&gt;&lt;BR /&gt;I'm looking for a way to retrieve a webpage from a shell script or the command line. The webpage contains basic system info, like output from 'bdf', but I do not have a login prompt on that machine as it is fully serviced by others.&lt;BR /&gt;&lt;BR /&gt;What is the best way to get hold of that system-info page on a daily basis? I tried Perl, but we don't have the LWP module to do something like:&lt;BR /&gt;&lt;BR /&gt;#!/usr/contrib/bin/perl&lt;BR /&gt;use LWP::Simple;&lt;BR /&gt;$AS=get('http://somelanhost/cgi-bin/solaris_cmd?df');&lt;BR /&gt;print $AS;&lt;BR /&gt;&lt;BR /&gt;I'm not allowed to install tools like 'lynx' et al. to retrieve the info.&lt;BR /&gt;&lt;BR /&gt;Does HP-UX 11.11 have any standard tools to retrieve a webpage from the command line?&lt;BR /&gt;&lt;BR /&gt;Thanks in advance, John&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Wed, 10 Nov 2004 05:12:29 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/retrieve-a-webpage-from-a-shell-script/m-p/3418876#M707079</guid>
      <dc:creator>John_880</dc:creator>
      <dc:date>2004-11-10T05:12:29Z</dc:date>
    </item>
    <item>
      <title>Re: Retrieve a webpage from a shell script</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/retrieve-a-webpage-from-a-shell-script/m-p/3418877#M707080</link>
      <description>Hi John,&lt;BR /&gt;&lt;BR /&gt;Try 'wget' from here:&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://hpux.cs.utah.edu/hppd/hpux/Gnu/wget-1.9.1/" target="_blank"&gt;http://hpux.cs.utah.edu/hppd/hpux/Gnu/wget-1.9.1/&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;There are a few more tools available on the same site; go to its home page and search for "web".&lt;BR /&gt;&lt;BR /&gt;-Sri</description>
      <pubDate>Wed, 10 Nov 2004 05:15:34 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/retrieve-a-webpage-from-a-shell-script/m-p/3418877#M707080</guid>
      <dc:creator>Sridhar Bhaskarla</dc:creator>
      <dc:date>2004-11-10T05:15:34Z</dc:date>
    </item>
    <item>
      <title>Re: Retrieve a webpage from a shell script</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/retrieve-a-webpage-from-a-shell-script/m-p/3418878#M707081</link>
      <description>As Sridhar said, wget is probably your solution. It has very useful options such as background download, recursive download, resuming after an error, bandwidth limiting, and so on.&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;&lt;BR /&gt;Fred&lt;BR /&gt;</description>
      <pubDate>Wed, 10 Nov 2004 05:47:21 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/retrieve-a-webpage-from-a-shell-script/m-p/3418878#M707081</guid>
      <dc:creator>Fred Ruffet</dc:creator>
      <dc:date>2004-11-10T05:47:21Z</dc:date>
    </item>
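Fred's option list can be sketched as a small fetch script. The flags shown are standard wget options (-q quiet, -c continue an interrupted download, --limit-rate to cap bandwidth); the host, CGI path, and output location are the poster's placeholders, not real endpoints:

```shell
#!/usr/bin/sh
# Daily fetch of the system-info page with wget (hypothetical URL and paths).
DAY=`date +%Y%m%d`
wget -q -c --limit-rate=50k \
     -O /var/tmp/sysinfo.$DAY.html \
     "http://somelanhost/cgi-bin/solaris_cmd?df"
```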
    <item>
      <title>Re: Retrieve a webpage from a shell script</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/retrieve-a-webpage-from-a-shell-script/m-p/3418879#M707082</link>
      <description>If you cannot use lynx, you probably cannot use wget either, however fitting it would be.&lt;BR /&gt;&lt;BR /&gt;You can however use Perl, as your post shows, and as you do not have LWP::Simple, just fake it using CORE modules:&lt;BR /&gt;&lt;BR /&gt;--8&amp;lt;---&lt;BR /&gt;#!/pro/bin/perl -w&lt;BR /&gt;&lt;BR /&gt;use strict;&lt;BR /&gt;use IO::Socket::INET;&lt;BR /&gt;&lt;BR /&gt;sub webget ($;$)&lt;BR /&gt;{&lt;BR /&gt;    my ($url, $F) = @_;&lt;BR /&gt;    local $/;    # slurp mode&lt;BR /&gt;&lt;BR /&gt;    # Strip the scheme; default to port 80&lt;BR /&gt;    my $peer = $url =~ s{^(\w+)://}{} ? $1 : "http(80)";&lt;BR /&gt;    my ($server, $page) = ($url =~ m{^([^/]+)(?:/(.*))?$});&lt;BR /&gt;&lt;BR /&gt;    # Default output file: last component of the path&lt;BR /&gt;    (my $f = $page) =~ s:.*/::;&lt;BR /&gt;    $F ||= $f;&lt;BR /&gt;    ref $F or unlink $F;&lt;BR /&gt;&lt;BR /&gt;    printf STDERR "get %-38s\t", $page;&lt;BR /&gt;    my $socket = IO::Socket::INET-&amp;gt;new (&lt;BR /&gt;        PeerHost =&amp;gt; $server,&lt;BR /&gt;        PeerPort =&amp;gt; $peer,&lt;BR /&gt;        Proto    =&amp;gt; "tcp",&lt;BR /&gt;        Timeout  =&amp;gt; 90) or die "connect: $!\n";&lt;BR /&gt;    print $socket&lt;BR /&gt;        "GET /$page HTTP/1.0\r\n",&lt;BR /&gt;        "Host: $server\r\n",&lt;BR /&gt;        "User-Agent: Internet Explorer\r\n",&lt;BR /&gt;        "\r\n";&lt;BR /&gt;&lt;BR /&gt;    # Note: $data still includes the response headers&lt;BR /&gt;    my $data = &amp;lt;$socket&amp;gt;;&lt;BR /&gt;    my $t = defined $data ? length $data : 0;&lt;BR /&gt;    if (ref $F) {&lt;BR /&gt;        $$F = $data;&lt;BR /&gt;        return $t;&lt;BR /&gt;        }&lt;BR /&gt;    if ($data) {&lt;BR /&gt;        open my $p, "&amp;gt; $F" or die "$F: $!\n";&lt;BR /&gt;        print $p $data;&lt;BR /&gt;        close $p;&lt;BR /&gt;        }&lt;BR /&gt;    $t;&lt;BR /&gt;    } # webget&lt;BR /&gt;&lt;BR /&gt;webget ("&lt;A href="http://somehost.lan/cgi-bin/solaris_cmd?df" target="_blank"&gt;http://somehost.lan/cgi-bin/solaris_cmd?df&lt;/A&gt;", "solaris.df");&lt;BR /&gt;&lt;BR /&gt;my $sol_df;&lt;BR /&gt;webget ("&lt;A href="http://somehost.lan/cgi-bin/solaris_cmd?df" target="_blank"&gt;http://somehost.lan/cgi-bin/solaris_cmd?df&lt;/A&gt;", \$sol_df);&lt;BR /&gt;&lt;BR /&gt;--&amp;gt;8---&lt;BR /&gt;&lt;BR /&gt;(Not tested)&lt;BR /&gt;&lt;BR /&gt;Enjoy, Have FUN! H.Merijn</description>
      <pubDate>Wed, 10 Nov 2004 06:15:48 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/retrieve-a-webpage-from-a-shell-script/m-p/3418879#M707082</guid>
      <dc:creator>H.Merijn Brand (procura</dc:creator>
      <dc:date>2004-11-10T06:15:48Z</dc:date>
    </item>
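The first thing webget above does is split the URL into host and page. Under POSIX sh, that same step can be sketched with parameter expansion alone, no external tools (the URL is the poster's placeholder):

```shell
# Split an http:// URL into host and absolute path.
url="http://somehost.lan/cgi-bin/solaris_cmd?df"
rest=${url#http://}      # drop the scheme
host=${rest%%/*}         # everything up to the first slash
path=/${rest#*/}         # everything after it, slash restored
echo "$host"             # somehost.lan
echo "$path"             # /cgi-bin/solaris_cmd?df
```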
    <item>
      <title>Re: Retrieve a webpage from a shell script</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/retrieve-a-webpage-from-a-shell-script/m-p/3418880#M707083</link>
      <description>You can't install, but you can compile... just get the sources (for either wget or the Perl modules), compile them, and run from your own directory.&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;&lt;BR /&gt;Fred&lt;BR /&gt;</description>
      <pubDate>Wed, 10 Nov 2004 06:18:49 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/retrieve-a-webpage-from-a-shell-script/m-p/3418880#M707083</guid>
      <dc:creator>Fred Ruffet</dc:creator>
      <dc:date>2004-11-10T06:18:49Z</dc:date>
    </item>
    <item>
      <title>Re: Retrieve a webpage from a shell script</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/retrieve-a-webpage-from-a-shell-script/m-p/3418881#M707084</link>
      <description>I've used wget and cURL which work fine.&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://www.gnu.org/software/wget/wget.html" target="_blank"&gt;http://www.gnu.org/software/wget/wget.html&lt;/A&gt;&lt;BR /&gt;&lt;A href="http://curl.haxx.se/" target="_blank"&gt;http://curl.haxx.se/&lt;/A&gt;</description>
      <pubDate>Wed, 10 Nov 2004 07:56:44 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/retrieve-a-webpage-from-a-shell-script/m-p/3418881#M707084</guid>
      <dc:creator>Baz_2</dc:creator>
      <dc:date>2004-11-10T07:56:44Z</dc:date>
    </item>
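Both tools can also stream the page straight to stdout, which fits shell pipelines: curl does so by default, while wget needs -O - (the URL is again the poster's placeholder, not a real endpoint):

```shell
curl -s "http://somelanhost/cgi-bin/solaris_cmd?df"
wget -q -O - "http://somelanhost/cgi-bin/solaris_cmd?df"
```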
    <item>
      <title>Re: Retrieve a webpage from a shell script</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/retrieve-a-webpage-from-a-shell-script/m-p/3418882#M707085</link>
      <description>You can also use /usr/bin/sh and telnet:&lt;BR /&gt;&lt;BR /&gt;#!/usr/bin/sh&lt;BR /&gt;if [[ $# -lt 1 ]] ; then&lt;BR /&gt;  print "supply stock symbol on command line"&lt;BR /&gt;  exit 1&lt;BR /&gt;fi&lt;BR /&gt;for sym in "$@"&lt;BR /&gt;do&lt;BR /&gt;(telnet quote.yahoo.com 80 &amp;lt;&amp;lt; eof&lt;BR /&gt;GET &lt;A href="http://quote.yahoo.com/download/quotes.csv?symbols=${sym}&amp;amp;format=sl1d1t1" target="_blank"&gt;http://quote.yahoo.com/download/quotes.csv?symbols=${sym}&amp;amp;format=sl1d1t1&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;eof&lt;BR /&gt;) 2&amp;gt;/dev/null | awk '$0 ~ /\"/ {print}'&lt;BR /&gt;done&lt;BR /&gt;</description>
      <pubDate>Wed, 10 Nov 2004 10:27:40 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/retrieve-a-webpage-from-a-shell-script/m-p/3418882#M707085</guid>
      <dc:creator>Michael Roberts_3</dc:creator>
      <dc:date>2004-11-10T10:27:40Z</dc:date>
    </item>
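The telnet trick above amounts to handing the server a raw HTTP request. A minimal sketch that builds the request first (host and path are placeholders), so it can be piped in without a here-document:

```shell
# Write a raw HTTP/1.0 request to a file; \r\n line endings per the protocol.
printf 'GET /cgi-bin/solaris_cmd?df HTTP/1.0\r\nHost: somehost.lan\r\n\r\n' > request.txt
# Then feed it to the server, e.g.:
#   cat request.txt | telnet somehost.lan 80
```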
    <item>
      <title>Re: Retrieve a webpage from a shell script</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/retrieve-a-webpage-from-a-shell-script/m-p/3418883#M707086</link>
      <description>I'd just use "wget":&lt;BR /&gt;&lt;BR /&gt;wget &lt;A href="http://hostname.ext/page.html" target="_blank"&gt;http://hostname.ext/page.html&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;If you need it to run on a schedule, set up a cron job to do it.&lt;BR /&gt;&lt;BR /&gt;You may also want to Google "hcat" and "hgrepurl."&lt;BR /&gt;&lt;BR /&gt;These are little Perl scripts that O'Reilly has on their FTP server. If you can't find them there, I am attaching a tarball.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Wed, 10 Nov 2004 15:35:14 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/retrieve-a-webpage-from-a-shell-script/m-p/3418883#M707086</guid>
      <dc:creator>rmueller58</dc:creator>
      <dc:date>2004-11-10T15:35:14Z</dc:date>
    </item>
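The cron setup mentioned above comes down to a one-line crontab entry; the schedule, install path, and URL here are hypothetical:

```shell
# crontab entry: fetch the page every day at 06:00
0 6 * * * /usr/local/bin/wget -q -O /var/tmp/page.html "http://hostname.ext/page.html"
```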
    <item>
      <title>Re: Retrieve a webpage from a shell script</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/retrieve-a-webpage-from-a-shell-script/m-p/3418884#M707087</link>
      <description>Hi all, thanks for the replies. I changed to a machine with the proper Perl modules and all is fine. I will definitely try to get wget installed.&lt;BR /&gt;&lt;BR /&gt;Cheers! John</description>
      <pubDate>Wed, 10 Nov 2004 20:46:23 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/retrieve-a-webpage-from-a-shell-script/m-p/3418884#M707087</guid>
      <dc:creator>John_880</dc:creator>
      <dc:date>2004-11-10T20:46:23Z</dc:date>
    </item>
  </channel>
</rss>

