Operating System - HP-UX

Socket Client set timeout

 
SOLVED
Alfonso_15
Advisor


Hi friends. I have a client application on HP-UX 10.20. How can I put a timeout (for example, 5 seconds) on the connect(socket, addr, addrlen) call, so that it returns after those 5 seconds if it cannot connect to the server in that time? Right now the call returns after 75 seconds, which is too late for my application to take other actions.
Thanks a lot for your help.
alfonsoarias
Mark Grant
Honored Contributor

Re: Socket Client set timeout

Not sure what language you are using here but, generally, you want to set an "alarm" for 5 seconds before you call "connect". Perl has an "alarm" function for this. If you trap "SIGALRM" you then have your timeout.
Never precede any demonstration with anything more predictive than "watch this"
Alfonso_15
Advisor

Re: Socket Client set timeout

Hi. Thanks for the answer. I am using gcc, and the delay is in the call connect(socket, struct sockaddr, addrlen). Is this timeout not a configurable property of the socket, or maybe of the struct sockaddr? Thanks again.
alfonsoarias
susan gregory_1
Valued Contributor

Re: Socket Client set timeout

I would guess that one of the ndd settings may be affecting your program's connect(). I don't know of a way to make connect() time out the way you can with select(). It sounds like something in your network protocol stack is set to time out after 75 seconds (and it could be two or more parameters, such as a "retransmission interval" multiplied by a "number of retransmission attempts" before giving up).
You may want to use ndd -get to see your settings, and then very cautiously modify a few. You may also want to call the HP RC for some advice on ndd, as setting that timeout to 5 seconds is going to affect more than just your program.
Ken_109
Advisor
Solution

Re: Socket Client set timeout

Alfonso,

Here is my solution: TEST and RETEST at your own risk... :)

    #include <stdio.h>
    #include <signal.h>
    #include <setjmp.h>

    /* global */
    static jmp_buf env;

    void alarm_handler (int sig)
    {
        longjmp (env, 1);
    }

    ... In your subroutine, set up socket stuff, vars, etc. ...

        /* Set up signal handler */
        signal (SIGALRM, alarm_handler);

        /* open up a connection to the local server port */
        if (setjmp (env) == 0) {
            alarm (20);   /* 20 second timeout */
            if ((clnt = connect (socket, addr, addrlen)) < 0) {
                alarm (0);
                fprintf (stderr, "Could not open socket on Host (%s) on port (%s)\n",
                         skthost, port);
                return FAILURE;
            }
            alarm (0);
        } else {
            fprintf (stderr, "Timeout opening socket on Host (%s) on port (%s)\n",
                     skthost, port);
            return FAILURE;
        }

Now for what is going on here... The signal handler "alarm_handler" returns 1 to the setjmp(env) call. setjmp saves the stack context; it returns 0 when called directly and 1 when returning via longjmp. If the alarm is triggered (i.e. alarm(20) fires after 20 seconds), the signal handler is executed and longjmp is called. The context is restored and a 1 is returned, dropping you into the else section.

Good luck... Ken