Socket server hangs after a while
This socket server has a bug, most likely related to pthreads. After accepting client connections for a while, it starts hanging. It doesn't look like a memory leak, because the memory used by the program stays the same, but when I connect with a telnet client it just hangs and does nothing.
It still prints "Handler assigned" via puts(), but then does nothing. Any ideas what might be causing this?
/*
 * C socket server example, handles multiple clients using threads
 */
#include <stdio.h>
#include <string.h>     // strlen
#include <stdlib.h>     // malloc, free
#include <sys/socket.h>
#include <arpa/inet.h>  // inet_addr
#include <unistd.h>     // write
#include <pthread.h>    // for threading, link with -lpthread

// the thread function
void *connection_handler(void *);

int main(int argc, char *argv[])
{
    int socket_desc, client_sock, c, *new_sock;
    struct sockaddr_in server, client;

    // Create socket
    socket_desc = socket(AF_INET, SOCK_STREAM, 0);
    if (socket_desc == -1)
    {
        printf("Could not create socket");
    }
    puts("Socket created");

    // Prepare the sockaddr_in structure
    server.sin_family = AF_INET;
    server.sin_addr.s_addr = INADDR_ANY;
    server.sin_port = htons(8888);

    // Bind
    if (bind(socket_desc, (struct sockaddr *)&server, sizeof(server)) < 0)
    {
        // print the error message
        perror("bind failed. Error");
        return 1;
    }
    puts("bind done");

    // Listen
    listen(socket_desc, 3);

    // Accept an incoming connection
    puts("Waiting for incoming connections...");
    c = sizeof(struct sockaddr_in);
    while ((client_sock = accept(socket_desc, (struct sockaddr *)&client, (socklen_t *)&c)))
    {
        puts("Connection accepted");

        pthread_t sniffer_thread;
        new_sock = malloc(1);
        *new_sock = client_sock;

        if (pthread_create(&sniffer_thread, NULL, connection_handler, (void *)new_sock) < 0)
        {
            perror("could not create thread");
            return 1;
        }

        // Now join the thread, so that we don't terminate before the thread
        //pthread_join(sniffer_thread, NULL);
        puts("Handler assigned");
    }

    if (client_sock < 0)
    {
        perror("accept failed");
        return 1;
    }

    return 0;
}
/*
 * This will handle connection for each client
 */
void *connection_handler(void *socket_desc)
{
    // Get the socket descriptor
    int sock = *(int *)socket_desc;
    int read_size;
    char *message, client_message[2000];

    // Send some messages to the client
    message = "Greetings! I am your connection handler\n";
    write(sock, message, strlen(message));
    message = "Now type something and i shall repeat what you type \n";
    write(sock, message, strlen(message));

    // Receive a message from client
    while ((read_size = recv(sock, client_message, 2000, 0)) > 0)
    {
        // Send the message back to client
        write(sock, client_message, strlen(client_message));
    }

    if (read_size == 0)
    {
        puts("Client disconnected");
        fflush(stdout);
    }
    else if (read_size == -1)
    {
        perror("recv failed");
    }

    // Free the socket pointer
    free(socket_desc);
    return 0;
}
1 answer
- Don't assume that a write() to a socket writes all bytes; always check the return value. Instead of calling write() directly, use your own function, e.g. sendbuf(), that writes to the socket in a loop and only returns after all bytes of your buffer have been sent.
- You must join each thread you create exactly once, otherwise you leak its thread descriptor. If you don't want to join a thread, you have two options:
  - Create it as detached by passing an attribute parameter to pthread_create() that marks the thread as detached. Look at pthread_attr_setdetachstate() to see how to set this up.
  - After creating the thread, call pthread_detach() on it to tell the pthread library that you will never join it.
- Since you receive and send in turn, and both the server and the client use blocking sockets, the send buffers at both ends can fill up if the client sends a large enough buffer, and that results in a deadlock. One of the following solutions helps:
  - Use setsockopt() on your server to set send and receive timeouts with the socket options SO_SNDTIMEO and SO_RCVTIMEO, and possibly adjust the send/receive buffer sizes with SO_SNDBUF and SO_RCVBUF if you like, though I would not touch those two without a reason.
  - Have at least one of the peers (preferably the server) receive and send at the same time, e.g. with non-blocking (async) sockets.
- Use read_size instead of strlen(client_message) when sending the message back to the client. Assuming the received chunk is null-terminated is not correct even if the client sent a null-terminated message, because you might receive it fragmented.