Receiving high-frequency UDP packets: packet loss?

I have a C++ application with a UDP server (using Boost.Asio) that accepts packets from a device on a gigabit local network at a high rate (3500 packets per second). Some users have reported packet loss, so I eventually decided to run Wireshark and my application in parallel to check whether there were packets that Wireshark could receive but my application could not.

I found that Wireshark doesn't receive every packet either; it seems to be missing some. My application misses a few more frames on top of those that Wireshark received correctly.

My questions:

1. Is it possible that Wireshark receives packets that my application does not? I thought Wireshark might have lower-level access to the network stack, so that packets dropped by the OS would still show up in Wireshark.
2. Is it possible that the operation marked (1) in the code below (the file write) takes so long that the next async_receive_from is issued too late?

I would like to get opinions on this matter. Thanks.

Here is the code I'm using (pretty simple). udp_server.h:

#pragma once

#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <fstream>

const int MAX_BUFFER_SIZE = 65537;

using boost::asio::ip::udp;

class UDPServer
{
public:
    UDPServer(boost::asio::io_service& ios, udp::endpoint endpoint)
        :m_io_service(ios),
        m_udp_socket(m_io_service, endpoint)
    {
        // Resize the receive buffer to its maximum size up front
        m_recv_buffer.resize(MAX_BUFFER_SIZE);

        m_output_file.open("out.bin", std::ios::out | std::ios::binary);

        StartReceive();
    }

public:
    void StartReceive()
    {
        m_udp_socket.async_receive_from(
            boost::asio::buffer(m_recv_buffer), m_remote_endpoint,
            boost::bind(&UDPServer::HandleReceive, this,
            boost::asio::placeholders::error,
            boost::asio::placeholders::bytes_transferred));
    }

private:
    void HandleReceive(const boost::system::error_code& error, std::size_t bytes_transferred)
    {
        // message_size means the datagram was larger than the buffer and was
        // truncated; keep whatever part was received.
        if (!error || error == boost::asio::error::message_size)
        {
            // Write to output -- (1)
            m_output_file.sputn(&m_recv_buffer[0], bytes_transferred);

            // Start to receive again
            StartReceive();
        }
    }

    boost::asio::io_service&    m_io_service;
    udp::socket                 m_udp_socket;
    udp::endpoint               m_remote_endpoint;
    std::vector<char>           m_recv_buffer;
    std::filebuf                m_output_file;
};


main.cpp:

#include <boost/asio.hpp>
#include "udp_server.h"

const unsigned short PORT_NUMBER = 44444;

int main()
{
    boost::asio::io_service ios;
    udp::endpoint endpoint(udp::v4(), PORT_NUMBER);
    UDPServer server(ios, endpoint);
    ios.run();

    return 0;
}



3 answers


Is it possible that Wireshark receives packets that my application does not?

Yes. Each socket has its own fixed-size incoming buffer. If, when the kernel tries to add a newly arrived packet to that buffer, there is not enough free space to hold it, the packet is dropped, and an application reading from that socket will never receive it.



Given that, it's entirely possible that Wireshark's receive buffer had room for an incoming packet while your application's socket buffer did not.
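
One common mitigation, sketched here rather than taken from the question: ask the OS for a larger socket receive buffer through Boost.Asio's receive_buffer_size option. This fragment would go in the UDPServer constructor, right after the socket is created; the 4 MB figure is an illustrative assumption, and the kernel may cap the request (on Linux, at net.core.rmem_max), so it is worth reading the option back.

// Request a larger kernel receive buffer; 4 MB is an illustrative value.
boost::asio::socket_base::receive_buffer_size request(4 * 1024 * 1024);
m_udp_socket.set_option(request);

// The kernel may grant less than requested; read back the size in effect.
boost::asio::socket_base::receive_buffer_size granted;
m_udp_socket.get_option(granted);
// granted.value() is the receive buffer size actually applied.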



How many bytes are in each packet? You size the buffer at 64 KB; suppose each packet is around 32 KB. Then 3500 packets/s * 32 KB is about 112 MB/s, which would nearly saturate a gigabit link. And you write every packet to a file: the sputn call can block, which keeps you from receiving more packets, so the driver's buffer fills up and packets are discarded.
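
One way to take the blocking write out of the receive path, as a sketch rather than code from the question: hand each datagram to a dedicated writer thread through a small blocking queue, so the receive handler only copies the bytes and immediately re-arms async_receive_from. The PacketQueue and WriterLoop names below are illustrative.

#include <condition_variable>
#include <fstream>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Minimal blocking queue; the receive handler pushes, the writer pops.
class PacketQueue
{
public:
    void Push(std::vector<char> packet)
    {
        {
            std::lock_guard<std::mutex> lock(m_mutex);
            m_queue.push(std::move(packet));
        }
        m_cv.notify_one();
    }

    std::vector<char> Pop()
    {
        std::unique_lock<std::mutex> lock(m_mutex);
        m_cv.wait(lock, [this] { return !m_queue.empty(); });
        std::vector<char> packet = std::move(m_queue.front());
        m_queue.pop();
        return packet;
    }

private:
    std::mutex                    m_mutex;
    std::condition_variable       m_cv;
    std::queue<std::vector<char>> m_queue;
};

// Writer thread: drains the queue and performs the (possibly blocking)
// disk I/O far away from the receive loop.
void WriterLoop(PacketQueue& queue, std::ofstream& file)
{
    for (;;)
    {
        std::vector<char> packet = queue.Pop();
        if (packet.empty())   // empty vector doubles as a shutdown signal here
            break;
        file.write(packet.data(), static_cast<std::streamsize>(packet.size()));
    }
}

In HandleReceive you would then push a copy of the first bytes_transferred bytes onto the queue and call StartReceive() right away, instead of calling sputn inline.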





In short, yes.

If you want to fix or improve this rather than just ask about it, use a socket receive buffer much larger than the default, as large as the platform will allow, and make sure your file writes are buffered as well, with as large a buffer as you can afford given your requirements for data loss in the event of a crash.
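
For the file side, here is a sketch of one way to enlarge the write buffering, assuming the std::filebuf from the question. Note that pubsetbuf has implementation-defined effect, so call it before the file is opened; m_file_buffer is a name introduced for this sketch, and the 4 MB size is an assumption.

// New member (illustrative name): a large buffer that outlives the filebuf.
std::vector<char> m_file_buffer = std::vector<char>(4 * 1024 * 1024);

// In the constructor, before open(): hand the buffer to the filebuf so
// writes reach the disk in large chunks (implementation-defined behavior).
m_output_file.pubsetbuf(m_file_buffer.data(), m_file_buffer.size());
m_output_file.open("out.bin", std::ios::out | std::ios::binary);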
