Adaptive jitter buffer without packet loss

I am in the process of developing an adaptive jitter buffer that grows and shrinks its capacity as the computed jitter rises and falls.

I see no reason to make any latency or throughput adjustments unless there is a buffer overflow, i.e. a burst of incoming packets that exceeds the capacity (assuming the buffer capacity equals the buffer depth/latency in the first place). As an example, if I receive 20ms packets, I might implement a buffer 100ms deep and therefore have capacity for 5 packets. If 160ms elapses between packets, I can expect up to 8 packets to arrive almost immediately. At this point, I have two options (see the sketch after this list):

  1. drop three packets according to the overflow rules
  2. don't drop any packets, and instead increase the buffer capacity and latency
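
For concreteness, here is a small sketch of the arithmetic in the example above, using the 20ms/100ms/160ms figures from the question (the variable names are just placeholders):

```cpp
// Sketch of the capacity/burst arithmetic from the example above.
// packet_ms, target_latency_ms and gap_ms are placeholder names.
#include <cstdio>

int main() {
    const int packet_ms = 20;           // each packet carries 20 ms of audio
    const int target_latency_ms = 100;  // configured buffer depth/latency
    const int gap_ms = 160;             // silence before the burst arrives

    const int capacity_pkts = target_latency_ms / packet_ms;  // 5 packets
    const int burst_pkts = gap_ms / packet_ms;                 // up to 8 packets at once
    const int excess_pkts = burst_pkts - capacity_pkts;        // 3 packets over capacity

    std::printf("capacity=%d burst=%d excess=%d\n",
                capacity_pkts, burst_pkts, excess_pkts);
}
```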

Suppose I choose option 2, and then network conditions improve and packet delivery becomes regular again (the jitter value drops). Now what? Again, I think I have two options:

  3. do nothing and live with the increased latency
  4. reduce the latency (and capacity)

With an adaptive buffer, I think I am supposed to make choice 4, but that doesn't seem right, because it requires me to artificially/arbitrarily drop audio packets that were deliberately stored when I made choice 2 in the face of increased jitter in the first place.

It seems to me that the correct course of action is to accept choice 1 up front in order to maintain latency, dropping packets if necessary when they are delivered late due to increased jitter.
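
A minimal sketch of what choice 1 amounts to, assuming the buffer schedules a playout time for each packet (the struct and function names are hypothetical):

```cpp
// Sketch of choice 1: hold the configured latency and discard any packet that
// arrives after its scheduled playout time instead of growing the buffer.
// Packet and arrived_too_late are hypothetical names.
#include <cstdint>

struct Packet {
    int64_t playout_time_ms;  // when this packet is due at the decoder
    int64_t arrival_time_ms;  // when it actually arrived
};

// True if the packet missed its playout slot because of jitter and should be dropped.
bool arrived_too_late(const Packet& p) {
    return p.arrival_time_ms > p.playout_time_ms;
}

int main() {
    Packet late{ /*playout*/ 1000, /*arrival*/ 1040 };
    return arrived_too_late(late) ? 0 : 1;
}
```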

A similar scenario might be that instead of getting a burst of 8 packets after a 160ms gap, I only get 5 (maybe 3 packets were actually lost). In that case, increasing the buffer capacity isn't very beneficial, other than further reducing the likelihood of overflow. But if overflow is simply something to be avoided (from the network side), then I would just make the buffer capacity some fixed amount larger than the configured "depth/latency" in the first place. In other words, if the overflow is not caused by the local application failing to pull packets from the buffer in a timely manner, then overflow can occur for only two reasons: either the sender is misbehaving, sending packets faster than agreed (or sending packets from the future), or there is a gap between packet bursts that exceeds my buffer depth.
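
A sketch of that fixed-headroom idea, assuming capacity is counted in packets (HEADROOM_PKTS and the function name are made up):

```cpp
// Sketch of the fixed-headroom idea: physical capacity exceeds the configured
// depth/latency by a constant margin so that a modest burst does not overflow.
// HEADROOM_PKTS and physical_capacity_pkts are made-up names.
constexpr int HEADROOM_PKTS = 3;

int physical_capacity_pkts(int configured_depth_pkts) {
    return configured_depth_pkts + HEADROOM_PKTS;
}

int main() {
    // With a 100 ms / 5-packet configured depth, the buffer can absorb an
    // 8-packet burst without dropping or changing the playout latency.
    return physical_capacity_pkts(5) == 8 ? 0 : 1;
}
```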

Clearly, the whole point of an "adaptive" buffer should be to recognize the latter condition, increase the buffer capacity, and avoid dropping any packets. But that brings me back to my stated problem: how do I "adapt" back down to ideal settings once the network jitter clears up, while maintaining the same "don't drop packets" philosophy?

Thoughts?

1 answer


With companding. When the jitter clears, you merge packets and "speed up" playout from the buffer. The merge will of course need appropriate handling, but the idea is to pop two 20ms packets from the AJB and create one 30ms packet. You keep doing this until your buffer levels are back to normal.
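
As a rough sketch of what that merge-and-speed-up step could look like (not the answerer's implementation): real adaptive jitter buffers typically use WSOLA-style overlap-add so the pitch is preserved, whereas the naive nearest-neighbour resample below only illustrates the bookkeeping. All names (Frame, time_scale, SAMPLE_RATE) are made up.

```cpp
// Rough sketch of companding via time-scaling; not a production time-scale
// modifier (no WSOLA/overlap-add, so pitch would shift audibly).
// Frame, time_scale and SAMPLE_RATE are hypothetical names.
#include <cstdint>
#include <cstdio>
#include <vector>

constexpr int SAMPLE_RATE = 8000;            // assume 8 kHz narrowband PCM

using Frame = std::vector<int16_t>;          // one packet's worth of samples

// Resample `in` so that it plays out in `out_ms` milliseconds.
Frame time_scale(const Frame& in, int out_ms) {
    const size_t out_samples = static_cast<size_t>(SAMPLE_RATE / 1000) * out_ms;
    Frame out(out_samples);
    for (size_t i = 0; i < out_samples; ++i) {
        out[i] = in[i * in.size() / out_samples];  // crude nearest-neighbour pick
    }
    return out;
}

int main() {
    const size_t samples_20ms = SAMPLE_RATE / 1000 * 20;

    // Compress: pop two 20 ms packets, concatenate, squeeze into one 30 ms frame.
    Frame a(samples_20ms), b(samples_20ms);
    Frame both(a);
    both.insert(both.end(), b.begin(), b.end());
    Frame sped_up = time_scale(both, 30);        // 40 ms of audio played in 30 ms

    // Stretch (the underrun case): one 20 ms packet played over 30 ms.
    Frame slowed_down = time_scale(a, 30);

    std::printf("%zu %zu\n", sped_up.size(), slowed_down.size());  // 240 240
}
```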



Likewise, for underrun, packets can be "stretched" in addition to introducing a delay.
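
(The stretch case corresponds to the time_scale(a, 30) call in the sketch above; again, a production implementation would use WSOLA-style overlap-add rather than a plain resample so the pitch isn't shifted.)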
