[Icecast-dev] icecast performance on many concurrent low-bitrate streams

Klaas Jan Wierenga k.j.wierenga at home.nl
Fri Jul 29 01:12:33 PDT 2005


Sorry to keep going on about this, but I'd like to understand the issues involved.

I understand that the value of 1400 does not directly determine the MTU,
but if a connection has an MTU of at least 1400 plus the TCP/IP header
size, then it turns out that (on my configuration of a Linux 2.4 kernel)
most packets carry a 1400-byte payload, and some carry less on a link
with an MTU of 1500. On a link with a smaller MTU, almost all packets
are filled to the maximum payload size for that MTU.
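To illustrate, here is a small sketch of how a fixed-size write maps onto TCP segments for a given MTU. It assumes plain IPv4 and TCP with no options (20 + 20 bytes of headers, so the maximum segment size is MTU - 40); the function name is just for illustration:

```python
def segments_for_write(write_size, mtu, ip_hdr=20, tcp_hdr=20):
    """Return the TCP segment payload sizes a single write splits into,
    assuming headers without options (MSS = MTU - 40 for IPv4 + TCP)."""
    mss = mtu - ip_hdr - tcp_hdr
    full, rest = divmod(write_size, mss)
    return [mss] * full + ([rest] if rest else [])

# On a 1500-MTU link a 1400-byte block fits in one segment:
print(segments_for_write(1400, 1500))   # [1400]
# On a smaller-MTU link the same block fills segments to the maximum:
print(segments_for_write(1400, 576))    # [536, 536, 328]
```

This matches the observation above: with a 1500 MTU the 1400-byte writes come out as single under-full packets, while on a smaller MTU every packet except the last is filled.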

What would be the arguments against buffering a little more? Assuming
the lowest bitrate is 16 kbit/s = 2 kbytes/s, you could set the
batching value to 2048. That way you fill packets completely on links
with an MTU of up to 2048 plus the TCP/IP header size. Of course, in a
real-life system the maximum payload is 1500 minus the TCP/IP header
size.
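The arithmetic above can be sketched as follows. Assuming a plain IPv4 + TCP MSS of 1460 on a 1500-MTU link, a 2048-byte batch splits into two segments, and at the lowest bitrate one batch corresponds to about a second of stream data:

```python
bitrate_bps = 16000        # lowest assumed stream bitrate, 16 kbit/s
batch = 2048               # proposed batching value in bytes
mss = 1460                 # 1500 MTU minus 40 bytes of IPv4 + TCP headers

# Seconds of audio accumulated per batch at the lowest bitrate:
delay_s = batch / (bitrate_bps / 8)
print(round(delay_s, 3))          # 1.024

# How one batch splits into segments on a 1500-MTU link:
print([mss, batch - mss])         # [1460, 588]
```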


Karl Heyes wrote:

>On Thu, 2005-07-28 at 23:29, Klaas Jan Wierenga wrote:
>>I've managed to patch up my branch of Icecast to do the batching. Checked
>>everything with valgrind and tested it extensively. It looks good. Tcpdump
>>now shows nicely sized frames (mostly 1400 bytes). Any reason why you're not
>>setting the MTU to something closer to 1500?
>It isn't setting the MTU; I'm just making sure initially that the block
>size is large enough to make fuller packets. Obviously it's not possible
>to determine the best size, because it is listener-dependent and we are
>batching up at the reading stage, not the sending stage.
>However, as you have mentioned, the common MTU size is 1500, but that
>includes the TCP/IP headers, so sending 1400 bytes was a quick, near
>estimate of a full packet; you can increase the block size by some more.
>The code demonstrates the mechanism more than the absolute maximum currently.
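The mechanism Karl describes, batching at the read stage so that each block handed to the senders is large enough to make full packets regardless of any individual listener's MTU, can be sketched roughly like this. The class and method names are illustrative and not Icecast's actual API:

```python
class ReadBatcher:
    """Accumulate small source reads into fixed-size blocks,
    so each block queued for sending can fill a packet."""

    def __init__(self, block_size=1400):
        self.block_size = block_size
        self.pending = bytearray()

    def feed(self, chunk):
        """Buffer incoming source data; yield full blocks ready to queue."""
        self.pending += chunk
        while len(self.pending) >= self.block_size:
            yield bytes(self.pending[:self.block_size])
            del self.pending[:self.block_size]

batcher = ReadBatcher()
blocks = []
for _ in range(5):                       # five small 417-byte source reads
    blocks.extend(batcher.feed(b"x" * 417))
print([len(blk) for blk in blocks])      # [1400] - one full block so far
```

The point of batching at read time rather than send time is that the same blocks are shared by all listeners, so the block size has to be a single compromise value rather than something tuned per connection.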
