[Icecast-dev] icecast performance on many concurrent
karl at xiph.org
Wed Jul 27 07:47:15 PDT 2005
On Wed, 2005-07-27 at 09:47, Klaas Jan Wierenga wrote:
> Hi all,
> I'm running an Icecast-2.2 server with some 50 sources and 500 concurrent listeners at peak times, all using low-bitrate 16kbps streams. I'm experiencing some connection losses at these peak times ("Client connection died" message in error.log).
> The machine running Icecast has a 100Mbit connection to the internet. It is a Celeron 2.4GHz machine with 1GByte of main memory. The CPU usage at these peak times is a normal 40%; the load average is relatively high, averaging out at 0.4 with occasional peaks to 5.0.
> I've analysed the ethernet packets on some of the listener connections and found that Icecast sends many small packets (200-300 bytes). This led me to look at the interrupt rate during peak times. At these times the interrupt rate reaches 10000 interrupts per second.
> Investigating a bit further, I discovered that Icecast is turning off the Nagle algorithm by setting the TCP_NODELAY option on the client sockets. This results in many small packets, because each packet is sent as soon as possible rather than being combined with others into larger packets. Would it be safe to turn the Nagle algorithm back on (by removing the sock_set_nodelay() calls in the appropriate places) to try to reduce the interrupt rate for many concurrent low-bitrate streams?
This has been reported to me already; it occurs with low-bitrate non-Ogg
streams. You can remove the sock_set_nodelay calls, as I don't think they
really do anything for us at all, but it may not help you either: it will
be up to the kernel to decide when to send the packets, so it may just
send those small reads anyway (it depends on various factors).
In kh14, I have done some batching up of reads on the input for
pass-through streams (like mp3/AAC). This makes the writes to
listeners work on larger blocks of data (nearer to 1500 bytes), so
protocol overhead is lower.
The feedback I've had so far is positive, so it could be merged into
trunk without problems. Feel free to try it out.