[Icecast-dev] icecast relay server performance testing
Alejandro
cdgraff at gmail.com
Fri Jun 10 04:28:57 UTC 2016
Hi Zahar, what value do you have set in
<workers>8</workers>
The recommendation is to set one worker per virtual core.
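
A rough sketch of what I mean, inside the <limits> section of icecast.xml (the
clients ceiling is just a placeholder, adjust it for your own load):

  <limits>
    <workers>8</workers>      <!-- one worker thread per virtual core on an 8-core VM -->
    <clients>20000</clients>  <!-- placeholder ceiling; raise to match your target listener count -->
  </limits>
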
I have been using the KH branch for some years, with 35 concurrent listeners
on an 8-core VM.
Regards,
Alejandro
2016-06-10 0:50 GMT-03:00 Zahar Popov <zahar.popov1978 at yandex.com>:
> Hello
> I'm trying to measure the performance of the icecast relay server on
> 64kbps streams.
>
> The server is running in AWS (I've tried various instance types) and the
> test clients are running on other machines, also in AWS. The test client is
> a very simple libuv application that sends a GET request and basically
> ignores everything it receives in the response. I'm using the icecast-kh
> fork.
>
> I'm able to go up to around 9K simultaneous connections to the server
> (from two machines). CPU usage is low, about 15% or so (on one core).
> However, connections start to be dropped. Checking netstat, I see many
> frames being lost. Increasing the transmit queue length helped, but I still
> can't go beyond around 9K connections. I have also increased the file
> descriptor limits and configured IRQ balancing (even though the problem
> doesn't seem to be CPU bound).
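>
> Concretely, what I changed so far was along these lines (the interface name
> and the values are placeholders, not the exact ones I used):
>
>   # raise the NIC transmit queue length (this is the change that helped)
>   ip link set dev eth0 txqueuelen 10000
>
>   # raise the file descriptor limit in the shell that starts icecast
>   ulimit -n 100000
>
>   # spread NIC interrupts across cores
>   systemctl start irqbalance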
>
> It doesn't matter whether I run one or more instances of the relay server;
> the limit seems to be OS-global, so when one instance is running with 5K
> connections and the other is getting close to 4K, they both start dropping
> connections.
>
> I assume there is some other stack setting that I didn't configure, so I
> was wondering whether anybody has been able to run a few tens of thousands
> of connections on one server.
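>
> The kind of settings I have in mind are only guesses on my side: the usual
> Linux sysctls for large numbers of concurrent TCP connections, e.g. in
> /etc/sysctl.conf (the values below are placeholders, not recommendations):
>
>   fs.file-max = 1000000
>   net.core.somaxconn = 65535
>   net.core.netdev_max_backlog = 65535
>   net.ipv4.tcp_max_syn_backlog = 65535
>   net.ipv4.ip_local_port_range = 10240 65535
>   net.netfilter.nf_conntrack_max = 262144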
>
> thanks!
> --zahar