[Icecast-dev] icecast relay server performance testing

Popov, Zahar zahar.popov1978 at yandex.com
Fri Jun 10 07:47:29 UTC 2016


I tried running curl, but the results are about the same, around 10K connections. None of the cores (on either the server or the client) are busy.

thanks
—zahar
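
Since the thread discusses testing with curl, here is a rough sketch of fanning out many listener connections from a shell. The relay host, mount point, and client count are placeholders, not values taken from this thread:

    # Spawn 5000 curl clients; each holds one listener connection open
    # and discards the stream data. Run from several machines so a single
    # test host does not run out of ephemeral ports.
    for i in $(seq 1 5000); do
        curl -s -o /dev/null "http://relay.example.com:8000/stream.mp3" &
    done
    wait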

> On Jun 10, 2016, at 2:47 PM, Popov, Zahar <zahar.popov1978 at yandex.com> wrote:
> 
> I’m going to try running multiple curl processes. The libuv code that I wrote is not of very good quality (even though it’s really simple).
> 
> thanks!
> —zahar
>> On Jun 10, 2016, at 2:43 PM, Alejandro <cdgraff at gmail.com> wrote:
>> 
>> In the past, I used this method:
>> 
>> http://icecast.org/loadtest/1/
>> 
>> But to be honest, nothing compares to a real use case: we found many issues when the connections arrive from many different IPs, whereas the stress test opens them all from a small set of IPs. Still, this test case is used by many others and gives good results.
>> 
>> 2016-06-10 2:40 GMT-03:00 Popov, Zahar <zahar.popov1978 at yandex.com>:
>> I wrote a test application which is based on libuv. iptables is disabled.
>> I’m running the test application from two other machines.
>> 
>> Do you have any suggestions for testing?
>> 
>> thanks!
>> —zahar
>> 
>> 
>> 
>>> On Jun 10, 2016, at 2:38 PM, Alejandro <cdgraff at gmail.com> wrote:
>>> 
>>> Zahar, how are you testing? With some curl stress test? BTW, is iptables enabled?
>>> 
>>> I was running on VMware most of the time, but I ran 10k users on a medium-size box in AWS; I moved away just because of the high transfer cost.
>>> 
>>> 2016-06-10 2:36 GMT-03:00 Zahar Popov <zahar.popov1978 at yandex.com>:
>>> Hi Alejandro,
>>> Here is mine:
>>> <limits>
>>> <workers>4</workers>
>>> <clients>100000</clients>
>>> <sources>2000</sources>
>>> <queue-size>102400</queue-size>
>>> <client-timeout>30</client-timeout>
>>> <header-timeout>15</header-timeout>
>>> <source-timeout>10</source-timeout>
>>> <burst-on-connect>1</burst-on-connect>
>>> <burst-size>65536</burst-size>
>>> </limits>
>>> 
>>> Your queue-size is larger; I will try to increase that.
>>>  
>>> thanks!
>>> --zahar
>>>  
>>> 10.06.2016, 14:31, "Alejandro" <cdgraff at gmail.com>:
>>>> Please share your config, at least the <limits> part; this is my setup:
>>>>  
>>>>     <limits>
>>>>         <workers>8</workers>
>>>>         <clients>100000</clients>
>>>>         <sources>700</sources>
>>>>         <queue-size>524288</queue-size>
>>>>         <client-timeout>30</client-timeout>
>>>>         <header-timeout>15</header-timeout>
>>>>         <source-timeout>10</source-timeout>
>>>>         <burst-size>65535</burst-size>
>>>>     </limits>
>>>> 
>>>> 2016-06-10 2:28 GMT-03:00 Popov, Zahar <zahar.popov1978 at yandex.com>:
>>>> Hi Alejandro,
>>>> Many thanks for your message.
>>>>  
>>>> I changed it to 4 (I have 4 cores), but it didn’t really help. I see that all 4 cores are now working, but connections are still being dropped.
>>>>  
>>>> Which VM type are you using? Or is it not running on AWS?
>>>>  
>>>> thanks!
>>>> —zahar
>>>> 
>>>>> On Jun 10, 2016, at 1:29 PM, Alejandro <cdgraff at gmail.com> wrote:
>>>>> 
>>>>> Sorry, 35K concurrent with 8 workers at 60% CPU
>>>>> 
>>>>> 2016-06-10 1:28 GMT-03:00 Alejandro <cdgraff at gmail.com>:
>>>>> Hi Zahar, what value do you have in
>>>>>  
>>>>>     <workers>8</workers>
>>>>>  
>>>>> It is recommended to set this value to 1 per virtual core.
>>>>>  
>>>>> I've been using the KH branch for some years, with 35 concurrent listeners on an 8-core VM.
>>>>>  
>>>>> Regards, 
>>>>> Alejandro
>>>>> 
>>>>> 2016-06-10 0:50 GMT-03:00 Zahar Popov <zahar.popov1978 at yandex.com>:
>>>>> Hello
>>>>> I'm trying to measure the performance of the icecast relay server on 64kbps streams.
>>>>>  
>>>>> The server is running in AWS (I've tried various instance types) and the test clients are running on other machines in AWS. The test client is a very simple libuv application that sends a GET request and basically ignores everything it receives in the response. I'm using the icecast-kh fork.
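
A minimal sketch of what such a libuv client might look like is shown below. This is not the actual test application from the thread; the relay address, port, and mount point are placeholders. It opens one connection, sends a GET, and discards the response:

    /* Build with: gcc client.c -luv
     * One listener connection that drains and discards the stream. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <uv.h>

    static const char *REQUEST =
        "GET /stream.mp3 HTTP/1.0\r\n"
        "Host: relay.example.com\r\n"
        "\r\n";

    static void alloc_cb(uv_handle_t *handle, size_t suggested, uv_buf_t *buf) {
        (void)handle;
        buf->base = malloc(suggested);
        buf->len = suggested;
    }

    static void read_cb(uv_stream_t *stream, ssize_t nread, const uv_buf_t *buf) {
        free(buf->base);                 /* ignore the stream data entirely */
        if (nread < 0)
            uv_close((uv_handle_t *)stream, NULL);
    }

    static void write_cb(uv_write_t *req, int status) {
        if (status == 0)                 /* request sent: start draining the response */
            uv_read_start(req->handle, alloc_cb, read_cb);
        free(req);
    }

    static void connect_cb(uv_connect_t *req, int status) {
        if (status < 0) {
            fprintf(stderr, "connect failed: %s\n", uv_strerror(status));
            return;
        }
        uv_write_t *wreq = malloc(sizeof(*wreq));
        uv_buf_t buf = uv_buf_init((char *)REQUEST, strlen(REQUEST));
        uv_write(wreq, req->handle, &buf, 1, write_cb);
    }

    int main(void) {
        uv_loop_t *loop = uv_default_loop();
        uv_tcp_t client;
        uv_connect_t conn;
        struct sockaddr_in addr;

        uv_tcp_init(loop, &client);
        uv_ip4_addr("127.0.0.1", 8000, &addr);   /* placeholder relay address */
        uv_tcp_connect(&conn, &client, (const struct sockaddr *)&addr, connect_cb);

        /* A real load generator would create thousands of these handles
         * in a loop; one connection is enough to show the shape. */
        return uv_run(loop, UV_RUN_DEFAULT);
    }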
>>>>>  
>>>>> I'm able to get up to around 9K simultaneous connections to the server (from two machines). The CPU usage is low, about 15% or so (on one core). However, connections start getting dropped. Checking netstat, I see many frames being lost. Increasing the transmit queue length helped, but I still can't go beyond around 9K connections. I have increased the file descriptor limits and configured IRQ balancing (even though the problem doesn't seem to be CPU bound).
>>>>>  
>>>>> It doesn't matter whether I run one or more instances of the relay server; the limit seems to be OS-global, so when one instance is running with 5K connections and the other instance gets close to 4K connections, they both start dropping connections.
>>>>>  
>>>>> I assume there is some other network stack setting that I didn't configure, so I was wondering if anybody has been able to run a few tens of thousands of connections on one server.
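
For this class of problem, the kernel settings below are the kind of thing that is typically checked; they are only an illustrative sketch with example values, not recommendations taken from this thread:

    # Example /etc/sysctl.conf entries often raised for tens of thousands of
    # concurrent TCP connections (apply with `sysctl -p`):
    fs.file-max = 1000000
    net.core.somaxconn = 65535
    net.core.netdev_max_backlog = 250000
    net.ipv4.tcp_max_syn_backlog = 65535
    net.ipv4.ip_local_port_range = 1024 65535

    # Also check the per-process descriptor limit for the user running icecast
    # and for the test clients:
    #   ulimit -n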
>>>>>  
>>>>> thanks!
>>>>> --zahar
>>>>> _______________________________________________
>>>>> Icecast-dev mailing list
>>>>> Icecast-dev at xiph.org
>>>>> http://lists.xiph.org/mailman/listinfo/icecast-dev
>>> 
>> 
>> 
> 


