[Icecast] IceCast Server (2.3.2) Limits? Disconnections due to user and memory?

"Thomas B. Rücker" thomas at ruecker.fi
Tue Feb 24 04:56:11 PST 2015


On 02/24/2015 11:40 AM, Dean Sauer wrote:
> On Mon, 23 Feb 2015 06:26:16 +0000, Thomas B. Rücker wrote:
>
>> We'd need error logs from such an incident, preferably at log level 4.
> The error log from Icecast is as follows when this happened last:
> [2015-02-15  16:24:08] DBUG source/get_next_buffer last 1424035387, timeout 60, now 1424035448
> [2015-02-15  16:24:08] WARN source/get_next_buffer Disconnecting source due to socket timeout
> [2015-02-15  16:24:08] INFO source/source_shutdown Source "/feed" exiting
> [2015-02-15  16:24:08] DBUG source/source_run_script Starting command /etc/icecast2/feeddown4.sh
> [2015-02-15  16:24:08] DBUG source/get_next_buffer last 1424035387, timeout 60, now 1424035448
> [2015-02-15  16:24:08] WARN source/get_next_buffer Disconnecting source due to socket timeout
> [2015-02-15  16:24:08] INFO source/source_shutdown Source "/feed" exiting
> [2015-02-15  16:24:08] DBUG source/source_run_script Starting command /etc/icecast2/feeddown4usc.sh
> [2015-02-15  16:24:08] DBUG source/get_next_buffer last 1424035387, timeout 60, now 1424035448
> [2015-02-15  16:24:08] WARN source/get_next_buffer Disconnecting source due to socket timeout
> [2015-02-15  16:24:08] INFO source/source_shutdown Source "/feed" exiting
> [2015-02-15  16:24:08] DBUG source/get_next_buffer last 1424035387, timeout 60, now 1424035448
> [2015-02-15  16:24:08] WARN source/get_next_buffer Disconnecting source due to socket timeout
> [2015-02-15  16:24:08] INFO source/source_shutdown Source "/feed" exiting
> [2015-02-15  16:24:08] DBUG source/source_run_script Starting command /etc/icecast2/kingkendown.sh
> [2015-02-15  16:24:08] DBUG source/source_run_script Starting command /etc/icecast2/feeddown4.sh
> [2015-02-15  16:24:08] DBUG source/get_next_buffer last 1424035387, timeout 60, now 1424035448
> [2015-02-15  16:24:08] WARN source/get_next_buffer Disconnecting source due to socket timeout
> [2015-02-15  16:24:08] INFO source/source_shutdown Source "/feed" exiting
> [2015-02-15  16:24:08] DBUG source/source_run_script Starting command /etc/icecast2/feeddown4.sh
> [2015-02-15  16:24:08] DBUG source/get_next_buffer last 1424035387, timeout 60, now 1424035448
> [2015-02-15  16:24:08] WARN source/get_next_buffer Disconnecting source due to socket timeout
> [2015-02-15  16:24:08] INFO source/source_shutdown Source "/feed" exiting
> [2015-02-15  16:24:08] DBUG source/source_run_script Starting command /etc/icecast2/kingkendown.sh
> [2015-02-15  16:24:08] DBUG stats/modify_node_event update node listeners (106)

I wonder why it claims that the source shutting down is "/feed" every
time; that's highly suspicious, especially as a different script is run
for some of those shutdowns.
Please provide icecast.xml with passwords removed.


> Doubtful I get this either...
>
> There also is the "spawning" issue... I've asked what is causing 
> Icecast to "spawn" new server instances... as I recall the replies are 
> that Icecast CAN NOT SPAWN new instances... 

Correct. If you see other processes, they can only be related to the
scripts you spawn from inside Icecast.
I'd recommend avoiding long-running scripts in general. If you need
something long-lived, spawn a separate process for it, detached from
the original script.
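As a minimal sketch of that pattern (the script name and log path here are hypothetical, not taken from the setup under discussion): the wrapper does only quick work itself, then hands anything long-running to a process in its own session, so the script itself returns to Icecast immediately.

```shell
#!/bin/sh
# feeddown4.sh (hypothetical) - invoked by Icecast on source disconnect.
# Do only quick work here, then hand long-running work to a detached
# process so this script exits immediately and Icecast never waits on it.

# quick, cheap action: log the event
echo "$(date '+%F %T') source /feed disconnected" >> /tmp/feed-events.log

# long-running work: start it in a new session via setsid, detached from
# this script, with stdio redirected so it holds no handles back to Icecast
setsid /bin/sh -c 'sleep 5; echo "relay restarted" >> /tmp/feed-events.log' \
    </dev/null >/dev/null 2>&1 &

# this script now returns immediately; the detached work continues on its own
```

The key point is the combination of `setsid`, the redirections, and `&`: the child survives the wrapper's exit and cannot block or tie up the Icecast process that spawned it.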


> Before I rebooted this thing there were like 30+ instances of Icecast 
> running... after a week there are now 5

What are you doing inside the scripts that you spawn?
Do those scripts terminate cleanly? How long do they run?
Can you attach such a script for further analysis?


> top - 06:13:09 up 8 days, 18:25,  0 users,  load average: 0.00, 0.03, 0.00
> Tasks:  15 total,   1 running,  14 sleeping,   0 stopped,   0 zombie
> Cpu(s):  0.1%us,  0.5%sy,  0.0%ni, 99.4%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
> Mem:   2097152k total,    83800k used,  2013352k free,        0k buffers
> Swap:  2097152k total,        0k used,  2097152k free,    36984k cached
>
>   PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
>   668 icecast2  20   0 1841m  11m 1480 S    3  0.6 698:52.42 icecast2
>     1 root      20   0 23768  880  504 S    0  0.0   0:01.76 init
>     2 root      20   0     0    0    0 S    0  0.0   0:00.00 kthreadd/54931
>     3 root      20   0     0    0    0 S    0  0.0   0:00.00 khelper/54931
>   221 root      20   0 49992 1240  628 S    0  0.1   0:10.51 sshd
>   347 root      20   0 19068  600  396 S    0  0.0   0:01.24 cron
>   349 root      20   0  6408  624  464 S    0  0.0   0:10.64 syslogd
>   646 Debian-e  20   0 47480  880  432 S    0  0.0   0:00.21 exim4
>   732 root      20   0 36012  332   40 S    0  0.0   0:19.86 vzctl
>   733 root      20   0 18272 1152  444 S    0  0.1   0:00.09 bash
>  1082 icecast2  20   0 1647m 6152  104 S    0  0.3   0:00.00 icecast2
>  4009 icecast2  20   0 1841m  10m  112 S    0  0.5   0:00.00 icecast2
>  8120 icecast2  20   0 1776m 9036  104 S    0  0.4   0:00.00 icecast2
>  8244 root      20   0 17160  892  632 R    0  0.0   4:52.08 top
> 29360 icecast2  20   0 1841m  10m  112 S    0  0.5   0:00.00 icecast2
>


A VSZ of 1841M is highly suspicious. In combination with an RSS below
10M and almost all memory listed as free, it looks like a massive
virtual memory allocation of some sort.

Have you modified any of the buffer or burst parameters in Icecast? Can
you provide your icecast.xml (passwords removed) as an attachment?
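For reference, those parameters live in the <limits> section of icecast.xml. The values below are roughly the 2.3.x defaults as I recall them (verify against the sample config shipped with your package); note the log above shows "timeout 60", i.e. a source-timeout raised from the default:

```xml
<limits>
    <!-- per-mount stream queue in bytes; a listener that falls further
         behind than this gets dropped -->
    <queue-size>524288</queue-size>
    <!-- seconds Icecast waits for source data before disconnecting it;
         the log above implies this was set to 60 -->
    <source-timeout>10</source-timeout>
    <!-- burst-on-connect sends buffered data to a new listener so its
         player buffer fills quickly -->
    <burst-on-connect>1</burst-on-connect>
    <burst-size>65535</burst-size>
</limits>
```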

This might also be related to a security issue that was discovered and
fixed last year:
https://trac.xiph.org/ticket/2089
The fix was part of the 2.4.1 release of Icecast. I don't know whether
it was backported by Debian/Ubuntu and released to their users; you'd
need to check with Ubuntu. Fedora and CentOS/EPEL, at least, upgraded
to 2.4.1 as a result.


>> I don't see staying with this particular virtualization technology and
>> focusing on RAM as beneficial to resolving this.
>> I actually suspect that it's OpenVZ, with its rather old and
>> problem-prone "virtualization", that is at the root of this. Or it's
>> just a highly overcommitted host and you get squeezed out by one of
>> the other tenants. Either way, moving on is the answer.
>
> One of two things is going on here:
>
> 1) Network filtering external to the host... I know this host has 
> something that does this on ALL clients for some things... they "leak" 
> this out if you dig around... then they offer DDoS services, which I 
> don't use...
>
> BUT...
>
> I think after the past few days... something causes Icecast to FAIL, thus 
> it disappears... and respawns somehow...

Icecast does not respawn by itself, and none of the distributions I know
of restart Icecast if it fails. The latter might change in the future
with the spread of systemd, but Ubuntu 12.04 certainly doesn't in the
default installation.


>  and then clients which THINK they are 
> connected are NOT CONNECTED to the NEWLY SPAWNED server, thus are 
> "offline"/unavailable... even though the source(s) think they are connected.

Looking at the start time/date of the Icecast process, e.g. with "ps
aux", should tell you when it was started.
Identifying the true Icecast server process might be tricky given those
ghosts. It would help to turn on Icecast's pid-file option and check
the pid-file for the real PID.
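A sketch of that check follows. It assumes the pidfile option in icecast.xml points somewhere like /var/run/icecast2/icecast.pid (an assumed path, not a confirmed default); to stay self-contained, the example writes a stand-in pid-file instead of relying on a running Icecast.

```shell
#!/bin/sh
# Sketch: find the real icecast2 process via its pid-file.
# In icecast.xml you would enable something like:
#   <pidfile>/var/run/icecast2/icecast.pid</pidfile>
# Here a stand-in file holding our own PID takes its place.
PIDFILE=/tmp/icecast.pid
echo "$$" > "$PIDFILE"      # in real use, Icecast writes this file itself

PID=$(cat "$PIDFILE")

# Print the PID, its start time, and command name; any icecast2 process
# with a different PID is one of the "ghosts".
ps -o pid=,lstart=,comm= -p "$PID"
```

Comparing the `lstart` value against when you last (re)started Icecast tells you whether the process listening now is the one you think it is.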


> Does anyone know what scripts the Icecast package in 12.04 64b installs 
> that start Icecast on boot? I may nuke that, and see if it kills the 
> respawns... or even more so Icecast has to be started manually... a PITA, 
> but there is something to this respawn issue 

/etc/init.d/icecast2 is ONLY executed on boot/shutdown or manually.
There should be no need WHATSOEVER to mess with this.
I'd recommend cleaning up the on-connect/disconnect scripting situation
first.


>> I am quite sure that you'll find switching to a host that uses a more
>> mature virtualization technology will help.
>> As you want to stay with 2.3.2, I'd suggest finding one that has Ubuntu
>> 12.04 
> A new host is on the way.

Let us know how that goes.



Thomas



More information about the Icecast mailing list