[Icecast-dev] listen backlog patch
Micheil Smith
micheil at brandedcode.com
Fri Feb 20 03:53:30 PST 2015
For HLS/DASH, I'm fairly certain SCPR tried running a chunking proxy in front of Icecast but found it not ideal; it would be good to get their experiences in here.
— Micheil
On 20 Feb 2015, at 11:45, "Thomas B. Rücker" <thomas at ruecker.fi> wrote:
> Hi,
>
> On 02/19/2015 03:40 PM, Stephan Leemburg wrote:
>> I don't know if you like top or bottom quoting. That seems to be a
>> big-little endian thing ;-)
>>
>> So, I will top quote and inline quote.
>
> playing it safe, ha!
>
>> Please see my comments inline, below.
>>
>> Kind regards,
>> Stephan
>>
>>
>> On 02/19/2015 04:18 PM, "Thomas B. Rücker" wrote:
>>> Hi,
>>>
>>> On 02/19/2015 03:07 PM, Stephan Leemburg wrote:
>>>> Hello Icecast-dev,
>>>>
>>>> I am new to this list.
>>> Welcome!
>> Thank you.
>>
>>>> I am working for the NPO, the Dutch public broadcasting agency.
>>>> We do a lot of Icecast streaming. We run at least 20 Icecast server
>>>> instances on our media streaming cluster.
>>> That's very nice to hear.
>>>
>>>> We ran into an issue where clients connecting to our streams
>>>> frequently seemed to 'hang' during connection setup. The client
>>>> 'thinks' it is connected, but receives no data.
>>>>
>>>> People suggested that it probably had to do with the poll() call. So, I
>>>> looked into that.
>>>>
>>>> I found that the issue was actually caused by the very low listen
>>>> backlog (5).
>>>>
>>>> On our clusters, we typically set this to 8192. Yes, it is high, but
>>>> we do a _lot_ of streaming and host very high-volume websites.
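For illustration, a minimal C sketch of what the fix amounts to: opening
a listening socket with a configurable backlog instead of a hard-coded 5.
The names and error handling here are illustrative, not Icecast's actual
code. Note that on Linux the value passed to listen(2) is silently capped
by the net.core.somaxconn sysctl, so a backlog of 8192 only takes effect
if that sysctl is raised as well.

    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Sketch only: open a TCP listener with a configurable backlog. */
    static int open_listener(int port, int backlog)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;

        int on = 1;
        setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &on, sizeof on);

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof addr);
        addr.sin_family      = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port        = htons(port);

        /* backlog bounds the completed-connection (accept) queue;
         * the kernel clamps it to net.core.somaxconn. */
        if (bind(fd, (struct sockaddr *)&addr, sizeof addr) < 0 ||
            listen(fd, backlog) < 0) {
            close(fd);
            return -1;
        }
        return fd;
    }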
>>> I'm not very familiar with socket programming, so I will let Philipp
>>> comment on this. Interestingly enough, this issue hasn't come up so
>>> far as far as I can tell, and there are some pretty high-load
>>> deployments out there.
>> We often have 'bursts' of new connections, due to something said on a
>> website, on radio or TV, some Top 2000 end-of-year event, etc. And
>> when we get a lot of simultaneous connection requests, this becomes
>> an issue.
>>
>> I wrote a small server to simulate it, and the Linux backlog acts a
>> little differently than expected.
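What is most likely being observed here is a known Linux behaviour: the
listen(2) backlog bounds the completed-connection (accept) queue, and
with the default net.ipv4.tcp_abort_on_overflow=0 the kernel silently
drops the client's final handshake ACK when that queue is full instead
of sending a reset. The client then considers itself connected while the
server never accepts the connection, which is exactly the "connected,
but no data" symptom described above. A self-contained sketch of the
kind of test server that can reproduce this, assuming Linux with default
sysctls:

    #include <netinet/in.h>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Listen with the old tiny backlog and deliberately never call
     * accept(). Once the accept queue fills, further clients may
     * still complete the TCP handshake but will hang waiting for
     * data that never comes. */
    int main(void)
    {
        struct sockaddr_in addr = {
            .sin_family      = AF_INET,
            .sin_port        = htons(8000),
            .sin_addr.s_addr = htonl(INADDR_ANY),
        };
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0 ||
            bind(fd, (struct sockaddr *)&addr, sizeof addr) < 0 ||
            listen(fd, 5) < 0) {
            perror("setup");
            return 1;
        }
        for (;;)
            pause();   /* never accept(); point many clients here */
    }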
>
> In the context of what you mention just below, yes, I'm immediately
> willing to believe that there is a problem. The scale also explains why
> it's probably not that commonly encountered.
>
>
>>> If you can share that info, what sort of concurrent listener load are we
>>> talking about?
>> Sure I can share; we are a national broadcasting agency funded by tax
>> money, so no secrets here ;-)
>>
>> I just asked the media streaming guys: 70k Icecast connections on a
>> regular day and 150k around special broadcasts (like Top 2000 around
>> New Year).
>
> That's rather sizable.
> There are two things I'd like to bring up in this context:
>
> * We'd be delighted if you/NPO could share a few things that you learned
> are important when deploying Icecast at that scale. Raising the ulimit
> is rather obvious, but you might have run into other things.
>
> * Also if there are more issues, we'd like to hear them, as we want to
> make Icecast even better.
>
> I personally believe in the simplicity of HTTP streaming.
> I've looked at HLS/DASH, and there are various issues that make them
> unnecessarily hard to deal with for little value in return.
>
> Especially in web browsers, the <audio> element is starting to shape up
> and supports simple HTTP streams rather well. The issue of supported
> codecs remains, though, with Opus being a decent long-run candidate.
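As a minimal illustration of that simplicity (the stream URL below is a
placeholder, not a real mount):

    <!-- Plain HTTP streaming in a browser: point the HTML5 <audio>
         element straight at an Icecast mount point. -->
    <audio src="http://stream.example.org:8000/stream.opus"
           controls preload="none"></audio>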
>
> The main focus for Icecast in this context would be to help achieve
> better listener support on mobile devices, which traditionally seem to
> favour HLS. Large stations/networks just create an app that wraps things
> nicely, but there are many smaller ones. I see VLC as a good candidate
> in this context, but for that we also need to improve our stream
> directory at http://dir.xiph.org, as it currently only exports one big
> XML file. We're hoping to work out a good and flexible JSON API and help
> player-software projects integrate it. It's also part of our GSoC ideas.
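Purely as a hypothetical sketch of what a per-station entry in such a
JSON API might look like (every field name here is invented for
illustration; nothing about the format has been decided):

    {
      "server_name": "Example Radio",
      "listen_url":  "http://icecast.example.org:8000/main.opus",
      "genre":       "talk",
      "codec":       "opus",
      "bitrate":     64,
      "listeners":   123
    }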
>
>
>>>> Currently we are using Icecast 2.3. We are migrating to 2.4.
>>>> So, I have written patches for 2.3 and 2.4, but also for the current 2.5
>>>> git tree.
>>>>
>>>> Unfortunately, I am a newbie when it comes to git (sorry). But I do have
>>>> unified diff patch files for the 2.3, 2.4 and 2.5 source trees.
>>>>
>>>> The patched 2.4 Icecast was tested by our media streaming team, and they
>>>> confirmed that their issue was solved by it.
>>>>
>>>> Can I submit them (and how)?
>>> Just send them as attachments to this list, or open a ticket over at
>>> https://trac.xiph.org.
>>> If trac is naughty and thinks you're a spammer, please let me know.
>>>
>>> If there are differences between them, then please send the 2.5 and
>>> 2.4 patches. Otherwise 2.5 will do just fine.
>> I have attached the 2.4 and 2.5 patches.
>
> Thanks, Philipp took a look and we see the problem. Addressing this in
> a way that doesn't open us up to DoS by default will be the main
> challenge. A likely outcome would be exposing the value in the
> configuration and only documenting it in the "very advanced" part.
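Hypothetically, such a setting could end up looking something like the
following in icecast.xml; the element name and placement are invented
here for illustration, not a committed interface:

    <limits>
        <!-- hypothetical element, not in any released Icecast:
             the backlog passed to listen(2); the conservative
             default would stay, and only very large deployments
             would raise it -->
        <listen-backlog>8192</listen-backlog>
    </limits>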
>
> I'd expect that we'll address this one way or the other soon and it will
> be part of 2.5.0 later this year.
>
>
>>> Thanks a lot for taking the time to reach out to us!
>> Thank you for your Open Source contributions and efforts!
>> And as it is Open Source, we can fix problems ourselves and share.
>
> As said, we deeply appreciate that; it's what makes open source
> software better for everyone.
>
>
> Cheers
>
> Thomas
>
> _______________________________________________
> Icecast-dev mailing list
> Icecast-dev at xiph.org
> http://lists.xiph.org/mailman/listinfo/icecast-dev