[Icecast-dev] why HLS/DASH are problematic in an Icecast context

"Thomas B. Rücker" thomas at ruecker.fi
Fri Feb 20 08:12:03 PST 2015

On 02/20/2015 04:05 PM, Eric Richardson wrote:
> On Fri, Feb 20, 2015 at 10:25 AM, Daniel James
> <daniel.james at sourcefabric.org <mailto:daniel.james at sourcefabric.org>>
> wrote:
>     Hi Thomas,
>     > Let's start with HLS:
>     > - It's not a standard. Its current status is that it's an *expired*
>     > draft[1].
>     Does that suggest a lack of interest in an open standard? 
> I'm actually not sure how it is expired... The most recent HLS draft
> was published in October and is valid through April:
> https://tools.ietf.org/html/draft-pantos-http-live-streaming-14

Thanks, I had just followed the links, which got me the previous -13
document, and that one had expired.

> As someone who has been implementing both a server and client around
> HLS, it's actually been a little more active of a document than I
> would wish.
> Thomas has some valid questions around implementability for Icecast,
> but I wanted to take a second to provide a little context for why I as
> a broadcaster like HLS.
> I work for Southern California Public Radio, the largest NPR affiliate
> in the Los Angeles market. We're a small-to-medium online streamer,
> peaking at roughly 3k concurrent on an average day, but we think
> that's going to grow rapidly over the next few years and that Internet
> listening on mobile devices is the future. 
> For us, the short-lived connections in an HLS session seem to map more
> cleanly onto how people use their devices. We want them to start
> listening at home (WIFI) and keep listening as they hop in the car
> (maybe bouncing from 3G to LTE) and then arrive at the office (another
> WIFI network). That's 3-4 IP addresses over the course of a listening
> session. Because we don't have to keep one connection alive, HLS
> handles that on today's network routing realities (at least in
> theory... The implementation doesn't create a 100% perfect experience
> today).

That indeed is an aspect that I didn't think of. Thanks for pointing
that out!
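For anyone following along, the model Eric describes can be sketched
roughly like this (illustrative Python with hypothetical segment names,
not Icecast code): the client re-polls the media playlist and fetches each
new segment with an independent HTTP GET, so a mid-session IP change only
ever puts a single short request at risk, not the whole listening session.

```python
# Sketch of the HLS client polling model (assumed playlist contents, not
# Icecast code). Each segment is fetched with its own short-lived HTTP
# request, so the client's IP address may change between fetches.

def parse_media_playlist(text):
    """Return the segment URIs listed in an M3U8 media playlist."""
    return [line.strip() for line in text.splitlines()
            if line.strip() and not line.startswith("#")]

def new_segments(playlist_text, already_fetched):
    """Segments present in the latest poll that we have not fetched yet."""
    return [uri for uri in parse_media_playlist(playlist_text)
            if uri not in already_fetched]

# Two successive polls of a live playlist; the segment window slides forward.
poll1 = """#EXTM3U
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:100
#EXTINF:10,
seg100.ts
#EXTINF:10,
seg101.ts
"""
# Hypothetical second poll: two older segments replaced by two newer ones.
poll2 = poll1.replace("seg100", "seg102").replace("seg101", "seg103")

fetched = set(parse_media_playlist(poll1))  # pretend these were downloaded
print(new_segments(poll2, fetched))         # only the fresh segments remain
```

The point of the sketch: playback continuity lives in the playlist/buffer
state on the client, not in any one TCP connection to the server.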
I'm wondering, though: what sort of client-side caching does that need so
the buffer won't run dry? Given the anecdotal network handover latencies
(including an elevator blackout) that I see here, the buffer would need to
be almost on the order of minutes.



More information about the Icecast-dev mailing list