[Icecast] On bitrate and time [WAS: Re: limit-rate]

Philipp Schafft phschafft at de.loewenfelsen.net
Mon Nov 5 10:26:51 UTC 2018


Good morning,

thank you very much for your feedback!

On Sun, 2018-11-04 at 15:41 +0000, Paul Martin wrote:
> On Sun, Nov 04, 2018 at 11:54:16AM +0000, Thomas B. Rücker wrote:
> 
> > That's not a version.
> > That's completely different software at this point.
> > It's also not Xiph.org, but published by Karl.
> 
> It is a desirable feature, though.

It isn't. And here is why:

There are several problems with such a limit. Let's start with how to
detect it:
        Media streaming inherently has changes in bitrate. This happens
        mainly due to three factors:
              * Any modern codec tries to compress the actual
                information within the signal. As the amount of
                information varies over time, the bitrate changes. This
                happens on the scale of tens of milliseconds to
                hundreds of seconds.
              * Metadata is transported alongside the actual data. Such
                metadata includes not only title updates but also
                updates to encoder settings. Metadata updates are
                point-in-time events that can range from a few bytes
                (e.g. title updates), or a few more bytes (encoder
                settings), up to a few MB (huge cover art).
              * Framing-related modulation. Examples of this are
                filler container frames or TCP corking. The scale of
                this depends on the bitrate (yes, this is the x* = f(x)
                problem class). It is normally on the scale of hundreds
                of milliseconds to a few seconds.
        To calculate a good bitrate estimate (and that is what it is,
        an estimate, as we are bound by causality) we need a window of
        at least a few seconds. And even that will hardly work for
        point-in-time events, as they represent Dirac impulses (which
        can hardly be detected, as they are flattened into bitrate
        "plateaus" by the physical limits of the transport).

        So, detecting bitrate already takes on the order of several
        seconds, maybe minutes. Not very helpful for most use cases in
        a close-to-realtime ecosystem.
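
To make the detection problem concrete, here is a minimal sketch in Python of
a sliding-window bitrate estimator (the class name, API, and 10-second window
are my own assumptions for illustration, not Icecast internals). Note how a
single large point-in-time event, e.g. cover art, swings the estimate far away
from the nominal stream bitrate:

```python
from collections import deque


class BitrateEstimator:
    """Estimate bitrate over a sliding time window.

    Illustrative sketch only: a real server would feed this from its
    transport layer and pick the window to match its use case.
    """

    def __init__(self, window_seconds=10.0):
        self.window = window_seconds
        self.samples = deque()  # (timestamp, byte count) pairs
        self.total = 0          # bytes currently inside the window

    def add(self, nbytes, now):
        """Record nbytes sent at time `now` (seconds, monotonic)."""
        self.samples.append((now, nbytes))
        self.total += nbytes
        # Evict samples that have fallen out of the window.
        while self.samples and self.samples[0][0] < now - self.window:
            _, old = self.samples.popleft()
            self.total -= old

    def bitrate(self, now):
        """Current estimate in bits per second (0 if no samples)."""
        if not self.samples:
            return 0.0
        span = max(now - self.samples[0][0], 1e-9)
        return self.total * 8 / span
```

Even in this toy, ten seconds of a steady 16 kB/s (nominally 128 kbit/s)
stream does not estimate to exactly 128 kbit/s, and one metadata burst
dominates the whole window.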

Next let's look at what this represents:
        The bitrate depends on a lot of factors. It is relevant for
        transport capacity planning (e.g. how many listeners you can
        handle on a given connection). It is also relevant for buffer
        control (e.g. burst size, how long transmission breaks may
        be, ...).
        
        However it does not directly represent time. One might think
        that:
        t_segment = segment size / bitrate.
        However, as shown above, this only holds when t_segment is very
        long.
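
A short numeric sketch of why that formula only holds for long segments (all
numbers here are made up for illustration):

```python
# Nominal stream: 128 kbit/s, i.e. 16000 bytes per second of audio.
NOMINAL_BPS = 128_000


def estimated_duration(segment_bytes):
    """Naive t_segment = segment_size / bitrate, in seconds."""
    return segment_bytes * 8 / NOMINAL_BPS


# A one-hour segment: 57.6 MB of payload plus ~0.5 MB of metadata
# and cover art. The overhead averages out over the long window:
long_est = estimated_duration(57_600_000 + 500_000)
# ~3631 s against a true 3600 s: under 1 % error.

# A two-second segment (32 kB of audio) that happens to carry the
# same 500 kB cover-art update:
short_est = estimated_duration(32_000 + 500_000)
# ~33 s estimated for what is really 2 s of audio.
```

The same point-in-time event that is noise over an hour completely dominates
the estimate over two seconds.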
        
        Please let me also note that bitrate does not correlate
        directly with quality. Just as things are a bit more complex
        with time, so they are with quality. So maybe that is a topic
        for another E-Mail.

Let's look on what we would do with the information:
        You suggested implementing some "limit". So the question is
        what to do when this limit (however it is measured) is reached?
        Possibilities are:
              * We could make a log entry. This could be helpful for
                debugging. Maybe. While I see some use cases, I think
                it's not worth the work to implement it. If you're
                interested you can use any download tool (browser, wget,
                curl, ...) to get a live reading!
              * We could drop the stream as it uses more resources than
                agreed on. Sounds like a feature for stream providers.
                Yet with all the problems of measuring it, it would
                likely generate false positives OR be too relaxed. From
                all I know about streaming providers, they are more
                interested in the stability of the service than in
                minor accounting errors. (And they can still use the
                total number of served bytes for accounting. Which
                would be the best way to do it anyway!)
              * We could pause the stream to throttle the bitrate to
                its limit. (This is what you suggest, if I understand
                you correctly.) This would somehow work. For a small
                class of problems: hardly any metadata in streams,
                hardly changing information within the signal (read:
                noise, and boring music; do NOT read: speech,
                fade-in/fade-out, dynamic music, periods of
                silence, ...), and controlled framing and transport. If
                implemented, reality will come sooner rather than later
                and break such a setup in one way or another.
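
For reference, the "pause to throttle" approach usually looks something like
the following sketch (a simple pacing loop of my own, not Icecast code; the
clock and sleep hooks are parameters only so the behaviour can be tested):

```python
import time


def throttled_send(chunks, limit_bps, send,
                   sleep=time.sleep, clock=time.monotonic):
    """Pace out `chunks` (byte strings) at roughly limit_bps bits/s.

    Sleeps whenever the cumulative byte count runs ahead of the
    byte budget implied by limit_bps. This is the extra "clock"
    discussed below: for live input that already carries its own
    timing, the two clocks will fight.
    """
    start = clock()
    sent_bits = 0
    for chunk in chunks:
        sent_bits += len(chunk) * 8
        # Wall-clock time by which this many bits are "due".
        due = start + sent_bits / limit_bps
        delay = due - clock()
        if delay > 0:
            sleep(delay)
        send(chunk)
```

With a fixed-size chunk stream this paces evenly; with real variable-bitrate
media the sleep intervals jitter with every metadata burst and codec swing.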

General notes about throttling:
        All kinds of throttling add another clock to the system. This
        works fine with static files (as they do not provide a clock
        themselves). So they end up with exactly one clock (which is
        what you want) (see also below).
        
        It does however not work well for live streamed content, as it
        already has a clock signal. It will result in a
        man-with-two-watches problem, PLUS you will be using both
        clocks all the time. Also note that in reality clocks are in
        the 'bad' state (they report the wrong time). E.g. clock drift
        will make the error grow over time until, at some point, the
        buffers can no longer compensate. Clock errors in reality are
        normally up to 1%. However, I have seen 5%.
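
The drift argument can be made concrete with back-of-the-envelope numbers
(the 5-second buffer below is an assumption for illustration):

```python
def seconds_until_buffer_exhausted(drift_fraction, buffer_seconds):
    """Two clocks differing by drift_fraction accumulate
    drift_fraction seconds of error per second of wall time;
    once the error exceeds the buffer depth, playback under-
    or overruns."""
    return buffer_seconds / drift_fraction


# A 5-second client buffer against a typical 1 % drift:
t = seconds_until_buffer_exhausted(0.01, 5.0)       # 500 s, ~8 minutes
# ... and against the 5 % worst case seen in practice:
t_worst = seconds_until_buffer_exhausted(0.05, 5.0)  # 100 s
```

So even a modest buffer is eaten within minutes, which is why the error
"grows until the buffers can no longer compensate".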
        
        As you only talk about static files, this is not much of a
        problem for you. However, if there is a general option, it will
        become a problem for other people.

> The use case is a server with one or more external feeds, where those
> feeds can be intermittent.  You want a fallback-mount to a static file
> for when the feed drops, but you also want the listeners to go back to
> the live feed soon after it returns.
> 
> Unfortunately, the way Icecast2 works with static files is that it
> feeds as much as the listener's player software buffer can take,
> meaning a huge spike in bandwidth use when it falls back to a static
> file, and (more importantly) a huge lag in returning to the correct
> feed when that reconnects.
> 
> The way I'm working round this at the moment is to have an instance of
> liquidsoap on the same server as Icecast, encoding a single static
> file as a set of continually running fallback-mount feeds, with the
> same encoder settings as the feeds they're guarding.  This is wasteful
> in resources (memory and CPU) for what could be pre-encoded static
> files if Icecast had some sort of rate limitation on feeding out
> static files.

About your actual problems:
We are currently running a project with an external partner that will
allow future versions of Icecast to run format-aware *time-based*
throttling for fserv (static file) content (including fallbacks). This
will be a feature post Icecast 2.5 beta3.

If you're in need of a more swift solution, feel free to write me
off-list to discuss options.

I hope that this E-Mail helps you and also our dear fellows reading this
list.

With best regards,

-- 
Philipp Schafft (CEO/Geschäftsführer) 
Telephon: +49.3535 490 17 92

Löwenfelsen UG (haftungsbeschränkt)     Registration number:
Bickinger Straße 21                     HRB 12308 CB
04916 Herzberg (Elster)                 VATIN/USt-ID:
Germany                                 DE305133015