[Icecast] On low latency

Matt Morris mwpmorris at gmail.com
Thu Jun 30 18:35:50 UTC 2022


+1, yes, many thanks Philipp. This is the exact talk I was disappointed to have missed, so this is very valuable information.

Matt Morris.

Sent from my iPhone

> On 30 Jun 2022, at 16:49, Dennis Heerema <dennis at heerema.net> wrote:
> 
> 
> Hello Philipp,
> 
> Thank you very much for sharing this.
> 
> 
> Kind regards,
> 
> Dennis
> 
> On 30 Jun 2022 at 11:40, "Philipp Schafft (phschafft at de.loewenfelsen.net)" <phschafft at de.loewenfelsen.net> wrote:
> Good morning,
> 
> over at Löwenfelsen we asked on LinkedIn how low people think they can
> go with Icecast in terms of latency. As I think this is also
> interesting for this list, I want to share the results with you. I am
> also going into more technical detail here, as this list is more tech
> focused.
> 
> We asked what is possible: less than 10 s, less than 1 s, less than
> 100 ms, or less than 10 ms. What do you think?
> 
> 
> So let's have a look:
> There are a number of values that add up to the total latency.
> 
> 
> The first one is the network latency. This is basically the time it
> takes for any information to travel from the source to the sink
> (listener) on the network. There are two limiting factors here: the
> network access on both ends and the speed of light once you have
> reached the backbone level.
> 
> Network access delay depends very much on the network access technology
> used (e.g. DSL, cable modem, power line, LTE, ...) as well as the ISP
> and its configuration. The values here have dropped a lot. When I
> started with Icecast it was more like 60..100 ms in Germany; now it is
> more like 2..10 ms on wired connections.
> 
> In most cases your source client is connected via a good, nearly
> backbone-level network, so you can ignore that side. However, if you
> for example have a small studio that only has a consumer-grade uplink,
> you need to keep that in mind.
> 
> Once you have reached the backbone, information will flow at about 1/3
> of the speed of light. The exact figure depends on where you are and
> where you want to send to, but the above has worked as a rule of thumb
> for me. So add about 1 ms per 100 km.
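> 
> As a back-of-the-envelope sketch of that rule of thumb (a small Python
> snippet; the 500 km example distance is just an illustration, not a
> measurement):
> 
>     # Rough one-way backbone propagation delay, using the ~1 ms/100 km
>     # rule of thumb quoted above (effective speed, not vacuum light speed).
>     KM_PER_MS = 100.0
> 
>     def propagation_delay_ms(distance_km: float) -> float:
>         """Return the approximate one-way backbone delay in milliseconds."""
>         return distance_km / KM_PER_MS
> 
>     # Example: a roughly 500 km path adds about 5 ms one way.
>     print(propagation_delay_ms(500))   # 5.0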
> 
> 
> You can consider this part of the latency unchangeable as it is
> directly based on the physics of our universe.
> 
> There is a little more to it; see the notes on jitter later in this mail.
> 
> 
> The next part is signal generation and rendering delay. This basically
> means codec delays, delays of your sound hardware, and delays of all
> the other hardware (such as your RAM, your CPU, your PCI bus, ...).
> 
> This part is somewhat under your control: you can use more modern
> components and get a lower value. But all of this also depends on both
> physics and how well we understand it. It is an area of huge amounts of
> research and development.
> 
> So basically you add up all the numbers: sound card delay, sound card
> interface delay, software delay, codec delay, network interface
> delay, ...
> 
> Most of those will be in the microsecond range, so we can ignore them.
> However sound cards, software, and codecs have a significant delay. At
> least for codecs this has come down a lot over the last 20 years. So
> depending on your configuration you can reach values below 50 ms.
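> 
> A minimal latency-budget sketch in the same spirit (the per-component
> numbers below are made-up example values, not measurements; replace
> them with figures from your own chain):
> 
>     # Sum up the per-component delays of the source-side chain.
>     source_side_ms = {
>         "sound card / ADC":   5.0,   # example value
>         "capture software":  10.0,   # example value
>         "encoder (codec)":   25.0,   # example value
>         "network interface":  0.1,   # usually negligible
>     }
> 
>     for name, ms in source_side_ms.items():
>         print(f"{name:20s} {ms:6.1f} ms")
>     print(f"{'total':20s} {sum(source_side_ms.values()):6.1f} ms")  # ~40 ms here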
> 
> The same applies to your listener. However, the codec delay, which is
> a large part of the source side's delay, is normally much smaller on
> the decoder side. On the other hand, the listener may not have that
> nice professional sound card but something random, adding more delay
> again.
> 
> 
> Now there are two parts left, the delay by Icecast and the delay by the
> listener software. So let's have a look at Icecast:
> 
> We ran some tests (the results were in our last presentation) on the
> delay within Icecast. Basically Icecast forwards data as soon as it
> gets it. (I'm not sure where the myth that Icecast would do some
> buffering comes from.) But I think nobody had really measured that
> before, so we did. In all our tests Icecast forwarded the data in less
> than 500 µs. Please also keep in mind that this was done on
> multi-tasking operating systems (both servers and desktops), so other
> things were going on as well, meaning that Icecast is subject to being
> blocked by other processes as part of the normal operation of the
> operating system. And this is what I have seen in the numbers.
> 
> 
> So the last significant part is the buffer in the listener client. In
> reality this buffer accounts for more than 90% of the latency you get.
> Which is good news, actually: if you control the listener client (e.g.
> the listener is using your app) you are in full control of that
> buffer, so you can select any value you like.
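> 
> As a minimal sketch of that control, assuming a custom client that
> prebuffers a configurable amount of audio before starting playback
> (the URL, bitrate, and prebuffer value are placeholders; handing the
> data to a decoder is left out):
> 
>     # Prebuffer a chosen amount of audio before starting playback.
>     import requests
> 
>     STREAM_URL   = "http://example.com:8000/stream"  # placeholder mount
>     BITRATE_KBPS = 128                               # assumed stream bitrate
>     PREBUFFER_MS = 750                               # your chosen tradeoff
> 
>     prebuffer_bytes = BITRATE_KBPS * 1000 // 8 * PREBUFFER_MS // 1000
> 
>     resp = requests.get(STREAM_URL, stream=True, timeout=10)
>     buffered = bytearray()
>     for chunk in resp.iter_content(chunk_size=4096):
>         buffered.extend(chunk)
>         if len(buffered) >= prebuffer_bytes:
>             break  # enough audio queued; hand off to the decoder here
> 
>     print(f"prebuffered {len(buffered)} bytes (~{PREBUFFER_MS} ms)")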
> 
> However there are a few limitations here to keep in mind:
>  * Network naturally jitters. The jitter is the difference in time it
>    takes for two packets to travel via the network. It is no delay by
>    itself, as when summed up you will always get a sum of zero. However,
>    for smooth playback the listener client must keep at least enough
>    buffer to handle the worst expected jitter. This is what that
>    listener buffer was made for initially. 20 years ago a value of e.g.
>    8 seconds seemed reasonable here. Today I would say that on wired
>    networks a value of 500 ms..1500 ms seems reasonable. Lower in
>    controlled environments.
>  * Mobile networks come with dead spots. The listener's buffer helps
>    with them as well, as they look like jitter. Dead spots can last
>    from a few milliseconds to tens of seconds. So again: select a value
>    that gives the best tradeoff for your use case (see the sketch after
>    this list).
>    Bigger buffer: more reliable, more delay.
>    Smaller buffer: less reliable, less delay.
>  * Browsers are very bad as media players. You will often have a hard
>    time really controlling them. So just because you added an audio
>    element doesn't mean that you have playback under control.
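> 
> As a minimal sketch of that buffer tradeoff (the observed gap values
> are invented for illustration; the idea is simply to size the buffer
> to the worst interruption you are willing to survive, plus headroom):
> 
>     # Pick a listener buffer size from observed inter-arrival gaps.
>     observed_gaps_ms = [18, 22, 19, 250, 21, 20, 1200, 23, 19, 400]
> 
>     HEADROOM = 1.5                       # assumed safety factor
>     worst_gap_ms = max(observed_gaps_ms)
>     buffer_ms = worst_gap_ms * HEADROOM
> 
>     print(f"worst observed gap: {worst_gap_ms} ms")
>     print(f"suggested buffer:   {buffer_ms:.0f} ms")
>     # Bigger buffer: more reliable, more delay.
>     # Smaller buffer: less reliable, less delay.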
> 
> 
> 
> So, in conclusion:
> Icecast provides low-latency forwarding of data. In the area of
> reliable streams ("the music never stops") it provides the lowest
> latency possible by the laws of physics.
> 
> What latency you can expect depends on your setup and configuration.
> Are you more optimistic? Or do you want to play it more conservatively?
> Do you have your network, hardware, and listener clients under control?
> Or maybe only parts of that? What kind of network is it, anyway?
> 
> In reality it seems like the values you can get are around 20..500 ms
> plus the listener playback buffer (which can range from a few
> milliseconds to a few seconds).
> 
> Also keep in mind that the numbers normally look much worse than they
> are: sound travels at about 343 m/s (air, normal conditions). So every
> ms of additional latency corresponds to someone sitting 34.3 cm further
> away from the speaker. So dancing to the music in your living room adds
> another 15 ms...
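> 
> The same arithmetic as a throwaway snippet (using the 343 m/s figure
> from above):
> 
>     # Express latency as an equivalent listening distance from the speaker.
>     SPEED_OF_SOUND_M_PER_S = 343.0   # air, normal conditions
> 
>     def latency_as_distance_m(latency_ms: float) -> float:
>         return SPEED_OF_SOUND_M_PER_S * latency_ms / 1000.0
> 
>     print(latency_as_distance_m(1))    # 0.343 m per extra millisecond
>     print(latency_as_distance_m(15))   # ~5.1 m -- dancing across the room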
> 
> 
> Hope this post was of interest. Also, congratulations to everyone who
> made it to the end.
> 
> If anyone is interested in how we measured it, feel free to drop me a
> mail off-list. Also, if you're a CDN and want some measurements done or
> have other questions, feel free to contact us as well.
> 
> 
> With best regards,
> 
> -- 
> Philipp Schafft (CEO/Geschäftsführer) 
> Telephon:  +49.3535 490 17 92
> Website:   https://www.loewenfelsen.net/
> Follow us: https://www.linkedin.com/company/loewenfelsen/
> 
> Löwenfelsen UG (haftungsbeschränkt)     Registration number:
> Bickinger Straße 21                     HRB 12308 CB
> 04916 Herzberg (Elster)                 VATIN/USt-ID:
> Germany                                 DE305133015
> 
> _______________________________________________
> Icecast mailing list
> Icecast at xiph.org
> http://lists.xiph.org/mailman/listinfo/icecast