[Icecast] Education - 1,000s, 100,000's, Millions of listeners. (What kind of infrastructure)
Wayne Barron
wayne at cffcs.com
Wed Mar 20 22:05:58 UTC 2024
Tom and Frederick.
Thank you both for your input. It is greatly appreciated, not only by me
but, I am sure, by many others who find this thread.
Tom - I am using a Linux server for Icecast.
HLS was brought to my attention a while back on the liquidsoap forum.
I have not had a chance to look into it completely, but I do plan on
checking it out.
Tom, you have your Icecast servers in a round-robin configuration.
Are you using NGINX for that?
I have watched several YouTube videos on using it for round robin, as
well as SSL and other things.
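For reference, NGINX does round-robin load balancing with an `upstream` block. A minimal sketch, assuming hypothetical hostnames (`ice1`-`ice3.example.com`, `stream.example.com`) and the default Icecast port:

```nginx
# Hypothetical round-robin across three Icecast back-ends.
# Requests to the front-end are handed to the pool in rotation.
upstream icecast_pool {
    server ice1.example.com:8000;
    server ice2.example.com:8000;
    server ice3.example.com:8000;
}

server {
    listen 80;
    server_name stream.example.com;

    location / {
        proxy_pass http://icecast_pool;
        proxy_buffering off;      # audio streams must not be buffered
        proxy_read_timeout 1h;    # keep long-lived listener connections open
    }
}
```

Note that this differs from the DNS-based round robin Tom describes below: here a single front-end proxies all traffic, so its bandwidth becomes the bottleneck, whereas DNS round robin sends listeners directly to each node.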
I understand that once I reach a scenario of, say, 10,000 or more listeners,
I will have to look into either a bigger infrastructure on my side or
leasing capacity in the cloud somewhere.
Now, if I did go with the cloud, would that be to host the servers, just
the files, or everything?
Wayne
On Wed, Mar 20, 2024, 4:50 PM <thomas.zumbrunnen at gmail.com> wrote:
> Dear all
>
>
>
> My 5 cents (or Rappen in CH) if it comes to serving many clients.
>
> We have been running a four-node cluster for several years – rock solid and
> without any issues. This cluster serves many thousands of listeners from all
> over the world. Our source transcoder sends the audio streams to each node,
> so transcoding power is not an issue here. The four nodes are
> geographically dispersed across three countries in Europe. In our case each
> node runs Debian with Icecast and has 10 Gbit connectivity with brilliant
> worldwide peerings. Good peering is key, so choose your ISP wisely 😊. Each
> Icecast server has the same multi-domain SSL certificate, which allows us to
> deliver to several customers (each customer gets a subdomain). The cluster is
> round-robin load balanced using AWS Route53. This approach can probably be
> achieved with other DNS providers like Cloudflare as well. For example, if one
> node needs to be taken down for maintenance, Route53 throws the node out of
> the DNS automatically; this is achieved with “health checks”. This
> mechanism is pretty fast and responsive. If a client gets disconnected and
> tries a reconnect, the round-robin DNS passes the client immediately to a
> working node. No issues here as well. It just works.
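The health-check mechanism described above can be sketched with the AWS CLI. This is an illustrative fragment, not Tom's actual setup: the IP address is a placeholder from the documentation range, and the choice of Icecast's `/status-json.xsl` page as the probe target is an assumption.

```shell
# Hypothetical sketch: create a Route53 health check that probes one
# Icecast node. DNS records associated with this check are withdrawn
# from rotation automatically when the probe fails.
aws route53 create-health-check \
  --caller-reference "ice-node1-$(date +%s)" \
  --health-check-config '{
    "IPAddress": "203.0.113.10",
    "Port": 443,
    "Type": "HTTPS",
    "ResourcePath": "/status-json.xsl",
    "RequestInterval": 10,
    "FailureThreshold": 3
  }'
```

The returned health-check ID would then be attached to that node's A record in the hosted zone, so the record is served only while the node answers the probe.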
>
>
>
> In the beginning we experienced similar issues, even though the
> bandwidth capacity of the VMs was never the root cause. We identified
> some solutions:
>
>
>
> 1. Linux TCP stack tuning. Cloudflare has published many studies about
> this in their blogs, but you will find a lot about this tuning elsewhere
> on the internet as well.
> 2. Consider baking your own kernel, tuned for high throughput – this
> goes hand in hand with TCP stack tuning.
> 3. Tune the Linux open-file limits and adjust the init start script
> for the Icecast server. Example: start Icecast with ulimit -c unlimited
> and ulimit -n 32768.
> 4. Consider using FreeBSD instead of Linux. FreeBSD has the better
> TCP stack out of the box.
> 5. If none of this is feasible for you, just add a new node to the
> cluster and spread the clients across more nodes.
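Points 1 and 3 above can be sketched concretely. The sysctl values here are illustrative starting points for many concurrent long-lived TCP connections, not the poster's recommended settings; as he says, the right numbers come out of your own load testing.

```shell
# Hedged sketch of common network sysctl knobs for a host serving many
# long-lived streaming connections (values illustrative, requires root):
sysctl -w net.core.somaxconn=4096              # listen backlog ceiling
sysctl -w net.core.netdev_max_backlog=16384    # per-CPU packet backlog
sysctl -w net.ipv4.tcp_max_syn_backlog=8192    # half-open connection queue
sysctl -w net.ipv4.ip_local_port_range="1024 65535"
sysctl -w net.ipv4.tcp_fin_timeout=30          # reclaim sockets faster

# Open-file limit for the Icecast process, as suggested in point 3
# (each listener consumes a file descriptor):
ulimit -c unlimited
ulimit -n 32768
```

To make the sysctl values persistent they would go in `/etc/sysctl.conf` (or a file under `/etc/sysctl.d/`), and the ulimit lines belong in the init or systemd unit that starts Icecast so they apply to that service.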
>
>
>
> For points 1 and 2 I can't give you an “out of the box” solution or
> default settings. It's an iterative process: adjust, load test,
> monitor, and repeat.
>
> The result will need to fit your requirements, so every setup might need
> different tuning. And by the way: do not try using Icecast on Windows
> servers if you need to serve a lot of clients 😊
>
>
>
> Happy icecasting
>
> tom
>
>
>
> *From:* Icecast <icecast-bounces at xiph.org> *On Behalf Of *Fred Gleason
> *Sent:* Wednesday, March 20, 2024 20:53
> *To:* Icecast streaming server user discussions <icecast at xiph.org>
> *Subject:* Re: [Icecast] Education - 1,000s, 100,000's, Millions of
> listeners. (What kind of infrastructure)
>
>
>
> On Mar 20, 2024, at 13:16, Wayne Barron <wayne at cffcs.com> wrote:
>
>
>
> With Windows and Linux web servers, we can create a forest of web
> servers and send traffic to different ones to even out the workload.
>
> Can we do something like this with the Icecast servers?
> (or)
> Will we have to install new VMs, add the heavy stations on that one,
> and send the new traffic there?
>
>
>
> Ok, I’m going to be “that guy”…
>
>
>
> I would argue that, as soon as you’ve hit an audience size of 10,000 or
> more (especially if that audience is at all geographically dispersed),
> IceCast is basically off the table. The reason why can be summarized in
> three letters: “CDN” [Content Distribution Networks].
>
>
>
> To fan out to large, geographically dispersed audiences of 10,000 or more
> (not to mention 100k’s or, Lord help us, 1M’s or more), you need to get
> content cached in locations that are geographically close to your
> listeners. By far the easiest (read: most cost effective) way to do this at
> scale is to leverage the already existing infrastructure of CDNs (companies
> like Akamai or CloudFlare, that have a world-wide footprint). That means
> using streaming formats that utilize segmented distribution mechanisms,
> such as HLS or DASH. You can kinda-sorta do this sort of thing with IceCast
> by using relays, but it’s complex to configure and monitor while not being
> well supported at many CDNs (Akamai, for example, discontinued their IceCast
> product offering several years ago). HLS OTOH plays very well with that
> infrastructure because it’s effectively just a bunch of static files that
> get replicated via HTTP[S]. No special “server” software is required;
> bog-standard Apache or Nginx work just fine, because the complex “media
> handling” bits have been intentionally pushed to the endpoints; namely the
> encoder and (especially) the players. Today though, when FOSS HLS audio
> encoders are available and pretty much every browser supports playing HLS
> content natively, the complexity angle can be largely ignored by content
> creators.
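As a concrete illustration of the "bunch of static files" point, an HLS origin can be produced by repackaging an existing stream with ffmpeg. This is a minimal sketch, not Fred's setup; the source URL, bitrate, and output path are all hypothetical:

```shell
# Hedged sketch: repackage an Icecast MP3 stream as HLS segments that any
# plain web server (Apache, nginx) or CDN origin can serve as static files.
ffmpeg -i https://stream.example.com/live.mp3 \
  -c:a aac -b:a 128k \
  -f hls \
  -hls_time 6 \
  -hls_list_size 10 \
  -hls_flags delete_segments \
  /var/www/html/live/playlist.m3u8
```

The output is just a rolling `playlist.m3u8` plus `.ts` segment files on disk; the web server needs no streaming-specific configuration, which is exactly why this fans out through CDNs so cheaply.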
>
>
>
> Just my take. That, and 2 € will get you a (cheap) cup of coffee…
>
>
>
> Cheers!
>
>
>
>
>
> |---------------------------------------------------------------------|
> | Frederick F. Gleason, Jr.             | Chief Developer             |
> |                                       | Paravel Systems             |
> |---------------------------------------------------------------------|
> | All progress is based upon a universal innate desire of every       |
> | organism to live beyond its income.                                 |
> |                                                                     |
> |                                         -- Samuel Butler            |
> |                                            "Notebooks"              |
> |---------------------------------------------------------------------|
>
>
> _______________________________________________
> Icecast mailing list
> Icecast at xiph.org
> http://lists.xiph.org/mailman/listinfo/icecast
>
>