[xiph-commits] r10039 - websites/icecast.org
msmith at svn.xiph.org
Thu Sep 22 06:51:04 PDT 2005
Author: msmith
Date: 2005-09-22 06:51:02 -0700 (Thu, 22 Sep 2005)
New Revision: 10039
Modified:
websites/icecast.org/loadtest.php
Log:
De-typoify.
Modified: websites/icecast.org/loadtest.php
===================================================================
--- websites/icecast.org/loadtest.php 2005-09-22 13:45:41 UTC (rev 10038)
+++ websites/icecast.org/loadtest.php 2005-09-22 13:51:02 UTC (rev 10039)
@@ -12,7 +12,7 @@
<p>This load test was not designed to be a complete and total analysis of how icecast behaves under load, but rather to provide
some insight into what happens to icecast when load (i.e. listeners) increases.</p>
<p>The main goal was to answer the following questions :<br></br><br></br>
-* <b>Is there a maximum amount of listeners that icecast can reliably handle ?</b><br></br>
+* <b>Is there a maximum number of listeners that icecast can reliably handle ?</b><br></br>
* <b>What kind of CPU utilization occurs in icecast configurations with large numbers of listeners ?</b><br></br>
* <b>What does the network utilization look like for large numbers of listeners ?</b><br></br>
</p>
@@ -20,7 +20,7 @@
<h3>Test Hardware</h3>
<p>In order to test a large number of listeners, I knew that the network would be the first limiting factor. So for this
reason, I performed this test using gigabit ethernet (1000Mbit/sec) cards in each of the machines. </p>
-<p>There were 3 machines used in the test, 1 to run icecast (and icecast only), and 2 to server as "listeners".</p>
+<p>There were 3 machines used in the test, 1 to run icecast (and icecast only), and 2 to serve as "listeners".</p>
<p>The specs of each of the boxes were identical and looked like :</p>
<p>
@@ -78,7 +78,7 @@
</pre>
</p>
<p>
-This script was run on each of the 2 load driving nodes. This script will incrementally add listeners to the icecast server at regular intervals
+This script was run on each of the 2 load driving nodes. It incrementally adds listeners to the icecast server at regular intervals;
10 listeners would be added every 10 seconds (with 2 machines, that's a total of 20 listeners every 10 seconds). I ran this script for about 2 hours
before it ended up forking too many processes for the load driving boxes.
</p>
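The script itself sits in the <pre> block that closes just above this hunk, so it isn't fully shown here. Purely as a rough Python sketch of the same idea (not the actual script; the stream URL and counts below are placeholders):

    #!/usr/bin/env python
    # Rough sketch of a load driver: add BATCH listeners every
    # INTERVAL seconds by spawning curl processes that fetch the
    # stream and discard it. URL and counts are placeholders.
    import subprocess, time

    STREAM_URL = "http://icecast-host:8000/test.ogg"  # placeholder
    BATCH = 10       # listeners added per interval, per node
    INTERVAL = 10    # seconds between batches

    listeners = []
    while True:
        for _ in range(BATCH):
            listeners.append(subprocess.Popen(
                ["curl", "-s", "-o", "/dev/null", STREAM_URL]))
        print("listeners on this node:", len(listeners))
        time.sleep(INTERVAL)

Each node keeps forking new curl processes, which matches the failure mode described above: the load driving boxes eventually run out of process headroom, not the icecast server.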
@@ -89,7 +89,7 @@
<p>
In addition to the load driving script, I used a custom tool that I wrote for performance testing. In this case, I just used
the "data gathering" and graphing portion of it. With this tool I captured basic system stats from each machine along with the listener count on
-the icecast server. These stats were all coorelated together and I created graphs to represent the data.
+the icecast server. These stats were all correlated together and I created graphs to represent the data.
</p>
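That tool isn't included here. Purely as a sketch of the data-gathering side, assuming the listener count is scraped from icecast's status page (the URL and regex below are guesses, not what the tool actually did):

    # Sketch only: log the listener count and raw CPU counters
    # side by side, so the two series can be correlated later.
    # STATUS_URL and the regex are assumptions about the setup.
    import re, time, urllib.request

    STATUS_URL = "http://icecast-host:8000/status.xsl"  # placeholder

    def cpu_counters():
        # First line of /proc/stat: "cpu user nice system idle ..."
        return open("/proc/stat").readline().split()[1:5]

    with open("samples.csv", "w") as log:
        while True:
            page = urllib.request.urlopen(STATUS_URL).read().decode("utf-8", "replace")
            m = re.search(r"Current Listeners:\D*(\d+)", page)
            count = m.group(1) if m else "?"
            log.write("%d,%s,%s\n" % (int(time.time()), count, ":".join(cpu_counters())))
            log.flush()
            time.sleep(10)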
<p>
For this test, only one stream was being sent to icecast, and it was being sourced using an OddcastV3 client sending a ~12kbps vorbis stream (Aotuvb4, quality -2, 11025Hz, Mono).
@@ -104,15 +104,15 @@
<img src="loadtest/cpu.jpg"><br></br>
</p>
<p>
-From this graph, you can see that the maxmimum number of listeners that I could simulate was about 14000. It is important to note
-that this is NOT a limitation of icecast, but rather just of the hardware that I used. It can be seen that the total cpu utilization
+From this graph, you can see that the maximum number of listeners that I could simulate was about 14000. It is important to note
+that this is <em>not</em> a limitation of icecast, but rather of the hardware that I used. It can be seen that the total cpu utilization
is about 20% at 14000 listeners, with a breakdown of ~ 15% system and 5% user CPU. It can also be seen that system and user CPU
utilization basically follows a fairly linear progression upwards, so assuming this holds, the max number of listeners
a single icecast instance could handle (given similarly sized hardware to mine) would be 14000 * 4 = 56000 (you probably don't want to run at
> 80% cpu utilization).
</p>
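Spelled out, that estimate is just linear scaling of the measured CPU figure:

    # Linear extrapolation from the measured point quoted above:
    # ~20% total CPU at 14000 listeners, capped at 80% CPU.
    measured_listeners = 14000
    measured_cpu = 0.20   # ~15% system + ~5% user
    cpu_ceiling = 0.80    # don't run hotter than this

    headroom = cpu_ceiling / measured_cpu          # 4.0
    print(int(measured_listeners * headroom))      # 56000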
<p>
-Speaking of network activity, the next graph shows network packes sent out by the icecast box as a function of listeners.
+Speaking of network activity, the next graph shows network packets sent out by the icecast box as a function of listeners.
</p>
<p>
<img src="loadtest/network.jpg"><br></br>
@@ -123,13 +123,13 @@
clients that would be 14000 * 12 = 168,000 kbps + 10% TCP Overhead = ~ 184Mbps. So we had plenty of room for more listeners with a GBit card.
And using our derived max of 56000 listeners and assuming the 12kbps stream rate that we used, that would mean :<br></br>
56000 * 12 = 672,000 kbps + 10% TCP Overhead = ~ 740Mbps.<br></br>
-Note that most broadcasters don't use 12kbps for a stream, so I would say that for MOST broadcasters, you will almost always be limmited by your
+Note that most broadcasters don't use 12kbps for a stream, so I would say that for MOST broadcasters, you will almost always be limited by your
network interface. Definitely if you are using 10/100 equipment, and quite possibly even if using GBit equipment.
</p>
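As a quick sanity check of those figures (per-listener rate in kbps, results in Mbps):

    # Bandwidth estimates for the numbers quoted above,
    # including the assumed 10% TCP/IP overhead.
    bitrate_kbps = 12
    overhead = 1.10

    for listeners in (14000, 56000):
        mbps = listeners * bitrate_kbps * overhead / 1000.0
        print(listeners, "->", round(mbps, 1), "Mbps")  # ~184.8 / ~739.2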
<h3>Conclusion</h3>
<p>
So to answer our questions : <br></br><br></br>
-* <b>Is there a maximum amount of listeners that icecast can reliably handle ?</b><br></br>
+* <b>Is there a maximum number of listeners that icecast can reliably handle ?</b><br></br>
<i>Well, we know that it can definitely handle 14000 concurrent users given a similarly sized hardware configuration.
We can conclude that icecast itself can handle even more concurrent users, with the main limitation most likely being
the network interface.</i><br></br>