[xiph-commits] r10371 - in websites/icecast.org: . loadtest
oddsock at svn.xiph.org
Mon Nov 14 19:09:51 PST 2005
Author: oddsock
Date: 2005-11-14 19:09:28 -0800 (Mon, 14 Nov 2005)
New Revision: 10371
Added:
websites/icecast.org/loadtest.php
websites/icecast.org/loadtest/LoadTest2_IdleCPU_vs_sources.png
websites/icecast.org/loadtest/LoadTest2_UserSystemIOWait_vs_sources.png
websites/icecast.org/loadtest/LoadTest2_UserSystem_vs_sources_and_listeners.png
websites/icecast.org/loadtest/LoadTest2_VSZ_vs_sources.png
websites/icecast.org/loadtest/LoadTest2_VSZ_vs_sources_and_listeners.png
websites/icecast.org/loadtest/LoadTest3_Icecast_vs_Shoutcast_SystemCPU.png
websites/icecast.org/loadtest/LoadTest3_Icecast_vs_Shoutcast_UserCPU.png
websites/icecast.org/loadtest/LoadTest3_Icecast_vs_Shoutcast_freeMemory.png
websites/icecast.org/loadtest1.php
websites/icecast.org/loadtest2.php
websites/icecast.org/loadtest3.php
Removed:
websites/icecast.org/loadtest.php
Modified:
websites/icecast.org/news.php
Log:
new set of load tests....
Added: websites/icecast.org/loadtest/LoadTest2_IdleCPU_vs_sources.png
===================================================================
(Binary files differ)
Property changes on: websites/icecast.org/loadtest/LoadTest2_IdleCPU_vs_sources.png
___________________________________________________________________
Name: svn:mime-type
+ application/octet-stream
Added: websites/icecast.org/loadtest/LoadTest2_UserSystemIOWait_vs_sources.png
===================================================================
(Binary files differ)
Property changes on: websites/icecast.org/loadtest/LoadTest2_UserSystemIOWait_vs_sources.png
___________________________________________________________________
Name: svn:mime-type
+ application/octet-stream
Added: websites/icecast.org/loadtest/LoadTest2_UserSystem_vs_sources_and_listeners.png
===================================================================
(Binary files differ)
Property changes on: websites/icecast.org/loadtest/LoadTest2_UserSystem_vs_sources_and_listeners.png
___________________________________________________________________
Name: svn:mime-type
+ application/octet-stream
Added: websites/icecast.org/loadtest/LoadTest2_VSZ_vs_sources.png
===================================================================
(Binary files differ)
Property changes on: websites/icecast.org/loadtest/LoadTest2_VSZ_vs_sources.png
___________________________________________________________________
Name: svn:mime-type
+ application/octet-stream
Added: websites/icecast.org/loadtest/LoadTest2_VSZ_vs_sources_and_listeners.png
===================================================================
(Binary files differ)
Property changes on: websites/icecast.org/loadtest/LoadTest2_VSZ_vs_sources_and_listeners.png
___________________________________________________________________
Name: svn:mime-type
+ application/octet-stream
Added: websites/icecast.org/loadtest/LoadTest3_Icecast_vs_Shoutcast_SystemCPU.png
===================================================================
(Binary files differ)
Property changes on: websites/icecast.org/loadtest/LoadTest3_Icecast_vs_Shoutcast_SystemCPU.png
___________________________________________________________________
Name: svn:mime-type
+ application/octet-stream
Added: websites/icecast.org/loadtest/LoadTest3_Icecast_vs_Shoutcast_UserCPU.png
===================================================================
(Binary files differ)
Property changes on: websites/icecast.org/loadtest/LoadTest3_Icecast_vs_Shoutcast_UserCPU.png
___________________________________________________________________
Name: svn:mime-type
+ application/octet-stream
Added: websites/icecast.org/loadtest/LoadTest3_Icecast_vs_Shoutcast_freeMemory.png
===================================================================
(Binary files differ)
Property changes on: websites/icecast.org/loadtest/LoadTest3_Icecast_vs_Shoutcast_freeMemory.png
___________________________________________________________________
Name: svn:mime-type
+ application/octet-stream
Deleted: websites/icecast.org/loadtest.php
===================================================================
--- websites/icecast.org/loadtest.php 2005-11-15 00:36:34 UTC (rev 10370)
+++ websites/icecast.org/loadtest.php 2005-11-15 03:09:28 UTC (rev 10371)
@@ -1,151 +0,0 @@
-<? include "common/header.php"; ?>
-<h2>Icecast Load Test Results (by oddsock)</h2>
-<div class="roundcont">
-<div class="roundtop">
-<img src="/images/corner_topleft.jpg" class="corner" style="display: none" />
-</div>
-<br>
-<div class="newscontent">
-<h3>Description</h3>
-<br></br>
-<p>The purpose of this document is to report the findings of a load test that was performed on the 2.3 RC3 version of Icecast.</p>
-<p>This load test was not designed to be a complete and total analysis of how icecast behaves under load, but rather to provide
-some insight into what happens to icecast when load (i.e. listeners) increases.</p>
-<p>The main goal was to answer the following questions :<br></br><br></br>
-* <b>Is there a maximum number of listeners that icecast can reliably handle ?</b><br></br>
-* <b>What kind of CPU utilization occurs in icecast configurations with large numbers of listeners ?</b><br></br>
-* <b>What does the network utilization look like for large numbers of listeners ?</b><br></br>
-</p>
-
-<h3>Test Hardware</h3>
-<p>In order to test a large number of listeners, I knew that the network would be the first limiting factor. So for this
-reason, I performed this test using gigabit ethernet (1000Mbit/sec) cards in each of the machines. </p>
-<p>There were 3 machines used in the test, 1 to run icecast (and icecast only), and 2 to serve as "listeners".</p>
-<p>The specs of each of the boxes were identical and looked like :</p>
-
-<p>
-Server: <b>Dell Poweredge 1850</b><br></br>
-Memory: <b>2GB</b><br></br>
-CPU : <b>3GHz Xeon (single processors running in hyperthreaded mode)</b><br></br>
-Network: <b>2 GBit Ethernet (although only one was used for the testing) connected via a GBit ethernet switch.</b><br></br>
-OS : <b>Red Hat Enterprise Linux 3 (2.4 kernel)</b><br></br>
-</p>
-<h3>The Load Test</h3>
-<p>We simulated listeners with the following script:</p>
-<p>
-<pre>
- #!/bin/sh
- #
- # run concurrent curls which download from URL to /dev/null. output total
- # and average counts to results directory.
- #
-
- # max concurrent curls to kick off
- max=7000
- # how long to stay connected (in seconds)
- duration=99999999
- # how long to sleep between each curl, can be decimal 0.5
- delay=10
- # url to request from
- URL=http://dwrac1:8500/stream.ogg
-
-
- #####
- #mkdir -p results
- echo > results
- while /bin/true
- do
- count=1
- while [ $count -le $max ]
- do
- curl -o /dev/null -m $duration -s -w "bytes %{size_download} avg %{speed_download} " "$URL" >> results &
- curl -o /dev/null -m $duration -s -w "bytes %{size_download} avg %{speed_download} " "$URL" >> results &
- curl -o /dev/null -m $duration -s -w "bytes %{size_download} avg %{speed_download} " "$URL" >> results &
- curl -o /dev/null -m $duration -s -w "bytes %{size_download} avg %{speed_download} " "$URL" >> results &
- curl -o /dev/null -m $duration -s -w "bytes %{size_download} avg %{speed_download} " "$URL" >> results &
- curl -o /dev/null -m $duration -s -w "bytes %{size_download} avg %{speed_download} " "$URL" >> results &
- curl -o /dev/null -m $duration -s -w "bytes %{size_download} avg %{speed_download} " "$URL" >> results &
- curl -o /dev/null -m $duration -s -w "bytes %{size_download} avg %{speed_download} " "$URL" >> results &
- curl -o /dev/null -m $duration -s -w "bytes %{size_download} avg %{speed_download} " "$URL" >> results &
- curl -o /dev/null -m $duration -s -w "bytes %{size_download} avg %{speed_download} " "$URL" >> results &
- curl -o /dev/null -m $duration -s -w "bytes %{size_download} avg %{speed_download} " "$URL" >> results &
- [ "$delay" != "" ] && sleep $delay
- let count=$count+10
- done
- wait
- done
- echo done
-</pre>
-</p>
-<p>
-This script was run on each of the 2 load driving nodes. It incrementally adds listeners to the icecast server at regular intervals;
-10 listeners would be added every 10 seconds (with 2 machines, that's a total of 20 listeners every 10 seconds). I ran this script for about 2 hours
-before it ended up forking too many processes for the load driving boxes.
-</p>
-<p>
-A note about configuration of the icecast box and the load drivers. For the load test I used a stock RHEL3 kernel and bumped up the
-max file descriptors to 30000 (default was 1024).
-</p>
-<p>
-In addition to the load driving script, I used a custom tool that I wrote for my own performance testing. In this case, I just used
-the "data gathering" and graphing portion of it. With this tool I captured basic system stats from each machine along with the listener count on
-the icecast server. These stats were all correlated together and graphed to represent the data.
-</p>
-<p>
-For this test, only one stream was being sent to icecast, and it was being sourced using an OddcastV3 client sending a ~12kbps vorbis stream (Aotuvb4, quality -2, 11025Hz, Mono).
-</p>
-
-
-<h3>Results</h3>
-<p>
-The first graph shows user and system CPU usage for the box as a function of listeners.
-</p>
-<p>
-<img src="loadtest/cpu.jpg"><br></br>
-</p>
-<p>
-From this graph, you can see that the maximum number of listeners that I could simulate was about 14000. It is important to note
-that this is <em>not</em> a limitation of icecast, but rather of the hardware that I used. The total cpu utilization
-is about 20% at 14000 listeners, with a breakdown of ~15% system and 5% user CPU. System and user CPU
-utilization both follow a fairly linear upward progression, so assuming that trend continues, and that you don't want to run at
-more than about 80% cpu utilization, a single icecast instance on similarly sized hardware should handle roughly
-14000 * 4 = 56000 listeners.
-</p>
-<p>
-Turning to network activity, the next graph shows network packets sent out by the icecast box as a function of listeners.
-</p>
-<p>
-<img src="loadtest/network.jpg"><br></br>
-</p>
-<p>
-It can be seen that network packets increase linearly with listeners just as CPU did. Note that these metrics are in "packets", not in "bytes",
-so it's not exactly clear at what point the network would become saturated. However, given that each client is retrieving a 12kbps stream, with 14000
-clients that would be 14000 * 12kbps = 168,000kbps (168Mbps) + ~10% TCP overhead = ~184Mbps. So we had plenty of room for more listeners with a GBit card.
-And using our derived max of 56000 listeners and assuming the 12kbps stream rate that we used, that would mean:<br></br>
-56000 * 12kbps = 672Mbps + ~10% TCP overhead = ~740Mbps.<br></br>
-Note that most broadcasters don't use 12kbps for a stream, so I would say that for MOST broadcasters, you will almost always be limited by your
-network interface: definitely if you are using 10/100 equipment, and quite possibly even if using GBit equipment.
-</p>
-<h3>Conclusion</h3>
-<p>
-So to answer our questions : <br></br><br></br>
-* <b>Is there a maximum number of listeners that icecast can reliably handle ?</b><br></br>
-<i>Well, we know that it can definitely handle 14000 concurrent users given a similarly sized hardware configuration.
-We can conclude that icecast itself can handle even more concurrent users, with the main limitation most likely being
-the network interface.</i><br></br>
-* <b>What kind of CPU utilization occurs in icecast configurations with large numbers of listeners ?</b><br></br>
-<i>Looks like icecast follows a rather linear progression of cpu utilization, with about 1/4 of the total CPU time spent in
-user time and the other 3/4 in system time. For 14000 concurrent users I saw 20% total utilization, with a breakdown
-of 5% user and 15% system.</i><br></br>
-<p>
-<Br></br>
-<Br></br>
-<Br></br>
-- oddsock : Wed Sep 21 16:28:23 CDT 2005
-</p>
-</div>
-<div class="roundbottom">
-<img src="/images/corner_bottomleft.jpg" class="corner" style="display: none" />
-</div>
-</div>
-<br><br>
Added: websites/icecast.org/loadtest.php
===================================================================
--- websites/icecast.org/loadtest.php 2005-11-15 00:36:34 UTC (rev 10370)
+++ websites/icecast.org/loadtest.php 2005-11-15 03:09:28 UTC (rev 10371)
@@ -0,0 +1,36 @@
+<? include "common/header.php"; ?>
+<h2>Icecast Load Tests (by oddsock)</h2>
+<div class="roundcont">
+<div class="roundtop">
+<img src="/images/corner_topleft.jpg" class="corner" style="display: none" />
+</div>
+<br>
+<div class="newscontent">
+<h3>Description</h3>
+<br></br>
+<p>This page contains links to the various load tests that were performed. If you have any detailed questions regarding
+these results, please feel free to stop by #icecast.</p>
+<br></br>
+<br></br>
+<p>
+<center>
+<table width="100%">
+<tr><td></td><td></td><td><b>Load test</b></td><td><b>Date performed</b></td><td><b>Icecast version</b></td></tr>
+<tr><td> </td><td><a href="loadtest1.php">view</a></td><td>Maximum listener test</td><td>September 22, 2005</td><td>2.3.0 RC3</td></tr>
+<tr><td> </td><td><a href="loadtest2.php">view</a></td><td>Maximum source/listener test</td><td>November 12, 2005</td><td>2.3.0 trunk (as of 11/14/2005)</td></tr>
+<tr><td> </td><td><a href="loadtest3.php">view</a></td><td>Icecast / Shoutcast comparison</td><td>November 14, 2005</td><td>2.3.0 trunk (as of 11/14/2005) / shoutcast-1-9-5 linux glibc6</td></tr>
+</table>
+</center>
+</p>
+<p>
+<Br></br>
+<Br></br>
+<Br></br>
+- oddsock : Mon Nov 14 12:46:37 CST 2005
+</p>
+</div>
+<div class="roundbottom">
+<img src="/images/corner_bottomleft.jpg" class="corner" style="display: none" />
+</div>
+</div>
+<br><br>
Copied: websites/icecast.org/loadtest1.php (from rev 10048, websites/icecast.org/loadtest.php)
Added: websites/icecast.org/loadtest2.php
===================================================================
--- websites/icecast.org/loadtest2.php 2005-11-15 00:36:34 UTC (rev 10370)
+++ websites/icecast.org/loadtest2.php 2005-11-15 03:09:28 UTC (rev 10371)
@@ -0,0 +1,233 @@
+<? include "common/header.php"; ?>
+<h2>Icecast Load Test Results #2 (by oddsock)</h2>
+<div class="roundcont">
+<div class="roundtop">
+<img src="/images/corner_topleft.jpg" class="corner" style="display: none" />
+</div>
+<br>
+<div class="newscontent">
+<h3>Description</h3>
+<br></br>
+<p>
+Ok, here we go with another icecast load test. In this test we are going to try to answer the following questions :<br></br>
+* <b>How does adding additional sources affect icecast ?
+ We will look at a few metrics to try to answer this as completely as possible.</b><br></br>
+* <b>Are there any limits to the number of sources that can be hosted on a single icecast instance ?</b><br></br>
+* <b>What happens if we add a bunch of listeners to a high-source-count setup ?</b><br></br>
+</p>
+
+<h3>Test Hardware</h3>
+<p>I used the same hardware as in the <a href="loadtest1.php">first load test</a>.</p>
+<p>
+2 of these : (one for running icecast, and one for sources/listeners)<br>
+Server: <b>Dell Poweredge 1850</b><br></br>
+Memory: <b>2GB</b><br></br>
+CPU : <b>3GHz Xeon (single processors running in hyperthreaded mode)</b><br></br>
+Network: <b>2 GBit Ethernet (although only one was used for the testing) connected via a GBit ethernet switch.</b><br></br>
+OS : <b>Red Hat Enterprise Linux 3 (2.4 kernel)</b><br></br>
+</p>
+<h3>The Load Test(s)</h3>
+<p>For this particular load test we used the same listener load script from the <a href="loadtest1.php">first load test</a>,
+and added a new script for creating source clients.</p>
+<p>We created mountpoints with the following script:</p>
+<p>
+<pre>
+ #!/bin/sh
+ #
+ max=700
+ # how long to sleep between each set of ezstreams, can be decimal 0.5
+ delay=2
+
+ echo > out
+ count=1
+ while [ "$count" -le "$max" ]
+ do
+ ezstream -c conf/ezstream_mp3_$count.xml >> out &
+ ezstream -c conf/ezstream_vorbis_$count.xml >> out &
+ let count=$count+1
+ ezstream -c conf/ezstream_mp3_$count.xml >> out &
+ ezstream -c conf/ezstream_vorbis_$count.xml >> out &
+ let count=$count+1
+ ezstream -c conf/ezstream_mp3_$count.xml >> out &
+ ezstream -c conf/ezstream_vorbis_$count.xml >> out &
+ let count=$count+1
+ ezstream -c conf/ezstream_mp3_$count.xml >> out &
+ ezstream -c conf/ezstream_vorbis_$count.xml >> out &
+ let count=$count+1
+ ezstream -c conf/ezstream_mp3_$count.xml >> out &
+ ezstream -c conf/ezstream_vorbis_$count.xml >> out &
+ let count=$count+1
+ [ "$delay" != "" ] && sleep $delay
+ done
+ wait
+ echo done
+</pre>
+</p>
+<p>
+This script creates 10 new source client connections (5 mp3 and 5 vorbis) every 2 seconds. The config files (ezstream_*.xml)
+were all precreated and were all identical with the exception of the mountpoint (which was a counter from 1 to 1400).
+These ezstream instances looped the same file over and over (metadata updates were performed in between each loop - this
+is automatically done by ezstream).
+</p>
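+<p>
+As an illustration of how such a set of config files could be pre-generated (a sketch only: the template file names and
+the @MOUNT@ placeholder below are assumptions for illustration, not the actual files used):
+</p>
+<p>
+<pre>
+ #!/bin/sh
+ #
+ # hypothetical sketch: stamp out the per-mount ezstream configs from two
+ # hand-written templates, replacing a @MOUNT@ placeholder with the mount
+ # counter (1..700 for each format)
+ max=700
+ count=1
+ while [ "$count" -le "$max" ]
+ do
+     sed "s/@MOUNT@/$count/" ezstream_mp3_template.xml > conf/ezstream_mp3_$count.xml
+     sed "s/@MOUNT@/$count/" ezstream_vorbis_template.xml > conf/ezstream_vorbis_$count.xml
+     let count=$count+1
+ done
+</pre>
+</p>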
+
+<p><b><font size=+1 color="yellow">First Test: Source Client Ramp Up To 1400</font></b></p>
+
+<p>
+The idea here is to ramp up (in a regular fashion) the number of source clients to a value of 1400.
+Why 1400 ? Well, 1400 sources ended up with a VSZ of 1GB, and while I probably could have gone higher, the purpose of
+this test was not to see the max number of sources that *can* be attached, but rather to see whether a large number of them can be
+attached without icecast saturating somewhere, and what kind of memory/cpu is taken up by such a "large number of sources" test.
+</p>
+<p>
+So let us try to determine how much memory is taken by each source client.
+We will measure this by looking at the VSZ value for icecast (as reported by ps -aux).
+</p>
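+<p>
+As a rough illustration only (the actual numbers below came from a separate monitoring tool), the VSZ of the icecast
+process could be sampled with something like the following; the log file name is just an example:
+</p>
+<p>
+<pre>
+ #!/bin/sh
+ #
+ # hypothetical sketch: log a timestamp and the VSZ (in KB) of the icecast
+ # process every 10 seconds
+ while /bin/true
+ do
+     echo "`date +%s` `ps -C icecast -o vsz=`" >> vsz.log
+     sleep 10
+ done
+</pre>
+</p>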
+
+<p>The first graph shows VSZ and source count.</p>
+<img src="loadtest/LoadTest2_VSZ_vs_sources.png"><br></br>
+<p>
+<br></br>
+So let's take a look at some of the raw data. Here are the VSZ and source count values, along with a delta
+representing the difference between the current and previous VSZ value divided by the number of sources that were added.
+This effectively represents the KB overhead per source. <i>Note that VSZ is reported in KB.</i>
+</p>
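+<p>
+For example, using the first two rows of the table below: going from 0 to 20 sources took VSZ from 10648KB to 21512KB,
+so the delta per source is (21512 - 10648) / 20 = 543.2KB.
+</p>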
+<br></br>
+<center>
+
+<table border=1 width="80%" cellspacing=0 cellpadding=0>
+<tr><td bgcolor="#111111"><b>VSZ Icecast</b></td><td bgcolor="#111111"><b>Number of Sources</b></td><td bgcolor="#111111"><b>Delta per source</b></td></tr>
+<tr><td>10648</td><td>0.00</td><td></td></tr>
+<tr><td>21512</td><td>20.00</td><td>543.20</td></tr>
+<tr><td>28012</td><td>30.00</td><td>650.00</td></tr>
+<tr><td>41836</td><td>50.00</td><td>691.20</td></tr>
+<tr><td>48308</td><td>60.00</td><td>647.20</td></tr>
+<tr><td>61388</td><td>80.00</td><td>654.00</td></tr>
+<tr><td>68008</td><td>90.00</td><td>662.00</td></tr>
+<tr><td>79772</td><td>110.00</td><td>588.20</td></tr>
+<tr><td>88440</td><td>120.00</td><td>866.80</td></tr>
+<tr><td>100336</td><td>140.00</td><td>594.80</td></tr>
+<tr><td>107704</td><td>150.00</td><td>736.80</td></tr>
+<tr><td>118568</td><td>170.00</td><td>543.20</td></tr>
+<tr><td>128520</td><td>180.00</td><td>995.20</td></tr>
+<tr><td>140900</td><td>200.00</td><td>619.00</td></tr>
+<tr><td>147112</td><td>210.00</td><td>621.20</td></tr>
+<tr><td>158020</td><td>230.00</td><td>545.40</td></tr>
+<tr><td>168340</td><td>240.00</td><td>1032.00</td></tr>
+<tr><td>180912</td><td>260.00</td><td>628.60</td></tr>
+<tr><td>186864</td><td>270.00</td><td>595.20</td></tr>
+<tr><td>200940</td><td>290.00</td><td>703.80</td></tr>
+<tr><td>207288</td><td>300.00</td><td>634.80</td></tr>
+<tr><td>219812</td><td>320.00</td><td>626.20</td></tr>
+<tr><td>227348</td><td>330.00</td><td>753.60</td></tr>
+<tr><td>234200</td><td>350.00</td><td>342.60</td></tr>
+<tr><td>247384</td><td>360.00</td><td>1318.40</td></tr>
+<tr><td>254232</td><td>380.00</td><td>342.40</td></tr>
+<tr><td>267556</td><td>400.00</td><td>666.20</td></tr>
+<tr><td>274164</td><td>410.00</td><td>660.80</td></tr>
+<tr><td>287348</td><td>430.00</td><td>659.20</td></tr>
+<tr><td>294196</td><td>440.00</td><td>684.80</td></tr>
+<tr><td>306760</td><td>460.00</td><td>628.20</td></tr>
+<tr><td>313476</td><td>470.00</td><td>671.60</td></tr>
+<tr><td>327072</td><td>490.00</td><td>679.80</td></tr>
+<tr><td>334944</td><td>500.00</td><td>787.20</td></tr>
+<tr><td>346900</td><td>520.00</td><td>597.80</td></tr>
+<tr><td>354636</td><td>530.00</td><td>773.60</td></tr>
+<tr><td>366304</td><td>550.00</td><td>583.40</td></tr>
+<tr><td>374184</td><td>560.00</td><td>788.00</td></tr>
+<tr><td>387084</td><td>580.00</td><td>645.00</td></tr>
+<tr><td>394820</td><td>600.00</td><td>386.80</td></tr>
+<tr><td>406204</td><td>610.00</td><td>1138.40</td></tr>
+<tr><td>414984</td><td>630.00</td><td>439.00</td></tr>
+<tr><td>427420</td><td>640.00</td><td>1243.60</td></tr>
+<tr><td>434136</td><td>660.00</td><td>335.80</td></tr>
+<tr><td>447712</td><td>670.00</td><td>1357.60</td></tr>
+<tr><td>454556</td><td>690.00</td><td>342.20</td></tr>
+<tr><td>466984</td><td>700.00</td><td>1242.80</td></tr>
+<tr><td>474740</td><td>720.00</td><td>387.80</td></tr>
+<tr><td>488184</td><td>740.00</td><td>672.20</td></tr>
+<tr><td></td><td></td><td></td></tr>
+<tr><td></td><td bgcolor="#111111"><b>Average Delta Per Source</b></td><td bgcolor="#111111"><b>693.91</b></td></tr>
+</table>
+
+</center>
+
+<p>Ok, that's probably enough data to make a conclusion. It looks like each new source client
+adds (on average) 672.20KB of memory to the icecast process. This
+means we'd expect an increase of roughly 1400 * 0.672MB = ~0.94GB in the VSZ of icecast compared to its initial value. From our graph we can see
+an initial value of ~15MB and a final value of ~0.95GB, a net increase in the range we were expecting.
+</p>
+
+<p>Next, let us look at the CPU used by icecast. The following graph shows the CPU utilization (user/system/IO wait) for the box running icecast.
+It is important
+to note that only icecast was running on this box, so any cpu activity is associated with icecast.
+Also note that I initially used PCT_CPU as returned by 'ps -augx', however it was determined that
+this value was not representative of the actual PCT_CPU of the process. Thus I opted to use the cpu
+utilization of the machine itself to get a more accurate representation.
+</p>
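+<p>
+(Again, the graph data came from a separate tool; as a rough sketch, equivalent machine-level numbers could be captured
+with something as simple as the following, where the log file name is just an example.)
+</p>
+<p>
+<pre>
+ #!/bin/sh
+ #
+ # hypothetical sketch: record machine-level CPU (user/system/idle/iowait)
+ # every 10 seconds; the exact vmstat column layout differs between procps
+ # versions, so the whole line is kept and the columns picked out later
+ vmstat 10 | while read line
+ do
+     echo "`date +%s` $line" >> cpu.log
+ done
+</pre>
+</p>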
+<center>
+<img src="loadtest/LoadTest2_UserSystemIOWait_vs_sources.png"><br></br>
+</center>
+<br></br>
+<p>From this graph you can see that total cpu went from about 5% (2.5% user / 2.5% system) up to about 20% (10% user / 10% system)
+over the course of the sources attaching.
+There is a general upward trend in CPU usage, however it is fairly regular and relatively small.
+So we'd conclude that icecast does a very efficient job of handling this number of sources.</p>
+<br></br>
+<br></br>
+<p><b><font size=+1 color="yellow">Second Test: Adding Listeners to the 1400 source client config</font></b></p>
+<p>This test is really a combination of the original max-listener test and the source ramp-up above, to
+see how icecast reacts with a good number of listeners and sources combined. To start off, source
+clients were ramped up to 1400 (just like in the <b>First Test</b>), and then I ramped up to 5600 concurrent listeners.
+These listeners were randomly distributed across the 1400 attached mountpoints, connected, and stayed
+connected for the duration of the test. Why 5600 ? The machine that I was running listeners on ended up running out of resources
+(number of processes) at that point.</p>
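+<p>
+One way to get that random distribution (a sketch only, not necessarily the exact script used; the mountpoint naming is
+assumed to simply be the counter 1 to 1400) is a small variation of the listener script from the first load test:
+</p>
+<p>
+<pre>
+ #!/bin/sh
+ #
+ # hypothetical sketch: like the load test #1 listener script, but each curl
+ # picks a random mountpoint ($RANDOM assumes a bash-compatible /bin/sh)
+ max=2800
+ duration=99999999
+ delay=10
+
+ count=1
+ while [ $count -le $max ]
+ do
+     i=0
+     while [ $i -lt 10 ]
+     do
+         mount=$(( (RANDOM % 1400) + 1 ))
+         curl -o /dev/null -m $duration -s "http://dwrac1:8500/$mount" >> results &
+         let i=$i+1
+     done
+     let count=$count+10
+     [ "$delay" != "" ] && sleep $delay
+ done
+ wait
+</pre>
+</p>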
+<p>Let us first take a look at the icecast process VSZ (Virtual Memory Size).</p>
+<br></br>
+<center>
+<img src="loadtest/LoadTest2_VSZ_vs_sources_and_listeners.png"><br></br>
+</center>
+<p>In the above diagram, the sources are represented by a red line and listeners by a blue line (the red shaded area is the VSZ of icecast).
+As can be seen, the VSZ of
+the icecast process increases significantly with the addition of new sources (something we found and discussed
+above in this document) but remains flat upon the addition of new listeners. This is good news for those
+of you who want a low memory footprint but still want a large number of listeners.</p>
+<br></br>
+<p>Let us now look at machine CPU as a function of these two metrics (source count and listener count). The next graph shows
+the machine's CPU utilization (User/System/IOWait) as reported by vmstat.</p>
+<br></br>
+<center>
+<img src="loadtest/LoadTest2_UserSystem_vs_sources_and_listeners.png"><br></br>
+</center>
+<p>The effects are fairly subtle in this diagram, but what is shown is a steadily increasing cpu
+utilization during the source client ramp up, and a similar increase in CPU utilization
+during the listener ramp up. The slope of the line representing the increase in CPU utilization seems to be steeper
+for the source ramp up than for the listener ramp up. This suggests that the addition of new sources costs
+more CPU than the addition of new listeners, which is exactly what we would expect.</p>
+<p>So what did we learn from this test ? We saw that icecast memory does not appreciably increase with the addition of
+listeners (as it does with the addition of sources). We also saw that CPU utilization is slightly higher for processing
+sources than listeners.</p>
+<h3>Conclusions</h3>
+<p>
+<center><i><b>How does adding additional sources affect icecast ?</b></i></center><br></br>
+<p>We found that each source client adds about 672KB to the virtual size of the icecast process. This ends up being
+the limiting factor when determining the number of sources you can host on a single icecast instance.</p>
+<center><i><b>Are there any limits to the number of sources that can be hosted on a single icecast instance ?</b></i></center><br></br>
+<p>Certainly there are; I ran into a limit of 1400 on the hardware I used, and with additional RAM I would have
+been able to increase that value.</p>
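+<p>As a rough back-of-the-envelope estimate (not something that was measured): at about 672KB of VSZ per source,
+the 2GB of RAM in this box works out to roughly 2048MB / 0.672MB = ~3000 sources before memory alone becomes the wall.</p>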
+<center><i><b>What happens if we add a bunch of listeners to a high-source-count setup ?</b></i></center><br></br>
+<p>We found that listeners do not seem to affect icecast too much with regard to memory, and that they add
+less load to the CPU than sources do. In the case of listeners, the network will be your limiting factor (which we
+found and reported in our previous load test).</p>
+</p>
+<Br></br>
+<Br></br>
+<Br></br>
+- oddsock : Sat Nov 12 10:44:00 CST 2005 - © 2005 Ed Zaleski.
+</p>
+</div>
+<div class="roundbottom">
+<img src="/images/corner_bottomleft.jpg" class="corner" style="display: none" />
+</div>
+</div>
+<br><br>
Added: websites/icecast.org/loadtest3.php
===================================================================
--- websites/icecast.org/loadtest3.php 2005-11-15 00:36:34 UTC (rev 10370)
+++ websites/icecast.org/loadtest3.php 2005-11-15 03:09:28 UTC (rev 10371)
@@ -0,0 +1,107 @@
+<? include "common/header.php"; ?>
+<h2>Icecast Load Test Results #3 (by oddsock)</h2>
+<div class="roundcont">
+<div class="roundtop">
+<img src="/images/corner_topleft.jpg" class="corner" style="display: none" />
+</div>
+<br>
+<div class="newscontent">
+<h3>Description</h3>
+<br></br>
+<p>
+This load test was performed to get an idea as to how well icecast performs relative to Shoutcast. The test
+used for this comparison is the "Max Listener" test that we used for load test #1. In order to get the most
+accurate results, I decided to re-run the max listener test completely for icecast (instead of just taking the
+numbers from the previous test).
+</p>
+
+<h3>Test Hardware</h3>
+<p>I used the same hardware as in the <a href="loadtest1.php">first load test</a> and <a href="loadtest2.php">second load test</a>.</p>
+<p>
+3 of these : (one for running icecast, and 2 for listeners)<br>
+Server: <b>Dell Poweredge 1850</b><br></br>
+Memory: <b>2GB</b><br></br>
+CPU : <b>3GHz Xeon (single processors running in hyperthreaded mode)</b><br></br>
+Network: <b>2 GBit Ethernet (although only one was used for the testing) connected via a GBit ethernet switch.</b><br></br>
+OS : <b>Red Hat Enterprise Linux 3 (2.4 kernel)</b><br></br>
+</p>
+<p>In both tests (icecast and Shoutcast) I used a single 48kbps mp3 stream sourced from Oddcast running on an XP machine on my local
+LAN.</p>
+<h3>The Load Test</h3>
+<p>For this particular load test we used the same listener load script from the <a href="loadtest1.php">first load test</a>.
+</p>
+<p>
+I ran this script on both of the listener nodes (each could start a max of ~7000 listeners) for a combined listener count of
+~14000. As in the previous tests, listeners were added in a "ramp-up" fashion up to the total of 14000 listeners.
+</p>
+<p>
+After running this test against both icecast and Shoutcast, I chose the following stats to compare for each server :<br></br>
+<br></br>
+* User CPU
+<br></br>
+* System CPU
+<br></br>
+* Free Memory
+<br></br>
+</p>
+<p>These are machine-level statistics, and as in the previous tests, only icecast/Shoutcast was running on the main node,
+so all system metrics can be attributed to icecast/Shoutcast.</p>
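+<p>
+(As before, these stats were gathered with a separate tool; as a rough sketch, free memory could be logged with something
+as simple as the following, where the log file name is just an example.)
+</p>
+<p>
+<pre>
+ #!/bin/sh
+ #
+ # hypothetical sketch: log free memory (in KB) once every 10 seconds
+ # while the test runs
+ while /bin/true
+ do
+     echo "`date +%s` `free -k | grep '^Mem:'`" >> mem.log
+     sleep 10
+ done
+</pre>
+</p>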
+
+<h4>User CPU</h4>
+<p>The first graph we will look at is User CPU. Listeners are along the X axis and User CPU along the Y.</p>
+<center>
+<img src="loadtest/LoadTest3_Icecast_vs_Shoutcast_UserCPU.png"><br></br>
+</center>
+<p>
+This graph shows that icecast certainly uses less User CPU than Shoutcast, taking up about 50% less cpu for
+a similar listener count. Icecast 1, Shoutcast 0.
+</p>
+<br></br>
+<br></br>
+<br></br>
+<h4>System CPU</h4>
+<p>Next, we will look at System CPU.</p>
+<center>
+<img src="loadtest/LoadTest3_Icecast_vs_Shoutcast_SystemCPU.png"><br></br>
+</center>
+<p>
+This one is a little bit harder to draw a conclusion about. Icecast and Shoutcast are pretty much identical System CPU-wise until about
+10,000 listeners, where we see a slight increase in System CPU time for icecast relative to Shoutcast. However, if we look at the overall
+graph of system CPU for icecast, we see a very linear progression. With Shoutcast we see a plateau in System CPU at right about 10,000
+listeners. Without going into more analysis, I would conclude from this that Shoutcast is most likely not keeping up with the
+listeners appropriately, and that is why we do not see a continued increase in its system CPU. So we'll call this one a draw.
+</p>
+<br></br>
+<br></br>
+<br></br>
+<h4>Free Memory</h4>
+<p>And finally...Free Memory</p>
+<center>
+<img src="loadtest/LoadTest3_Icecast_vs_Shoutcast_freeMemory.png"><br></br>
+</center>
+<p>
+Wow, this one is certainly striking. Shoutcast must allocate a major chunk of memory for each listener or (shudder) allocate
+a thread for each listener. Either way, icecast clearly wins this one.
+</p>
+<br></br>
+<br></br>
+<br></br>
+
+<h3>Conclusions</h3>
+<p>
+<center><i><b>How does icecast compare to Shoutcast in terms of handling listeners ?</b></i></center><br></br>
+<p>
+Well, it looks like icecast handles them better all around, using about 50% of the User CPU of Shoutcast, about the same System CPU, and certainly
+much, much less memory per listener.
+</p>
+<Br></br>
+<Br></br>
+<Br></br>
+- oddsock : Mon Nov 14 12:43:56 CST 2005 - © 2005 Ed Zaleski.
+</p>
+</div>
+<div class="roundbottom">
+<img src="/images/corner_bottomleft.jpg" class="corner" style="display: none" />
+</div>
+</div>
+<br><br>
Modified: websites/icecast.org/news.php
===================================================================
--- websites/icecast.org/news.php 2005-11-15 00:36:34 UTC (rev 10370)
+++ websites/icecast.org/news.php 2005-11-15 03:09:28 UTC (rev 10371)
@@ -4,6 +4,28 @@
<img alt="" src="/images/corner_topleft.jpg" class="corner" style="display: none" />
</div>
<div class="newscontent">
+<h3>More Load Test Reports</h3>
+<p>
+We've done another round of load testing, this time going through a "large number of sources" test
+as well as a comparison test with Shoutcast.
+</p>
+<p>All our load testing reports can be found <a href="loadtest.php">here</a>.</p>
+<div class="poster">
+Posted November 14, 2005 by oddsock
+</div>
+</div>
+<div class="roundbottom">
+<img alt="" src="/images/corner_bottomleft.jpg" class="corner" style="display: none" />
+</div>
+</div>
+<br>
+<br>
+
+<div class="roundcont">
+<div class="roundtop">
+<img alt="" src="/images/corner_topleft.jpg" class="corner" style="display: none" />
+</div>
+<div class="newscontent">
<h3>Icecast Release 2.3.0</h3>
<p>We are pleased to announce the next release of Icecast.
</p>