[vorbis-dev] Parallelism

Jeff Squyres jsquyres at lsc.nd.edu
Fri Aug 18 15:43:37 PDT 2000



On Fri, 18 Aug 2000, Gregory Maxwell wrote:

> I don't understand why you would need multiple cpus doing encoding if
> all you are encoding is a single track or single CD. It's just silly,
> and a waste of time. People who are already doing bulk encoding
> (encoded over 1000 CDs into mp3, about to begin again redoing it all
> w/ vorbis) are perfectly happy with the one track or one disk to cpu

I disagree (and we may well have to agree to disagree).

Most users are lazy.  I'd be willing to bet that the majority of users
would rather do as little as possible and still get their results faster
and better.  More power! [Tim Allen grunts]  If a user can find a
multi-threaded or otherwise parallel program that will encode their CDs
faster, they'll use it rather than kludging up pmake and/or some
scripts, or keeping multiple windows open on a screen.

And what about the encode-a-single-CD scenario?  If you could encode the
entire CD across multiple CPUs with a single program (no scripts, no
pmake, no multiple windows, etc.), wouldn't you prefer that?  Most users
are not developers -- they want a minimum of fuss, and most wouldn't even
know how to set any of that stuff up in the first place.

No one has even mentioned the PR value of all this.  Consider how fast a
multi-threaded encoder would be on a 4-way SMP box compared to any
serial MP3 encoder out there (yes, apples and oranges, but since when
has the press cared? ;-).  Or mix in another buzzword -- Beowulf -- with
vorbis.  Vorbis has a lot to compete with in MP3, and a lot of that has to
do with public perception (regardless of the facts).  If vorbis has
*blazing fast* encoders (and who cares why or how they work), that's a big
PR checkmark IMHO.

> method. The human loading the disks into the drives and checking the
> CDDB results (and doing complete entry in the frequent case where the
> CDDB data is missing or very inaccurate) is the limiting factor and
> all the CPUs and MPI parallel encoders in the world can't help.

CDDB entries are not a limiting factor -- bad entries can even be fixed
after the fact (write a 10-line perl script).  Indeed, the categorization
of the resulting encoded files is orthogonal to the actual encoding
process.  It can be as time consuming as the user wants: if you want
perfection, it will take a lot of time; if you take just the raw CDDB
output -- buggy as it may be -- it takes no time at all.  So pointing out
that parallelism won't speed up the categorizing of your encoded files is
beside the point -- that was never the part being parallelized.
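To make that concrete, here's a rough sketch of such a fixup script (in
Python rather than perl; the corrections-file format and filenames are
made up for illustration, not any real tool's):

    #!/usr/bin/env python3
    # Hypothetical after-the-fact fixup: rename already-encoded files
    # from a hand-corrected list ("oldname<TAB>Artist - Title" per
    # line), so bad CDDB data never holds up the encoding run itself.
    import os, sys

    def fix_names(corrections, music_dir):
        for line in open(corrections):
            old, new = line.rstrip("\n").split("\t", 1)
            ext = os.path.splitext(old)[1]        # keep the extension
            src = os.path.join(music_dir, old)
            if os.path.exists(src):
                os.rename(src, os.path.join(music_dir, new + ext))

    if __name__ == "__main__":
        fix_names(sys.argv[1], sys.argv[2])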

Ripping can be *much* faster than encoding.  Logically speaking, you have
a producer (or producers) and a consumer (or consumers): the ripper(s)
produce the input, and the encoder(s) consume it.  Since one side is
clearly faster than the other, I don't see why trying to speed up the
slower side is a Bad Thing.
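A rough sketch of that pipeline -- rip_track() and encode_track() below
are placeholders standing in for whatever ripper and encoder you
actually run, not real tools:

    # Producer/consumer sketch: one ripper thread feeds a queue of
    # WAV files, several encoder threads drain it.
    import queue, threading

    NUM_ENCODERS = 4                  # e.g. one per CPU
    work = queue.Queue()

    def rip_track(n):
        return "track%02d.wav" % n    # stand-in for the real ripper

    def encode_track(wav):
        print("encoding", wav)        # stand-in for the real encoder

    def ripper(tracks):
        for n in tracks:
            work.put(rip_track(n))    # fast side: produce input
        for _ in range(NUM_ENCODERS):
            work.put(None)            # one "done" sentinel per encoder

    def encoder():
        while True:
            wav = work.get()
            if wav is None:
                break
            encode_track(wav)         # slow side: consume input

    threads = [threading.Thread(target=ripper, args=(range(1, 13),))]
    threads += [threading.Thread(target=encoder)
                for _ in range(NUM_ENCODERS)]
    for t in threads: t.start()
    for t in threads: t.join()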

Sidenote: one of the reasons parallel programming was created in the
first place was big number crunching -- and that's exactly what encoding
is.  Hence, encoding can benefit from parallelism.  Yes, you can do
"manual" parallelism, but a) I still maintain it's not as efficient, and
b) it's clearly more user-intensive that way (more blinkenlights to
monitor, etc.), as the sketch below suggests.
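The "one program, all CPUs" version is barely more code than the manual
kind.  Another sketch, again with a placeholder encode() rather than a
real encoder invocation:

    # Farm whole tracks out to a process pool instead of juggling
    # scripts and windows by hand.  encode() is a placeholder.
    import glob, multiprocessing

    def encode(wav):
        return wav    # stand-in: run the real encoder here

    if __name__ == "__main__":
        wavs = sorted(glob.glob("*.wav"))
        with multiprocessing.Pool() as pool:  # one worker per CPU
            pool.map(encode, wavs)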

> If you have code to do it with minimal impact to the source base, then
> great. Until then the discussion should stop.

If you look at my previous posts (and one was a reply to you), I
volunteered to look into this as well as contribute code (I was actually
diving into the vorbis code when your post arrived).  The whole point of
an active development community is to discuss these things; why stifle
creativity?

{+} Jeff Squyres
{+} squyres at cse.nd.edu
{+} Perpetual Obsessive Notre Dame Student Craving Utter Madness
{+} "I came to ND for 4 years and ended up staying for a decade"

--- >8 ----
List archives:  http://www.xiph.org/archives/
Ogg project homepage: http://www.xiph.org/ogg/


