[foms] Proposal: adaptive streaming using open codecs
Philip Jägenstedt
philipj at opera.com
Thu Nov 18 05:50:01 PST 2010
On Thu, 18 Nov 2010 02:27:43 +0100, Silvia Pfeiffer
<silviapfeiffer1 at gmail.com> wrote:
> On Thu, Nov 18, 2010 at 5:15 AM, Timothy B. Terriberry
> <tterriberry at mozilla.com> wrote:
>>> Can you explain what LSP extrapolation is? A quick Google doesn't
>>> turn up anything.
>>
>> Probably because I meant LPC (linear prediction coefficients), not LSP
>> (line spectral pairs). The concepts are related.
>>
>> Vorbis, for example, already uses LPC extrapolation to pad the first and
>> last MDCT blocks, since blocks are overlapped, but there's no real data
>> to overlap with at the beginning and end of a track. Code starts at
>> https://trac.xiph.org/browser/trunk/vorbis/lib/block.c#L416
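
For reference, the code above calls into the LPC helpers in lpc.c in the
same tree (vorbis_lpc_from_data() and vorbis_lpc_predict(), if I remember
correctly). A much-simplified sketch of the idea, not the actual Vorbis
code: fit prediction coefficients to the tail of the signal, then run the
predictor to synthesize the padding.

/* Much-simplified LPC extrapolation sketch (not the Vorbis code; the
 * real thing lives in vorbis/lib/lpc.c). Fit `order` prediction
 * coefficients to the last part of a signal, then synthesize `m`
 * extra samples past its end. */
#include <stddef.h>

/* Fit LPC coefficients via autocorrelation + Levinson-Durbin. */
static void lpc_from_data(const float *x, size_t n, float *lpc, int order)
{
    double aut[order + 1], err;
    for (int i = 0; i <= order; i++) {          /* autocorrelation */
        double d = 0.0;
        for (size_t j = i; j < n; j++)
            d += (double)x[j] * x[j - i];
        aut[i] = d;
    }
    err = aut[0];
    for (int i = 0; i < order; i++) {           /* Levinson-Durbin */
        double r = -aut[i + 1];
        if (err <= 0.0) {                       /* degenerate input */
            for (int j = i; j < order; j++) lpc[j] = 0.0f;
            return;
        }
        for (int j = 0; j < i; j++)
            r -= lpc[j] * aut[i - j];
        r /= err;
        lpc[i] = (float)r;
        for (int j = 0; j < i / 2; j++) {
            double tmp = lpc[j];
            lpc[j]         += (float)(r * lpc[i - 1 - j]);
            lpc[i - 1 - j] += (float)(r * tmp);
        }
        if (i & 1)
            lpc[i / 2] += (float)(lpc[i / 2] * r);
        err *= 1.0 - r * r;
    }
}

/* Predict m samples past x[n-1]; each new sample is a weighted sum of
 * the `order` samples before it (lpc[0] weights the most recent). */
static void lpc_extrapolate(const float *x, size_t n,
                            const float *lpc, int order,
                            float *out, size_t m)
{
    float work[order + m];                      /* C99 VLA */
    for (int i = 0; i < order; i++)
        work[i] = x[n - order + i];
    for (size_t i = 0; i < m; i++) {
        double y = 0.0;
        for (int j = 0; j < order; j++)
            y -= lpc[j] * work[order + i - 1 - j];
        out[i] = work[order + i] = (float)y;
    }
}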
>
>
> Would a media framework typically expose that level of API? How would
> a decoder feed a player with audio data such that this works? Would it
> just make sure to hand over the next chunk of data, or would some
> change to the decoding pipeline setup be required?
I can speak for the two media frameworks I have experience with: GStreamer
and DirectShow. As far as I've seen, neither has any high-level support
for stitching together audio streams. Rather, you'd need something like a
joiner element in the decoding pipeline into which you feed all audio
streams. Most of the time it would do nothing, but when one stream ends
and the next begins it would stitch them together. The audio sink would
only ever see a single audio stream coming in (in other words, the joiner
would also have to resample if the streams don't share a sample rate). I'm
not sure it'll work well in practice; I haven't tried implementing
anything yet.
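
To make it concrete, here's roughly how such a pipeline could be wired up
with the GStreamer C API. The "joiner" element is hypothetical (no such
stock element exists; it's the thing one would have to write), and the
chunk file names are made up. The sketch just shows where the joiner would
sit between the decoders and the sink:

/* Sketch of a stitching pipeline. "joiner" is a hypothetical custom
 * element that accepts several audio inputs and outputs one continuous,
 * resampled stream; everything else is stock GStreamer. */
#include <gst/gst.h>

int main(int argc, char *argv[])
{
    gst_init(&argc, &argv);

    GstElement *pipeline = gst_pipeline_new("adaptive-audio");
    GstElement *src_a = gst_element_factory_make("filesrc", "src-a");
    GstElement *dec_a = gst_element_factory_make("decodebin", "dec-a");
    GstElement *src_b = gst_element_factory_make("filesrc", "src-b");
    GstElement *dec_b = gst_element_factory_make("decodebin", "dec-b");
    GstElement *joiner = gst_element_factory_make("joiner", "joiner");
    GstElement *sink = gst_element_factory_make("autoaudiosink", "sink");

    /* Two consecutive chunks of the same stream (hypothetical names). */
    g_object_set(src_a, "location", "chunk-000.ogg", NULL);
    g_object_set(src_b, "location", "chunk-001.ogg", NULL);

    gst_bin_add_many(GST_BIN(pipeline), src_a, dec_a, src_b, dec_b,
                     joiner, sink, NULL);
    gst_element_link(src_a, dec_a);
    gst_element_link(src_b, dec_b);
    /* decodebin pads are created dynamically, so real code would link
     * each decoder to the joiner from a "pad-added" signal handler
     * rather than up front. */
    gst_element_link(joiner, sink);

    gst_element_set_state(pipeline, GST_STATE_PLAYING);
    /* ... run a main loop, wait for EOS, tear down ... */
    return 0;
}

The interesting part is of course the joiner itself, which would have to
overlap or drop samples at the splice point and resample as needed; none
of the above addresses that.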
--
Philip Jägenstedt
Core Developer
Opera Software