[foms] Proposal: adaptive streaming using open codecs
slhomme at matroska.org
Sat Nov 20 02:08:35 PST 2010
On Thu, Nov 18, 2010 at 2:50 PM, Philip Jägenstedt <philipj at opera.com> wrote:
> On Thu, 18 Nov 2010 02:27:43 +0100, Silvia Pfeiffer
> <silviapfeiffer1 at gmail.com> wrote:
>> On Thu, Nov 18, 2010 at 5:15 AM, Timothy B. Terriberry
>> <tterriberry at mozilla.com> wrote:
>>>> Can you explain what LSP extrapolation is? A quick Google search doesn't
>>>> turn up anything.
>>> Probably because I meant LPC (linear prediction coefficients), not LSP
>>> (line spectral pairs). The concepts are related.
>>> Vorbis, for example, already uses LPC extrapolation to pad the first and
>>> last MDCT blocks, since blocks are overlapped, but there's no real data
>>> to overlap with at the beginning and end of a track. Code starts at
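As a rough illustration of the LPC extrapolation Timothy describes, here is a toy sketch (not the Vorbis code; `lpc_coeffs` and `lpc_extrapolate` are made-up names): fit prediction coefficients with the Levinson-Durbin recursion over the block's autocorrelation, then run the predictor past the end of the block to synthesize padding samples.

```python
def lpc_coeffs(signal, order):
    """Fit linear-prediction coefficients via the Levinson-Durbin recursion.

    Returns a[0..order] with a[0] == 1, such that x[n] is predicted as
    -sum(a[j] * x[n-j] for j in 1..order).
    """
    n = len(signal)
    # Autocorrelation lags r[0..order]
    r = [sum(signal[i] * signal[i + k] for i in range(n - k))
         for k in range(order + 1)]
    a = [1.0] + [0.0] * order
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + sum(a[j] * r[i - j] for j in range(1, i))
        k = -acc / err                      # reflection coefficient
        new_a = a[:]
        for j in range(1, i):
            new_a[j] = a[j] + k * a[i - j]  # update lower-order coefficients
        new_a[i] = k
        a = new_a
        err *= (1.0 - k * k)                # remaining prediction error
    return a

def lpc_extrapolate(signal, order, count):
    """Predict `count` samples past the end of `signal`."""
    a = lpc_coeffs(signal, order)
    history = list(signal)
    out = []
    for _ in range(count):
        pred = -sum(a[j] * history[-j] for j in range(1, order + 1))
        history.append(pred)
        out.append(pred)
    return out
```

For a trivially predictable signal like a decaying exponential, order 1 already extrapolates almost exactly; a real codec would use a higher order on windowed audio, and the predicted samples only serve as overlap material, so small errors are masked by the window.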
>> Would a media framework typically expose that level of API? How would
>> a decoder feed a player with audio data such that this works? Would it
>> just make sure to hand over the next chunk of data or is there any
>> decoding pipeline setup change required or something?
> I can speak for the two media frameworks I have experience with: GStreamer
> and DirectShow. As far as I've seen, neither have any high-level support
> for stitching together audio streams. Rather, you'll need to have
> something like a joiner element in the decoding pipeline into which you
> feed all audio streams. Usually it will do nothing, but when one stream
> ends and the next begins it'll stitch them. The audio sink would only see
> one audio stream coming in (in other words, the joiner would also have to
> resample). I'm not sure it'll work well in practice; I haven't tried
> implementing anything yet.
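To make the joiner idea concrete, here is a minimal sketch (toy code, not GStreamer or DirectShow; `resample_linear` and `stitch_streams` are made-up names, and it assumes mono float samples): the incoming stream is resampled to the outgoing stream's rate with naive linear interpolation, then the two are crossfaded over a short overlap so the sink sees one continuous stream.

```python
def resample_linear(samples, src_rate, dst_rate):
    """Crude linear-interpolation resampler (a real joiner would use better)."""
    if src_rate == dst_rate:
        return list(samples)
    n_out = int(len(samples) * dst_rate / src_rate)
    out = []
    for i in range(n_out):
        pos = i * src_rate / dst_rate   # position in source samples
        j = int(pos)
        frac = pos - j
        a = samples[j]
        b = samples[j + 1] if j + 1 < len(samples) else samples[j]
        out.append(a * (1.0 - frac) + b * frac)
    return out

def stitch_streams(first, second, rate_first, rate_second, crossfade=32):
    """Join two streams: resample the second to the first's rate, then
    crossfade over `crossfade` samples where they meet."""
    second = resample_linear(second, rate_second, rate_first)
    n = min(crossfade, len(first), len(second))
    if n == 0:
        return list(first) + second
    head = first[:len(first) - n]
    faded = [first[len(first) - n + i] * (1.0 - i / n) + second[i] * (i / n)
             for i in range(n)]
    return list(head) + faded + second[n:]
```

A real joiner element would additionally negotiate channel layouts and use a windowed-sinc or polyphase resampler; linear interpolation is only adequate for a sketch.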
For audio, that would mean the sampling frequency should not change during
playback. Or, if changes are allowed, the maximum sampling frequency and
channel count should be known before the decoding pipeline starts, so it
can be set up to handle whatever comes in at the highest quality.