[Tremor] Some issues with Nicholas' comment truncation patch
Ethan Bordeaux
ethan.bordeaux at gmail.com
Thu Jan 22 12:30:43 PST 2009
FYI I'm also looking at this problem right now. I too need to be able to
ignore arbitrarily large comments. The problem I have is that I've
currently statically allocated a maximum of 7 ogg buffers in
_fetch_buffer(). This works fine for small comments (and every decoding
need I've run into thus far), but large comment headers trigger
additional requests for ogg buffers. I believe that during the comment
decode phase Ogg will allocate buffers continuously to hold all the
metadata, which I can't handle.
My thought was to short-circuit this behavior so that, if we're in the
comment-gathering phase, all data reads go to a dummy ogg_buffer. I would
probably add code in ogg_buffer_alloc() to check whether this is a real
request for a new buffer or whether we should map it to the dummy
buffer/reference. I'm pretty sure I can safely find the start and end
points of this phase by setting a flag right after _vorbis_unpack_info
finishes and clearing it after _vorbis_unpack_comment finishes. This is
a pretty big kludge, but I think it would work.
Thoughts?
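Roughly, the kludge I'm imagining looks like this. This is a sketch only:
comment_phase, dummy_buffer, and buffer_alloc_sketch are illustrative
names, not actual Tremor symbols, and the real check would live inside
ogg_buffer_alloc() next to the static pool in _fetch_buffer().

```c
#include <stddef.h>

/* Illustrative stand-in for Tremor's ogg_buffer. */
typedef struct {
    unsigned char *data;
    long           size;
} buf_stub;

#define DUMMY_SIZE 256

static int comment_phase = 0;     /* set after _vorbis_unpack_info,
                                     cleared after _vorbis_unpack_comment */
static unsigned char dummy_storage[DUMMY_SIZE];
static buf_stub      dummy_buffer = { dummy_storage, DUMMY_SIZE };

/* Sketch of the check that would go at the top of ogg_buffer_alloc():
   while gathering comments, every allocation request is redirected to
   one shared dummy buffer, so comment data is read and thrown away
   instead of consuming the 7 static buffers. */
buf_stub *buffer_alloc_sketch(long bytes)
{
    (void)bytes;
    if (comment_phase)
        return &dummy_buffer;     /* discard: callers may overwrite freely */

    /* ... otherwise fall through to the real static-pool allocation ... */
    return NULL;                  /* placeholder for the real path */
}
```

The point of routing everything to one buffer is that the comment phase
never grows the pool, no matter how large the metadata is.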
On Thu, Jan 22, 2009 at 10:28 AM, Andy Grundman <andy at slimdevices.com> wrote:
> OK, I need to make some more test files and see if any of them can be
> read properly. I may use your patch anyway because it at least avoids
> allocating tons of memory and crashing. Even if the file doesn't play
> it will fail more gracefully than it currently does.
>
> On Jan 22, 2009, at 10:18 AM, Nicholas Vinen wrote:
>
> >
> > Unfortunately, while the patch works in many cases, I've found that
> > it has problems with certain files. It's a bit of a hack and I don't
> > yet fully understand why it doesn't work.
> >
> > A much better solution would be to examine the header of any packet
> > which exceeds a certain size and is not yet fully read in, and
> > determine its type. If it's the comment header, it should be possible
> > to process what's already been read and sync up to the next packet
> > without reading in the rest at all, rather than what I did, which is
> > to prevent the current packet in memory from growing once it reaches
> > a certain size.
> >
> > So in summary, I think somebody needs to come up with a better
> > solution. I will do it at some point, but I don't know when that will
> > be; it could be a while. Someone who understands the Ogg packet code
> > better would likely do a good job. I don't yet understand it fully.
> >
> > You may be able to get my code to work, but as it stands I
> > unfortunately don't think it's 100%. And yes, you may be right: the
> > later patch may have broken the large-comment skipping.
> >
> >
> > Nicholas
> >
> >
> >
> > Andy Grundman wrote:
> >> Hi Nicholas,
> >>
> >> I'm working on implementing your comment truncation patch into our
> >> Tremor code. I have a few questions about it.
> >>
> >> Comparing Tremor3.patch [2] and your original patch [1], #2 seems to
> >> have removed the extra code in _vorbis_unpack_comment that breaks
> >> out of the comment-reading loop if the data was truncated. Using
> >> patch #2, this loop always tries to allocate space for the truncated
> >> comment and fails, so I think what you did in #1 is the right
> >> solution. However, using #1 I end up failing on my test file with
> >> ENOTVORBIS, because the check for the "vorbis" string in
> >> vorbis_dsp_headerin fails when looking for the codebooks header.
> >>
> >> Thanks,
> >> -Andy
> >>
> >> [1] http://x256.org/~hb/Tremor.patch
> >> [2] http://x256.org/~hb/Tremor3.patch
>
> _______________________________________________
> Tremor mailing list
> Tremor at xiph.org
> http://lists.xiph.org/mailman/listinfo/tremor
>
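For reference, the header-type check Nicholas describes comes down to
inspecting the first bytes of the oversized packet: per the Vorbis I
specification, each header packet begins with a one-byte type field
(1 = identification, 3 = comment, 5 = setup) followed by the six bytes
"vorbis". A minimal sketch, where is_comment_header() is an illustrative
helper and not an existing Tremor function:

```c
#include <string.h>

/* Return nonzero if pkt looks like a Vorbis comment header.
   Vorbis header packets start with a one-byte type
   (1 = identification, 3 = comment, 5 = setup) followed by
   the six bytes "vorbis". */
static int is_comment_header(const unsigned char *pkt, long len)
{
    return len >= 7 && pkt[0] == 3 && memcmp(pkt + 1, "vorbis", 6) == 0;
}
```

With a check like this on a partially-read packet, the decoder could
decide to resync to the next page boundary instead of buffering the rest
of a huge comment header.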