Hi Michael,

Thank you. I got the point.
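For my own notes, here is a minimal sketch of how a derived value like NSBS (the total super block count) could be recomputed from fields that actually are coded in the identification header. The helper sb_count, the 640x480 example, and the assumption of 4:2:0 subsampling with 32x32-pixel super blocks are mine for illustration; this is not code or the normative formulas from libtheora or the spec.

/* Sketch only: recomputing a derived header value (NSBS) from coded fields.
 * Assumes 4:2:0 pixel format and 32x32-pixel super blocks; the normative
 * calculation is given in section 6.2 of the spec. */
#include <stdio.h>

/* Count the 32x32 super blocks needed to cover a plane, rounding up. */
static unsigned sb_count(unsigned w, unsigned h)
{
    return ((w + 31) / 32) * ((h + 31) / 32);
}

int main(void)
{
    unsigned fmbw = 40, fmbh = 30;          /* 640x480 frame, in 16-pixel macro blocks */
    unsigned luma_w = fmbw * 16, luma_h = fmbh * 16;
    unsigned chroma_w = luma_w / 2, chroma_h = luma_h / 2;   /* 4:2:0 subsampling */

    unsigned nsbs = sb_count(luma_w, luma_h)             /* Y plane   */
                  + 2 * sb_count(chroma_w, chroma_h);     /* Cb and Cr */

    printf("NSBS = %u\n", nsbs);  /* derived, so never written by the encoder */
    return 0;
}

If that reading is right, writing NSBS into the header would only duplicate information the decoder can reconstruct anyway.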
On Feb 7, 2008 3:01 PM, Michael Smith <msmith@xiph.org> wrote:
> Hi Nuwan,
>
> I think you're just misunderstanding the meaning of some of these tables.
>
> Figure 6.2 is a graphical representation of this bitstream, and the
> text of section 6.2 describes precisely how to decode it.
>
> The table you refer to (pages 40-41) lists, as its header says, the
> output parameters of this procedure. NSBS, for example, can be
> calculated from other values that _are_ read from the bitstream, so it
> isn't explicitly present there.
>
> Mike
>
> On Feb 7, 2008 9:19 AM, Nuwan Millawitiya <millawitiya@gmail.com> wrote:
> > Hi,
> > While creating the identification header in the function
> > theora_encode_header in encoder_toplevel.c, it assigns bits not
> > mentioned in the current Theora spec released on October 29, 2007
> > (pages 40 & 41).
> >
> > This implementation in theora_encode_header is correct according to
> > Figure 6.2 (page 42), but not according to the table on pages 40 & 41.
> >
> > For example, the spec says the header contains NSBS (using 32 bits),
> > but it is not written in theora_encode_header.
> >
> > What is the reason? What is happening here?
> >
> > --
> > Nuwan Millawitiya
> > _______________________________________________
> > theora-dev mailing list
> > theora-dev@xiph.org
> > http://lists.xiph.org/mailman/listinfo/theora-dev

--
Nuwan Millawitiya