<div dir="ltr">Hi Jean-Marc,<div><br></div><div>Thanks for your suggestions. Will get back to you once we have some updates.</div><div><br></div><div>Linfeng</div></div><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Feb 6, 2017 at 5:47 PM, Jean-Marc Valin <span dir="ltr"><<a href="mailto:jmvalin@jmvalin.ca" target="_blank">jmvalin@jmvalin.ca</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi Linfeng,<br>
<span class=""><br>
On 06/02/17 07:18 PM, Linfeng Zhang wrote:<br>
> This is a great idea. But the order (psEncC->shapingLPCOrder) can be<br>
> configured to 12, 14, 16, 20 or 24, depending on the complexity parameter.<br>
><br>
> It's hard to get a universal function to handle all these orders<br>
> efficiently. Any suggestions?<br>
<br>
</span>I can think of two ways of handling larger orders. The obvious one is<br>
simply to add an inner loop of the form:<br>
for (i=0;i<order;i+=VECTOR_SIZE)<br>
I think what may be more efficient is to simply have a small "order-N"<br>
(N=4 or 8) kernel that not only computes the correlation of order N, but<br>
then spits out the signal after the N-stage all-pass is applied. The<br>
kernel would look like:<br>
<br>
void autocorr_kernel4(int *corr, int *orig, int *input, int *output,<br>
                      int len) {<br>
   /* Implement the vectorized order-4 filter (could also be order 8)<br>
      as described in the previous email, and output the filtered<br>
      signal. */<br>
}<br>
<br>
and then the full function would run the kernel multiple times and look<br>
like:<br>
<br>
void full_autocorr(int *corr, int *orig, int len, int order) {<br>
   int i;<br>
   int tmp[MAX_SIZE];<br>
   int *in = orig;<br>
   for (i=0;i<order;i+=4) {<br>
      autocorr_kernel4(corr+i, orig, in, tmp, len);<br>
      /* Make subsequent calls use the filtered signal as input. */<br>
      in = tmp;<br>
   }<br>
}<br>
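For concreteness, here is a plain scalar model (floating point, names invented here, untested against the actual SILK fixed-point code) of the warped autocorrelation such a kernel would decompose: a chain of first-order all-pass sections, where corr[k] accumulates the product of the stage-k signal with the current input sample:<br>

```c
#include <string.h>

#define ORDER_MAX 24

/* Hypothetical scalar model of warped autocorrelation.  Each inner
   iteration is one all-pass section; tmp1 is the input of stage i
   (i.e. the output of stage i-1), and corr[i] correlates it with the
   current sample x[n]. */
void warped_autocorr_ref(float *corr, const float *x, float coeff,
                         int len, int order) {
    float state[ORDER_MAX + 1];
    int n, i;
    memset(state, 0, sizeof(state));
    for (i = 0; i <= order; i++)
        corr[i] = 0.0f;
    for (n = 0; n < len; n++) {
        float tmp1 = x[n];
        for (i = 0; i < order; i++) {
            /* One all-pass section: combines this stage's previous
               input and previous output with the current input. */
            float tmp2 = state[i] + coeff * (state[i + 1] - tmp1);
            state[i] = tmp1;
            corr[i] += tmp1 * x[n];
            tmp1 = tmp2;
        }
        state[order] = tmp1;
        corr[order] += tmp1 * x[n];
    }
}
```

With coeff == 0 the all-pass chain degenerates into a plain delay line, so corr[] reduces to the ordinary autocorrelation — a handy sanity check for any vectorized version.<br>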
<br>
I think this should not only reduce or eliminate the prologue/epilogue<br>
problem, but it should also be more efficient, since almost all vectors<br>
processed would use the full size.<br>
<br>
Maybe a third option (not sure it's a good idea, but still mentioning<br>
it) would be to have a function that hardcodes order=24 and discards the<br>
higher-order lags that aren't needed. Since the smallest order seems to be<br>
16, it wouldn't be much of a waste and the code might end up running<br>
faster for the higher orders.<br>
<br>
Cheers,<br>
<br>
Jean-Marc<br>
<span class=""><br>
<br>
> Thanks,<br>
> Linfeng<br>
><br>
> On Mon, Feb 6, 2017 at 12:40 PM, Jean-Marc Valin <<a href="mailto:jmvalin@jmvalin.ca">jmvalin@jmvalin.ca</a><br>
</span><div><div class="h5">> <mailto:<a href="mailto:jmvalin@jmvalin.ca">jmvalin@jmvalin.ca</a>>> wrote:<br>
><br>
> Hi Linfeng,<br>
><br>
> On 06/02/17 02:51 PM, Linfeng Zhang wrote:<br>
> > However, the critical thing is that all the states in each stage<br>
> > used when processing input[i] are reused by the next input[i+1].<br>
> > That is, input[i+1] must wait for input[i] by one stage, input[i+2]<br>
> > must wait for input[i+1] by one stage, etc.<br>
><br>
> That is indeed the tricky part... and the one I think you could do<br>
> slightly differently. If you approach the problem in terms of computing<br>
> chunks of the inputs N samples at a time, then indeed the approach you<br>
> are describing is the only solution. What I was proposing though is to<br>
> instead chop the "order" in chunks of N. Using your notation, you would<br>
> be doing:<br>
><br>
> PROC( in0(s0))<br>
> PROC( in0(s1) in1(s0))<br>
> PROC( in0(s2) in1(s1) in2(s0))<br>
> PROC( in0(s3) in1(s2) in2(s1) in3(s0))<br>
> PROC( in0(s4) in1(s3) in2(s2) in3(s1) in4(s0))<br>
> PROC( in0(s5) in1(s4) in2(s3) in3(s2) in4(s1) in5(s0))<br>
> PROC( in0(s6) in1(s5) in2(s4) in3(s3) in4(s2) in5(s1) in6(s0))<br>
> PROC(in0(s7) in1(s6) in2(s5) in3(s4) in4(s3) in5(s2) in6(s1) in7(s0))<br>
> PROC(in1(s7) in2(s6) in3(s5) in4(s4) in5(s3) in6(s2) in7(s1) in8(s0))<br>
> PROC(in2(s7) in3(s6) in4(s5) in5(s4) in6(s3) in7(s2) in8(s1) in9(s0))<br>
> PROC(in3(s7) in4(s6) in5(s5) in6(s4) in7(s3) in8(s2) in9(s1)in10(s0))<br>
> PROC(in4(s7) in5(s6) in6(s5) in7(s4) in8(s3) in9(s2)in10(s1)in11(s0))<br>
> ...and so on until the end of the input vector<br>
><br>
> The difference is that it's now the input vector that "slides", while<br>
> the "state" values sy remain in the same place. There's still a<br>
> prologue, but you can easily get rid of it by (implicitly) zero-padding<br>
> the in vector during the initialization phase (start with a zero vector<br>
> and shift in one real value at a time). Getting rid of the epilogue is<br>
> a little trickier, but I think it can be done.<br>
><br>
> Cheers,<br>
><br>
> Jean-Marc<br>
><br>
> > Then it becomes this<br>
> ><br>
> > FOR in=0 to N WITH in+=8<br>
> > PROC(in0(s0)) /* prolog 0 */<br>
> > PROC(in0(s1) in1(s0)) /* prolog 1 */<br>
> > PROC(in0(s2) in1(s1) in2(s0)) /* prolog 2 */<br>
> > PROC(in0(s3) in1(s2) in2(s1) in3(s0)) /* prolog 3 */<br>
> > PROC(in0(s4) in1(s3) in2(s2) in3(s1) in4(s0)) /* prolog 4 */<br>
> > PROC(in0(s5) in1(s4) in2(s3) in3(s2) in4(s1) in5(s0)) /* prolog 5 */<br>
> > PROC(in0(s6) in1(s5) in2(s4) in3(s3) in4(s2) in5(s1) in6(s0)) /*<br>
> > prolog 6 */<br>
> > PROC(in0(s7) in1(s6) in2(s5) in3(s4) in4(s3) in5(s2) in6(s1)<br>
> in7(s0))<br>
> > /* fully process 8 inputs */<br>
> > PROC(in0(s8) in1(s7) in2(s6) in3(s5) in4(s4) in5(s3) in6(s2)<br>
> in7(s1))<br>
> > /* continue */<br>
> > PROC(in0(s9) in1(s8) in2(s7) in3(s6) in4(s5) in5(s4) in6(s3)<br>
> in7(s2))<br>
> > /* continue */<br>
> > PROC(in0(s10) in1(s9) in2(s8) in3(s7) in4(s6) in5(s5) in6(s4)<br>
> in7(s3))<br>
> > /* continue */<br>
> > PROC(in1(s10) in2(s9) in3(s8) in4(s7) in5(s6) in6(s5) in7(s4)) /*<br>
> > epilog 0 */<br>
> > PROC(in2(s10) in3(s9) in4(s8) in5(s7) in6(s6) in7(s5)) /* epilog<br>
> 1 */<br>
> > PROC(in3(s10) in4(s9) in5(s8) in6(s7) in7(s6)) /* epilog 2 */<br>
> > PROC(in4(s10) in5(s9) in6(s8) in7(s7)) /* epilog 3 */<br>
> > PROC(in5(s10) in6(s9) in7(s8)) /* epilog 4 */<br>
> > PROC(in6(s10) in7(s9)) /* epilog 5 */<br>
> > PROC(in7(s10)) /* epilog 6 */<br>
> > END FOR<br>
> ><br>
> > And<br>
> > PROC(in0(s7) in1(s6) in2(s5) in3(s4) in4(s3) in5(s2) in6(s1)<br>
> in7(s0))<br>
> > /* fully process 8 inputs */<br>
> > PROC(in0(s8) in1(s7) in2(s6) in3(s5) in4(s4) in5(s3) in6(s2)<br>
> in7(s1))<br>
> > /* continue */<br>
> > PROC(in0(s9) in1(s8) in2(s7) in3(s6) in4(s5) in5(s4) in6(s3)<br>
> in7(s2))<br>
> > /* continue */<br>
> > is actually the expansion of the kernel loop<br>
> > FOR i=0 TO order-6 WITH i++<br>
> > PROC(in0(si+7) in1(si+6) in2(si+5) in3(si+4) in4(si+3) in5(si+2)<br>
> > in6(si+1) in7(si+0))<br>
> > END FOR<br>
> ><br>
> > The worst thing is that corr_QC[] is so sensitive that any extra<br>
> > processing will corrupt it, and the error propagates to the next loop<br>
> > (the next 8 inputs). state_QS[] is a little better but still very<br>
> > sensitive. For instance, adding PROC(in0(s11') in1(s10) in2(s9)<br>
> > in3(s8) in4(s7) in5(s6) in6(s5) in7(s4)) to the kernel loop (by<br>
> > looping one more time) and removing epilog 0 makes all the final<br>
> > results wrong.<br>
> ><br>
> > That's why the prolog and epilog cannot be eliminated, to the best<br>
> > of my knowledge.<br>
> ><br>
> > The assembly size of silk_warped_autocorrelation_FIX_neon() is about<br>
> > 2,744 bytes. Compared with the C code size (about 452 bytes), it's<br>
> > 2.3 KB larger. Considering that silk_warped_autocorrelation_FIX_c()<br>
> > is the second most CPU-heavy function in the fixed-point encoder, and<br>
> > that our testing shows up to 7% CPU run-time saving for the total<br>
> > encoder with this optimization (at Complexity 8), maybe we can accept<br>
> > the I-cache burden even if in the end we still cannot remove the big<br>
> > chunk of prolog and epilog.<br>
> ><br>
> > Thanks,<br>
> > Linfeng Zhang<br>
> ><br>
> > On Sat, Feb 4, 2017 at 4:17 PM, Jean-Marc Valin<br>
> <<a href="mailto:jmvalin@jmvalin.ca">jmvalin@jmvalin.ca</a> <mailto:<a href="mailto:jmvalin@jmvalin.ca">jmvalin@jmvalin.ca</a>><br>
</div></div><div><div class="h5">> > <mailto:<a href="mailto:jmvalin@jmvalin.ca">jmvalin@jmvalin.ca</a> <mailto:<a href="mailto:jmvalin@jmvalin.ca">jmvalin@jmvalin.ca</a>>>> wrote:<br>
> ><br>
> > Hi Felicia,<br>
> ><br>
> > I've had time to work through the math in the original<br>
> function and I'm<br>
> > pretty sure it's possible to vectorize this without the huge<br>
> > prologue/epilogue.<br>
> ><br>
> > For the simple case where order = vector size = N (but it<br>
> should easily<br>
> > generalize to larger order), what I came up with is:<br>
> ><br>
> > initialize X, Y, M, C to vector of zeros<br>
> ><br>
> > for i=0 to N+order<br>
> > T = [x(i), Y(0:N-2)]<br>
> > Y = M + coeff * (Y - T)<br>
> > M = T<br>
> > X = [x(i), X(0:N-1)]<br>
> > C = C + X*Y<br>
> ><br>
> > I think something similar to this (assuming I didn't mess up any<br>
> > details) should give you the correlations in vector C. Did I miss<br>
> > anything?<br>
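> > For what it's worth, a literal (and equally unverified) scalar<br>
> > transcription of the pseudocode above, with the vector lanes<br>
> > simulated by small arrays; all names are invented here. One detail:<br>
> > with coeff = 0 this recursion reduces to a delay line and C[k] comes<br>
> > out as the plain autocorrelation at lag k+1, so the lag-0 (energy)<br>
> > term seems to need separate handling:<br>

```c
#define N 4  /* simulated vector width; must cover the filter order */

/* Straight transcription of the X/Y/M/C recursion: lane k of the
   "vectors" tracks all-pass stage k, the input slides in one sample
   per iteration, and the states stay in their lanes. */
void pseudo_warped_autocorr(float *C, const float *x, float coeff, int len) {
    float X[N] = {0}, Y[N] = {0}, M[N] = {0}, T[N];
    int i, k;
    for (k = 0; k < N; k++)
        C[k] = 0.0f;
    for (i = 0; i < len + N; i++) {
        float xi = (i < len) ? x[i] : 0.0f;  /* implicit zero padding */
        /* T = [x(i), Y(0:N-2)] */
        T[0] = xi;
        for (k = 1; k < N; k++)
            T[k] = Y[k - 1];
        /* Y = M + coeff * (Y - T); M = T */
        for (k = 0; k < N; k++) {
            float t = T[k];
            Y[k] = M[k] + coeff * (Y[k] - t);
            M[k] = t;
        }
        /* X = [x(i), X(0:N-1)] */
        for (k = N - 1; k >= 1; k--)
            X[k] = X[k - 1];
        X[0] = xi;
        /* C = C + X*Y */
        for (k = 0; k < N; k++)
            C[k] += X[k] * Y[k];
    }
}
```

> > Running the loop for len + N iterations with implicit zero padding<br>
> > plays the role of both the prologue and the epilogue.<br>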
> ><br>
> > Cheers,<br>
> ><br>
> > Jean-Marc<br>
> ><br>
> ><br>
> > On 31/01/17 12:30 PM, Felicia Lim wrote:<br>
> > > Hi,<br>
> > ><br>
> > > Attached is a patch with arm neon optimizations for<br>
> > > silk_warped_autocorrelation_<wbr>FIX(). Please review.<br>
> > ><br>
> > > Thanks,<br>
> > > Felicia<br>
> > ><br>
> > ><br>
> > > ______________________________<wbr>_________________<br>
> > > opus mailing list<br>
</div></div>> > > <a href="mailto:opus@xiph.org">opus@xiph.org</a> <mailto:<a href="mailto:opus@xiph.org">opus@xiph.org</a>> <mailto:<a href="mailto:opus@xiph.org">opus@xiph.org</a><br>
<span class="">> <mailto:<a href="mailto:opus@xiph.org">opus@xiph.org</a>>><br>
> > > <a href="http://lists.xiph.org/mailman/listinfo/opus" rel="noreferrer" target="_blank">http://lists.xiph.org/mailman/<wbr>listinfo/opus</a><br>
> <<a href="http://lists.xiph.org/mailman/listinfo/opus" rel="noreferrer" target="_blank">http://lists.xiph.org/<wbr>mailman/listinfo/opus</a>><br>
> > <<a href="http://lists.xiph.org/mailman/listinfo/opus" rel="noreferrer" target="_blank">http://lists.xiph.org/<wbr>mailman/listinfo/opus</a><br>
> <<a href="http://lists.xiph.org/mailman/listinfo/opus" rel="noreferrer" target="_blank">http://lists.xiph.org/<wbr>mailman/listinfo/opus</a>>><br>
> > ><br>
</span><br>
> ><br>
> ><br>
><br>
><br>
</blockquote></div><br></div>