Hello,

I am using the Speex AEC in a real-time application.
I have found that when the mic and speaker tracks are in sync, or only slightly delayed, the AEC works very well. As I understand it, when they are out of sync the AEC cannot work, so the "user" should focus on keeping the tracks in sync.
Since I am working in an environment where out-of-sync tracks are not rare, I was wondering if you could help me find a reliable solution.

Right now I am working with two recorded audio tracks in which the mic and the speaker are in sync in some parts of the files and out of sync in others.
<br>I am trying to perform the synchronization, using the cross-correlation, on each frame before to pass on the echo_cancellation.<br>In particular the parameters of cancellation will be the frame taken from the mic file, and the frame from the speaker files which better cross-correlated with the mic frame.<br>
The result is not very different from the output without synchronization.

So my questions are:
 - Do you think that an approach like this could, in principle, give more echo cancellation? Let's set performance aside for now.
 - Do you have any idea how to obtain faster re-adaptation if the tracks go out of sync? (A sketch of the one idea I have is after this list.)
 - I have read the papers cited in the code, but I cannot completely understand how the algorithm works and adapts; do you have any references to suggest, so that I can understand it better and try to obtain echo cancellation in these bad situations?
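For the second question, the only concrete idea I have so far is to reset the canceller whenever the measured lag jumps, so the filter re-adapts from scratch instead of slowly un-learning the old delay. A sketch follows; LAG_JUMP_THRESHOLD is an arbitrary value of mine, while speex_echo_state_reset is the stock Speex call.

#include <stdlib.h>
#include <speex/speex_echo.h>

#define LAG_JUMP_THRESHOLD 64   /* samples; arbitrary guess on my part */

static int prev_lag = 0;

static void cancel_with_reset(SpeexEchoState *st, const spx_int16_t *mic,
                              const spx_int16_t *spk_aligned, int lag,
                              spx_int16_t *out)
{
    /* a large jump in the measured delay: drop the learned filter */
    if (abs(lag - prev_lag) > LAG_JUMP_THRESHOLD)
        speex_echo_state_reset(st);
    prev_lag = lag;
    speex_echo_cancellation(st, mic, spk_aligned, out);
}

I do not know whether a full reset is actually better than letting the filter re-converge on its own, which is part of what I am asking.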
Thanks for your help,

Marco Pierleoni.