[opus] [EXTERNAL] Re: Submitting a patch that exposes VAD voiced/unvoiced signal type

Freshman, Peter Peter.Freshman at nuance.com
Mon Jul 10 13:13:36 UTC 2017


Hi Jean-Marc,

Touching base, as it's been a while since our last correspondence. Did you need any additional information from us?


Thanks,
Peter

________________________________
From: Freshman, Peter
Sent: Tuesday, June 20, 2017 1:08:19 PM
To: Jean-Marc Valin; Jean-Marc Valin; opus at xiph.org; Nadeau, Benoit
Subject: Re: [opus] [EXTERNAL] Re: Submitting a patch that exposes VAD voiced/unvoiced signal type


Hi Jean-Marc,

We're exposing the opus_internal_flags data structure so that we can access the value assigned to prevSignalType. Here's a snippet of our code:


    /* Query the patched encoder for its internal VAD flags. */
    error = opus_encoder_get_internal_flags(vad->opus, &internalflags);
    if (error != OPUS_OK)
    {
        return OPUSVAD_OPUS_ERROR;
    }

    /* prevSignalType holds SILK's voiced/unvoiced/no-activity classification
       of the most recently encoded frame. */
    cur_signal_type = internalflags.prevSignalType;

    if ((vad->cur_state == STATE_NO_STATE) &&
        (cur_signal_type == TYPE_UNVOICED || cur_signal_type == TYPE_VOICED) &&
        (vad->prev_signal_type == TYPE_NO_VOICE_ACTIVITY)) {
    ...

Our library uses this information to apply end-pointing to voice-based audio. For example, we work with many customers implementing speech-enabled TV set-top-box solutions. We offer a small library that performs start- and end-of-speech detection on the audio so they can tell whether someone is actually speaking into the remote control.
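
Conceptually, the end-pointing reduces to a small per-frame state machine driven by that signal type. Here's a simplified sketch, not our exact library code; the extra state name, hangover counter, and frame threshold are illustrative:

    /* Illustrative end-pointing state machine. TYPE_VOICED / TYPE_UNVOICED are
     * the signal types exposed by the patch above; everything else here is a
     * simplified stand-in for our library. */
    typedef enum { STATE_NO_STATE, STATE_IN_SPEECH, STATE_END_OF_SPEECH } vad_state_t;

    #define HANGOVER_FRAMES 25  /* e.g. ~500 ms of trailing non-speech at 20 ms frames */

    static vad_state_t update_state(vad_state_t state, int signal_type, int *hangover)
    {
        if (signal_type == TYPE_VOICED || signal_type == TYPE_UNVOICED) {
            *hangover = 0;                  /* speech frame: (re)start of speech */
            return STATE_IN_SPEECH;
        }
        if (state == STATE_IN_SPEECH && ++(*hangover) >= HANGOVER_FRAMES) {
            return STATE_END_OF_SPEECH;     /* enough trailing silence: end of speech */
        }
        return state;
    }

The hangover keeps brief pauses between words from being mistaken for the end of the utterance.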

Does this help make things clearer?



Thanks,
Peter


________________________________
From: Jean-Marc Valin <jmvalin at jmvalin.ca>
Sent: Friday, June 16, 2017 2:27:01 PM
To: Freshman, Peter; Jean-Marc Valin; opus at xiph.org; Nadeau, Benoit
Subject: Re: [opus] [EXTERNAL] Re: Submitting a patch that exposes VAD voiced/unvoiced signal type

Hi Peter,

Can you say a little bit more about what you're doing exactly with the
information you're exposing and how? Unfortunately, I don't have a
concrete proposal in mind right now. That's in part because I don't
quite understand the use case, but also because it's really hard to
expose this kind of information in a way that both avoids breaking
applications with new versions and doesn't prevent future improvements
to Opus.
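
Just to illustrate the tension rather than propose anything concrete: even a coarse estimate behind a CTL-style request would be less brittle than handing out internal structures, though it would still constrain future changes. Purely hypothetical, nothing like this exists in libopus:

    /* Hypothetical only -- OPUS_GET_VOICE_ACTIVITY is made up for illustration
     * and does not exist in libopus. A coarse 0..100 estimate would hide the
     * SILK-specific signal types, so the internal classifier could still change. */
    opus_int32 voice_activity;
    int err = opus_encoder_ctl(enc, OPUS_GET_VOICE_ACTIVITY(&voice_activity));
    if (err == OPUS_OK && voice_activity > 50) {
        /* the caller treats this frame as speech */
    }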

Cheers,

        Jean-Marc

On 08/06/17 08:20 AM, Freshman, Peter wrote:
> Hi Jean-Marc,
>
> Thank you for the valuable feedback. You're correct in that we focused
> on enabling this just for SILK. Because our solutions are focused on
> voice, we did not explore doing the same in CELT mode, but we can
> certainly look into the details of analysis.c.
>
>
> Regarding the concern of exposing internals, do you have a specific
> proposal in mind?
>
>
> We've been sharing this patch with our customers over the last several
> months, and the preference obviously would be to have it in the public
> domain. We're interested in any opportunity to accelerate this.
>
>
> Thanks,
> Peter
>
> ------------------------------------------------------------------------
> *From:* Jean-Marc Valin <jmvalin at mozilla.com>
> *Sent:* Wednesday, June 7, 2017 2:46:52 AM
> *To:* Freshman, Peter; opus at xiph.org
> *Subject:* [EXTERNAL] Re: [opus] Submitting a patch that exposes VAD
> voiced/unvoiced signal type
>
> Hi Peter,
>
> There are two main issues with a patch like the one you're proposing.
> First, the data is only valid when SILK is being used and is essentially
> undefined in CELT mode. The second issue is that exposing internals
> makes it impossible to improve these algorithms, since doing so would
> break API compatibility. I'm not fundamentally against trying to expose
> some information, but there would have to be a way to address those two issues.
>
> On a slightly different topic, have you looked at the VAD probability
> that's computed in analysis.c (along with the speech/music probability)?
>
> Cheers,
>
>         Jean-Marc
>
>
>> I'm reaching out because we'd like to contribute back to the project
>> a patch that exposes the signal type of the audio packet when
> encoding the PCM audio to Opus. We've found the Opus VAD algorithm to
>> be exceptional in this regard and have written a library that
>> leverages this information for audio end-pointing. Attached is the
>> patch. Please let us know if you'd be willing to accept it, or if
>> you'd prefer we fork libopus or recommend some other option.
>
>
>
>
>
> _______________________________________________
> opus mailing list
> opus at xiph.org
> http://lists.xiph.org/mailman/listinfo/opus
>