Realtime ASR Mixed#

Let's say you want to transcribe realtime recording / input; malaya-speech is able to do that, and it can understand local English and Malay!

This tutorial is available as an IPython notebook at malaya-speech/example/realtime-asr-mixed.

This module is not language independent, so it is not safe to use on other languages; the pretrained models were trained on hyperlocal languages.

This is an application of malaya-speech Pipeline, read more about malaya-speech Pipeline at malaya-speech/example/pipeline.

[1]:
import malaya_speech
from malaya_speech import Pipeline

Load VAD model#

We are going to use the WebRTC VAD model; read more about VAD at https://malaya-speech.readthedocs.io/en/latest/load-vad.html

[2]:
vad_model = malaya_speech.vad.webrtc()
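As a quick sanity check, you can run the VAD over fixed-size frames of a local audio file. This is a minimal sketch, assuming a 16 kHz mono wav at the hypothetical path below and assuming the returned WebRTC object is callable on int16 frames, as in the load-vad example:

y, sr = malaya_speech.load('speech/record/sample.wav')  # hypothetical path

# WebRTC VAD expects int16 PCM frames of 10 / 20 / 30 ms
y_int = malaya_speech.utils.astype.float_to_int(y)
frames = list(malaya_speech.utils.generator.frames(y_int, 30, sr))

# each frame maps to True (speech) or False (non-speech)
result = [vad_model(frame) for frame in frames]
print(sum(result), 'speech frames out of', len(frames))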

Recording interface#

To start recording audio with realtime VAD and classification, we need to use malaya_speech.streaming.record. We use the pyaudio library as the backend.

def record(
    vad,
    asr_model = None,
    classification_model = None,
    device = None,
    input_rate: int = 16000,
    sample_rate: int = 16000,
    blocks_per_second: int = 50,
    padding_ms: int = 300,
    ratio: float = 0.75,
    min_length: float = 0.1,
    filename: str = None,
    spinner: bool = False,
):
    """
    Record audio using the pyaudio library. This record interface requires a VAD model.

    Parameters
    ----------
    vad: object
        vad model / pipeline.
    asr_model: object
        ASR model / pipeline, will transcribe each subsample in realtime.
    classification_model: object
        classification pipeline, will classify each subsample in realtime.
    device: None
        `device` parameter for pyaudio, check available devices from `sounddevice.query_devices()`.
    input_rate: int, optional (default = 16000)
        sample rate from the input device; this will be resampled automatically.
    sample_rate: int, optional (default = 16000)
        output sample rate.
    blocks_per_second: int, optional (default = 50)
        size of frame returned from pyaudio, frame size = sample rate / (blocks_per_second / 2).
        50 is good for WebRTC, 30 or less is good for Malaya Speech VAD.
    padding_ms: int, optional (default = 300)
        size of queue to store frames, size = padding_ms // (1000 * blocks_per_second // sample_rate)
    ratio: float, optional (default = 0.75)
        if 75% of the queue is positive, assume it is voice activity.
    min_length: float, optional (default=0.1)
        minimum length (s) to accept a subsample.
    filename: str, optional (default=None)
        if None, will auto generate name based on timestamp.
    spinner: bool, optional (default=False)
        if True, will use spinner object from halo library.


    Returns
    -------
    result : [filename, samples]
    """

pyaudio will return int16 bytes, so we need to convert them to a numpy array and normalize to floating point between -1 and +1.
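That conversion amounts to roughly the sketch below; ``malaya_speech.astype.to_ndarray`` and ``malaya_speech.astype.int_to_float`` (used in the pipeline later) handle it for you, the helper name here is my own:

import numpy as np

def int16_bytes_to_float(data: bytes) -> np.ndarray:
    # interpret raw pyaudio bytes as little-endian int16 samples
    array = np.frombuffer(data, dtype=np.int16)
    # scale to [-1.0, +1.0] floating point
    return array.astype(np.float32) / 32768.0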

Check available devices#

[3]:
import sounddevice

sounddevice.query_devices()
[3]:
> 0 External Microphone, Core Audio (1 in, 0 out)
< 1 External Headphones, Core Audio (0 in, 2 out)
  2 MacBook Pro Microphone, Core Audio (1 in, 0 out)
  3 MacBook Pro Speakers, Core Audio (0 in, 2 out)
  4 JustStream Audio Driver, Core Audio (2 in, 2 out)

By default it will use device index 0.
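If your input device is not at index 0, pass its index through the ``device`` parameter, e.g. index 2 for the MacBook Pro Microphone in the listing above:

file, samples = malaya_speech.streaming.record(vad = vad_model, device = 2)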

Load ASR model#

[4]:
model = malaya_speech.stt.deep_transducer(model = 'conformer-mixed')
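Before wiring it into the realtime pipeline, you can sanity check the model on a local file; a minimal sketch, assuming a 16 kHz wav at the hypothetical path below:

y, sr = malaya_speech.load('speech/record/sample.wav')  # hypothetical path

# the model is callable on a float array, same as the pipeline map below
model(y)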

ASR Pipeline#

Because pyaudio returns int16 bytes, we need to convert them to a numpy array and then normalize to float. Feel free to add speech enhancement or any other function, but in this example I keep it simple.

[5]:
p_asr = Pipeline()
pipeline_asr = (
    p_asr.map(malaya_speech.astype.to_ndarray)
    .map(malaya_speech.astype.int_to_float)
    .map(lambda x: model(x), name = 'speech-to-text')
)
p_asr.visualize()
[5]:
[pipeline graph: to_ndarray → int_to_float → speech-to-text]

You need to make sure the last output is named ``speech-to-text``, or else the realtime engine will throw an error.
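If you want extra preprocessing, insert it before the final map and keep the last step named ``speech-to-text``. A minimal sketch with a simple peak normalization step of my own (not required by malaya-speech):

import numpy as np

p_asr = Pipeline()
pipeline_asr = (
    p_asr.map(malaya_speech.astype.to_ndarray)
    .map(malaya_speech.astype.int_to_float)
    # extra step: peak-normalize each subsample, purely illustrative
    .map(lambda x: x / (np.max(np.abs(x)) + 1e-9))
    .map(lambda x: model(x), name = 'speech-to-text')
)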

Start Recording#

Again, once you run the code below, it will immediately start recording your voice.

If you run it in a Jupyter notebook, press the stop button above to stop recording; if in a terminal, press CTRL + C.

[7]:
file, samples = malaya_speech.streaming.record(vad = vad_model, asr_model = p_asr, spinner = False,
                                              filename = 'realtime-asr-mixed.wav')
file
Listening (ctrl-C to stop recording) ...

Sample 0 2021-06-29 17:18:11.304366: hello
Sample 1 2021-06-29 17:18:13.970897: nama saya musim
Sample 2 2021-06-29 17:18:16.227266: i like to eat chicken so much
Sample 3 2021-06-29 17:18:18.309072: uh korang sc
Sample 4 2021-06-29 17:18:20.032183: semua suka makan ayam tak
Sample 5 2021-06-29 17:18:23.831369: uni funny lah
Sample 6 2021-06-29 17:18:29.609492: makasih
Sample 7 2021-06-29 17:18:31.709068: thank you so much
saved audio to realtime-asr-mixed.wav
[7]:
'realtime-asr-mixed.wav'

The wav file can be found at malaya-speech/speech/record.

Actually it is pretty nice. As you can see, it is able to transcribe in realtime; you can try it yourself.

[8]:
import IPython.display as ipd

ipd.Audio(file)
[8]:
[9]:
type(samples[4][0]), samples[4][1]
[9]:
(bytearray, 'semua suka makan ayam tak')
[10]:
y = malaya_speech.utils.astype.to_ndarray(samples[4][0])
ipd.Audio(y, rate = 16000)
[10]:
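Each element of ``samples`` is a (bytearray, transcript) pair, so you can convert and inspect every subsample the same way:

for i, (audio_bytes, transcript) in enumerate(samples):
    y = malaya_speech.utils.astype.to_ndarray(audio_bytes)
    print(i, round(len(y) / 16000, 2), 'seconds:', transcript)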