Speech-to-Text RNNT Malay + Singlish#

Encoder model + RNNT loss for Malay + Singlish

This tutorial is available as an IPython notebook at malaya-speech/example/stt-transducer-model-2mixed.

This module is not language independent, so it is not safe to use on different languages. The pretrained models were trained on hyperlocal languages.

This is an application of malaya-speech Pipeline, read more about malaya-speech Pipeline at malaya-speech/example/pipeline.

[1]:
import os

os.environ['CUDA_VISIBLE_DEVICES'] = ''
[2]:
import malaya_speech
import numpy as np
from malaya_speech import Pipeline
`pyaudio` is not available, `malaya_speech.streaming.stream` is not able to use.
[3]:
import logging

logging.basicConfig(level=logging.INFO)

List available RNNT model#

[4]:
malaya_speech.stt.transducer.available_transformer()
INFO:malaya_speech.stt:for `malay-fleur102` language, tested on FLEURS102 `ms_my` test set, https://github.com/huseinzol05/malaya-speech/tree/master/pretrained-model/prepare-stt
INFO:malaya_speech.stt:for `malay-malaya` language, tested on malaya-speech test set, https://github.com/huseinzol05/malaya-speech/tree/master/pretrained-model/prepare-stt
INFO:malaya_speech.stt:for `singlish` language, tested on IMDA malaya-speech test set, https://github.com/huseinzol05/malaya-speech/tree/master/pretrained-model/prepare-stt
[4]:
| Model | Size (MB) | Quantized Size (MB) | malay-malaya | malay-fleur102 | Language | singlish |
| --- | --- | --- | --- | --- | --- | --- |
| tiny-conformer | 24.4 | 9.14 | {'WER': 0.2128108, 'CER': 0.08136871, 'WER-LM'... | {'WER': 0.2682816, 'CER': 0.13052725, 'WER-LM'... | [malay] | NaN |
| small-conformer | 49.2 | 18.1 | {'WER': 0.19853302, 'CER': 0.07449528, 'WER-LM... | {'WER': 0.23412149, 'CER': 0.1138314813, 'WER-... | [malay] | NaN |
| conformer | 125 | 37.1 | {'WER': 0.16340855635999124, 'CER': 0.05897205... | {'WER': 0.20090442596, 'CER': 0.09616901, 'WER... | [malay] | NaN |
| large-conformer | 404 | 107 | {'WER': 0.1566839, 'CER': 0.0619715, 'WER-LM':... | {'WER': 0.1711028238, 'CER': 0.077953559, 'WER... | [malay] | NaN |
| conformer-stack-2mixed | 130 | 38.5 | {'WER': 0.1889883954, 'CER': 0.0726845531, 'WE... | {'WER': 0.244836948, 'CER': 0.117409327, 'WER-... | [malay, singlish] | {'WER': 0.08535878149, 'CER': 0.0452357273822,... |
| small-conformer-singlish | 49.2 | 18.1 | NaN | NaN | [singlish] | {'WER': 0.087831, 'CER': 0.0456859, 'WER-LM': ... |
| conformer-singlish | 125 | 37.1 | NaN | NaN | [singlish] | {'WER': 0.07779246, 'CER': 0.0403616, 'WER-LM'... |
| large-conformer-singlish | 404 | 107 | NaN | NaN | [singlish] | {'WER': 0.07014733, 'CER': 0.03587201, 'WER-LM... |
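
available_transformer returns a pandas DataFrame indexed by model name, so the benchmark columns above can be filtered programmatically. A minimal sketch, assuming the Language column holds Python lists as the rendering above suggests:

df = malaya_speech.stt.transducer.available_transformer()
# keep only models trained on both Malay and Singlish
mixed = df[df['Language'].apply(lambda langs: 'malay' in langs and 'singlish' in langs)]
print(mixed.index.tolist())  # expected to include 'conformer-stack-2mixed'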

Load RNNT model#

def transformer(
    model: str = 'conformer',
    quantized: bool = False,
    **kwargs,
):
    """
    Load Encoder-Transducer ASR model.

    Parameters
    ----------
    model : str, optional (default='conformer')
        Check available models at `malaya_speech.stt.transducer.available_transformer()`.
    quantized : bool, optional (default=False)
        If True, will load an 8-bit quantized model.
        A quantized model is not necessarily faster; it depends entirely on the machine.

    Returns
    -------
    result : malaya_speech.model.transducer.Transducer class
    """
[5]:
model = malaya_speech.stt.transducer.transformer(model = 'conformer-stack-2mixed')
2023-02-01 11:47:45.718834: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-02-01 11:47:45.723521: E tensorflow/stream_executor/cuda/cuda_driver.cc:271] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
2023-02-01 11:47:45.723540: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:169] retrieving CUDA diagnostic information for host: husein-MS-7D31
2023-02-01 11:47:45.723543: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:176] hostname: husein-MS-7D31
2023-02-01 11:47:45.723621: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:200] libcuda reported version is: Not found: was unable to find libcuda.so DSO loaded into this program
2023-02-01 11:47:45.723640: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:204] kernel reported version is: 470.161.3

Load Quantized deep model#

To load an 8-bit quantized model, simply pass quantized = True; the default is False.

Expect a slight accuracy drop from a quantized model, and it is not necessarily faster than the normal 32-bit float model; it depends entirely on the machine.

[21]:
quantized_model = malaya_speech.stt.transducer.transformer(model = 'conformer-stack-2mixed', quantized = True)

Load sample#

[6]:
ceramah, sr = malaya_speech.load('speech/khutbah/wadi-annuar.wav')
record1, sr = malaya_speech.load('speech/record/savewav_2020-11-26_22-36-06_294832.wav')
record2, sr = malaya_speech.load('speech/record/savewav_2020-11-26_22-40-56_929661.wav')
singlish0, sr = malaya_speech.load('speech/singlish/singlish0.wav')
singlish1, sr = malaya_speech.load('speech/singlish/singlish1.wav')
singlish2, sr = malaya_speech.load('speech/singlish/singlish2.wav')
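
malaya_speech.load returns a (np.array, sample_rate) tuple, resampling to 16000 Hz by default, so a quick sanity check on each clip is just array length divided by sample rate:

# duration in seconds = number of samples / sample rate
for name, y in [('ceramah', ceramah), ('record1', record1), ('singlish0', singlish0)]:
    print(name, sr, round(len(y) / sr, 2), 'seconds')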
[7]:
import IPython.display as ipd

ipd.Audio(ceramah, rate = sr)
[7]:
[8]:
ipd.Audio(record1, rate = sr)
[8]:
[9]:
ipd.Audio(record2, rate = sr)
[9]:
[10]:
ipd.Audio(singlish0, rate = sr)
[10]:
[11]:
ipd.Audio(singlish1, rate = sr)
[11]:
[12]:
ipd.Audio(singlish2, rate = sr)
[12]:

Predict using greedy decoder#

def greedy_decoder(self, inputs):
    """
    Transcribe inputs using greedy decoder.

    Parameters
    ----------
    inputs: List[np.array]
        List[np.array] or List[malaya_speech.model.frame.Frame].

    Returns
    -------
    result: List[str]
    """
[7]:
%%time

model.greedy_decoder([ceramah, record1, record2, singlish0, singlish1, singlish2])
CPU times: user 10.9 s, sys: 3.22 s, total: 14.2 s
Wall time: 9.24 s
[7]:
['jadi dalam perjalanan ini dunia yang susah ini ketika nabi mengajar muaz bin jabal tadi ni allahu',
 'helo nama saya mesin saya tak suka mandi ke tak saya masak',
 'helo nama saya husin saya suka mandi saya mandi titik hari',
 'and then see how they roll it in film okay actually',
 'then you talk to your eyes',
 'surprise in ma']
[13]:
%%time

quantized_model.greedy_decoder([ceramah, record1, record2, singlish0, singlish1, singlish2])
CPU times: user 11 s, sys: 3.28 s, total: 14.3 s
Wall time: 9.53 s
[13]:
['jadi dalam perjalanan ini dunia yang susah ini ketika nabi mengajar muaz bin jabal tadi ni allah mah',
 'hello nama saya usin saya tak suka mandi ketat saya masam',
 'hello nama saya husin saya suka mandi saya mandi titik hari',
 'and then see how they roll it in film okay actually',
 'then you tell to your eyes',
 'seven seven in mall']
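
The quantized transcripts drift slightly from the float ones (for example 'surprise in ma' versus 'seven seven in mall'). To quantify the gap on your own audio, a plain Levenshtein-based character error rate is enough; this helper is illustrative only and not part of malaya-speech:

def levenshtein(a, b):
    # classic dynamic-programming edit distance
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = curr
    return prev[-1]

def cer(reference, hypothesis):
    return levenshtein(reference, hypothesis) / max(len(reference), 1)

float_out = model.greedy_decoder([singlish0])[0]
quant_out = quantized_model.greedy_decoder([singlish0])[0]
print(cer(float_out, quant_out))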

Predict using beam decoder#

def beam_decoder(self, inputs, beam_width: int = 5,
                 temperature: float = 0.0,
                 score_norm: bool = True):
    """
    Transcribe inputs using beam decoder.

    Parameters
    ----------
    inputs: List[np.array]
        List[np.array] or List[malaya_speech.model.frame.Frame].
    beam_width: int, optional (default=5)
        beam size for beam decoder.
    temperature: float, optional (default=0.0)
        applies Gumbel noise scaled by temperature to the logits, which can help in certain cases,
        logits += -np.log(-np.log(uniform_noise_shape_logits)) * temperature
    score_norm: bool, optional (default=True)
        sort beams in descending order by score normalized by decoded length.

    Returns
    -------
    result: List[str]
    """
[15]:
%%time

model.beam_decoder([ceramah, record1, record2, singlish0, singlish1, singlish2], beam_width = 5)
CPU times: user 22.3 s, sys: 3.5 s, total: 25.8 s
Wall time: 13.8 s
[15]:
['jadi dalam perjalanan ini dunia yang susah ini ketika nabi mengajar muaz bin jabal tadi ni allah mah',
 'helo nama saya usin saya tak suka mandi ketat saya masam',
 'helo nama saya husin saya suka mandi saya mandi titik hari',
 'and then see how they roll it in film okay actually',
 'then you tell to your eyes',
 'some person in mao']
[17]:
%%time

quantized_model.beam_decoder([ceramah, record1, record2, singlish0, singlish1, singlish2], beam_width = 5)
CPU times: user 21.2 s, sys: 3.14 s, total: 24.3 s
Wall time: 12.6 s
[17]:
['jadi dalam perjalanan ini dunia yang susah ini ketika nabi mengajar muaz bin jabal tadi ni allah mak',
 'helo nama saya husin saya tak suka mandi ketat saya masam',
 'helo nama saya husin saya suka mandi saya mandi titik hari',
 'and then see how they roll it in film okay actually',
 'then you tell to your eyes',
 'some person in mao']

The RNNT beam decoder is not able to utilise batch programming; if fed a batch, it will process the inputs one by one.
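
Since decoding is sequential either way, looping over the inputs one at a time is equivalent to passing the whole batch, as in this sketch:

results = []
for audio in [ceramah, record1, record2, singlish0, singlish1, singlish2]:
    # each call decodes a single utterance; total work matches the batched call
    results.extend(model.beam_decoder([audio], beam_width = 5))
print(results)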

Predict alignment#

We want to know when the speakers speak certain words, so we can use predict_alignment,

def predict_alignment(self, input, combined = True):
    """
    Transcribe input and get timestamps; only supports greedy decoder.

    Parameters
    ----------
    input: np.array
        np.array or malaya_speech.model.frame.Frame.
    combined: bool, optional (default=True)
        If True, will combine subwords into whole words.

    Returns
    -------
    result: List[Dict[text, start, end]]
    """
[18]:
%%time

model.predict_alignment(singlish0)
CPU times: user 6.09 s, sys: 2.16 s, total: 8.25 s
Wall time: 7.3 s
[18]:
[{'text': 'and', 'start': 0.2, 'end': 0.21},
 {'text': 'then', 'start': 0.36, 'end': 0.45},
 {'text': 'see', 'start': 0.6, 'end': 0.77},
 {'text': 'how', 'start': 0.88, 'end': 1.09},
 {'text': 'they', 'start': 1.36, 'end': 1.49},
 {'text': 'roll', 'start': 1.92, 'end': 2.09},
 {'text': 'it', 'start': 2.2, 'end': 2.21},
 {'text': 'in', 'start': 2.4, 'end': 2.41},
 {'text': 'film', 'start': 2.6, 'end': 2.85},
 {'text': 'okay', 'start': 3.68, 'end': 3.85},
 {'text': 'actually', 'start': 3.92, 'end': 4.21}]
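
With combined = True the text fields are whole words, so joining them should reproduce the greedy transcript for the same audio:

alignment = model.predict_alignment(singlish0)
print(' '.join(entry['text'] for entry in alignment))
# and then see how they roll it in film okay actually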
[19]:
%%time

model.predict_alignment(singlish0, combined = False)
CPU times: user 1.07 s, sys: 165 ms, total: 1.24 s
Wall time: 284 ms
[19]:
[{'text': 'and', 'start': 0.2, 'end': 0.21},
 {'text': ' ', 'start': 0.28, 'end': 0.29},
 {'text': 'the', 'start': 0.36, 'end': 0.37},
 {'text': 'n_', 'start': 0.44, 'end': 0.45},
 {'text': 'se', 'start': 0.6, 'end': 0.61},
 {'text': 'e_', 'start': 0.76, 'end': 0.77},
 {'text': 'ho', 'start': 0.88, 'end': 0.89},
 {'text': 'w_', 'start': 1.08, 'end': 1.09},
 {'text': 'the', 'start': 1.36, 'end': 1.37},
 {'text': 'y_', 'start': 1.48, 'end': 1.49},
 {'text': 'ro', 'start': 1.92, 'end': 1.93},
 {'text': 'll_', 'start': 2.08, 'end': 2.09},
 {'text': 'it_', 'start': 2.2, 'end': 2.21},
 {'text': 'in_', 'start': 2.4, 'end': 2.41},
 {'text': 'fi', 'start': 2.6, 'end': 2.61},
 {'text': 'l', 'start': 2.76, 'end': 2.77},
 {'text': 'm_', 'start': 2.84, 'end': 2.85},
 {'text': 'oka', 'start': 3.68, 'end': 3.69},
 {'text': 'y_', 'start': 3.84, 'end': 3.85},
 {'text': 'ac', 'start': 3.92, 'end': 3.93},
 {'text': 'tu', 'start': 4.04, 'end': 4.05},
 {'text': 'all', 'start': 4.16, 'end': 4.17},
 {'text': 'y', 'start': 4.2, 'end': 4.21}]
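
A word-level alignment maps naturally onto subtitle formats. As a sketch, a hypothetical to_srt helper (not part of malaya-speech) that groups words into short SRT cues:

def srt_timestamp(seconds):
    # SRT timestamps use HH:MM:SS,mmm
    ms = int(round(seconds * 1000))
    h, ms = divmod(ms, 3600000)
    m, ms = divmod(ms, 60000)
    s, ms = divmod(ms, 1000)
    return f'{h:02d}:{m:02d}:{s:02d},{ms:03d}'

def to_srt(alignment, words_per_cue = 4):
    cues = []
    for i in range(0, len(alignment), words_per_cue):
        chunk = alignment[i:i + words_per_cue]
        text = ' '.join(entry['text'] for entry in chunk)
        cues.append(
            f"{i // words_per_cue + 1}\n"
            f"{srt_timestamp(chunk[0]['start'])} --> {srt_timestamp(chunk[-1]['end'])}\n"
            f"{text}\n"
        )
    return '\n'.join(cues)

print(to_srt(model.predict_alignment(singlish0)))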