Speech-to-Text RNNT#

Encoder model + RNNT loss

This tutorial is available as an IPython notebook at malaya-speech/example/stt-transducer-model.

This module is language dependent, so it is not safe to use on languages other than those it was trained on. The pretrained models are trained on hyperlocal languages.

[1]:
import malaya_speech
import numpy as np
from malaya_speech import Pipeline

List available RNNT model#

[2]:
malaya_speech.stt.available_transducer()
[2]:
Model                     Size (MB)  Quantized Size (MB)  WER       CER       WER-LM    CER-LM    Language
tiny-conformer            24.4       9.14                 0.212811  0.081369  0.199683  0.077004  [malay]
small-conformer           49.2       18.1                 0.198533  0.074495  0.185361  0.071143  [malay]
conformer                 125        37.1                 0.163602  0.058744  0.156182  0.05719   [malay]
large-conformer           404        107                  0.156684  0.061971  0.148622  0.05901   [malay]
conformer-stack-2mixed    130        38.5                 0.103608  0.050069  0.102911  0.050201  [malay, singlish]
conformer-stack-3mixed    130        38.5                 0.234768  0.133944  0.229241  0.130702  [malay, singlish, mandarin]
small-conformer-singlish  49.2       18.1                 0.087831  0.045686  0.087333  0.045317  [singlish]
conformer-singlish        125        37.1                 0.077792  0.040362  0.077186  0.03987   [singlish]
large-conformer-singlish  404        107                  0.070147  0.035872  0.069812  0.035723  [singlish]

Lower is better. The mixed-language models were tested on a different dataset.
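The benchmark table can also be queried programmatically; a minimal sketch that picks the lowest-WER model per language, with the WER numbers copied from the table above (single-language models only, dict layout is illustrative):

```python
# WER scores copied from the benchmark table above (single-language models).
WER = {
    'tiny-conformer': ('malay', 0.212811),
    'small-conformer': ('malay', 0.198533),
    'conformer': ('malay', 0.163602),
    'large-conformer': ('malay', 0.156684),
    'small-conformer-singlish': ('singlish', 0.087831),
    'conformer-singlish': ('singlish', 0.077792),
    'large-conformer-singlish': ('singlish', 0.070147),
}

def best_model(language):
    # Return the model name with the lowest WER for the requested language.
    candidates = {m: wer for m, (lang, wer) in WER.items() if lang == language}
    return min(candidates, key=candidates.get)

print(best_model('malay'))     # large-conformer
print(best_model('singlish'))  # large-conformer-singlish
```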

Google Speech-to-Text accuracy#

We tested malaya-speech models and Google Speech-to-Text on the same Malay dataset; check the notebook at benchmark-google-speech-malay-dataset.ipynb.

[3]:
malaya_speech.stt.google_accuracy
[3]:
{'malay': {'WER': 0.164775, 'CER': 0.059732},
 'singlish': {'WER': 0.4941349, 'CER': 0.3026296}}

Although ``large-conformer`` beats Google Speech-to-Text accuracy, we should remain skeptical of these scores: the test set and postprocessing might favour malaya-speech.
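As a quick sanity check, the relative WER improvement over Google can be computed directly from the numbers above:

```python
# WER scores copied from the tables above.
google_wer = {'malay': 0.164775, 'singlish': 0.4941349}
model_wer = {'large-conformer': 0.156684, 'large-conformer-singlish': 0.070147}

def relative_improvement(model, language):
    # Fractional WER reduction relative to Google Speech-to-Text.
    return (google_wer[language] - model_wer[model]) / google_wer[language]

# The Malay improvement is marginal; the Singlish improvement is large.
print(round(relative_improvement('large-conformer', 'malay'), 3))               # 0.049
print(round(relative_improvement('large-conformer-singlish', 'singlish'), 3))   # 0.858
```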

Load RNNT model#

def deep_transducer(
    model: str = 'conformer', quantized: bool = False, **kwargs
):
    """
    Load Encoder-Transducer ASR model.

    Parameters
    ----------
    model : str, optional (default='conformer')
        Model architecture supported. Allowed values:

        * ``'tiny-conformer'`` - TINY size Google Conformer.
        * ``'small-conformer'`` - SMALL size Google Conformer.
        * ``'conformer'`` - BASE size Google Conformer.
        * ``'large-conformer'`` - LARGE size Google Conformer.
        * ``'conformer-stack-2mixed'`` - BASE size Stacked Google Conformer for (Malay + Singlish) languages.
        * ``'conformer-stack-3mixed'`` - BASE size Stacked Google Conformer for (Malay + Singlish + Mandarin) languages.
        * ``'small-conformer-singlish'`` - SMALL size Google Conformer for singlish language.
        * ``'conformer-singlish'`` - BASE size Google Conformer for singlish language.
        * ``'large-conformer-singlish'`` - LARGE size Google Conformer for singlish language.

    quantized : bool, optional (default=False)
        if True, will load 8-bit quantized model.
        A quantized model is not necessarily faster; it depends entirely on the machine.

    Returns
    -------
    result : malaya_speech.model.tf.Transducer class
    """
[30]:
small_model = malaya_speech.stt.deep_transducer(model = 'small-conformer')
model = malaya_speech.stt.deep_transducer(model = 'conformer')

Load Quantized deep model#

To load the 8-bit quantized model, simply pass quantized = True; the default is False.

We can expect a slight accuracy drop from the quantized model, and it is not necessarily faster than the normal 32-bit float model; it depends entirely on the machine.

[6]:
quantized_small_model = malaya_speech.stt.deep_transducer(model = 'small-conformer', quantized = True)
quantized_model = malaya_speech.stt.deep_transducer(model = 'conformer', quantized = True)
WARNING:root:Load quantized model will cause accuracy drop.
WARNING:root:Load quantized model will cause accuracy drop.

Load sample#

[7]:
ceramah, sr = malaya_speech.load('speech/khutbah/wadi-annuar.wav')
record1, sr = malaya_speech.load('speech/record/savewav_2020-11-26_22-36-06_294832.wav')
record2, sr = malaya_speech.load('speech/record/savewav_2020-11-26_22-40-56_929661.wav')
shafiqah_idayu, sr = malaya_speech.load('speech/example-speaker/shafiqah-idayu.wav')
mas_aisyah, sr = malaya_speech.load('speech/example-speaker/mas-aisyah.wav')
khalil, sr = malaya_speech.load('speech/example-speaker/khalil-nooh.wav')
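malaya_speech.load returns the waveform as a float array together with its sample rate, so clip duration follows directly from the array length. A minimal sketch with a synthetic tone standing in for a loaded file (the 16 kHz rate is an assumption matching the models' expected input):

```python
import numpy as np

sr = 16000  # assumed sample rate of the loaded files
# A synthetic 2-second, 440 Hz tone standing in for e.g. `ceramah`.
t = np.arange(2 * sr) / sr
signal = np.sin(2 * np.pi * 440 * t).astype(np.float32)

duration = len(signal) / sr  # duration in seconds
print(duration)  # 2.0
```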
[8]:
import IPython.display as ipd

ipd.Audio(ceramah, rate = sr)
[8]:

As we can hear, the speaker speaks in the Kedahan dialect with some Arabic words; let's see how well our model performs.

[9]:
ipd.Audio(record1, rate = sr)
[9]:
[10]:
ipd.Audio(record2, rate = sr)
[10]:
[11]:
ipd.Audio(shafiqah_idayu, rate = sr)
[11]:
[12]:
ipd.Audio(mas_aisyah, rate = sr)
[12]:
[13]:
ipd.Audio(khalil, rate = sr)
[13]:

Predict using greedy decoder#

def greedy_decoder(self, inputs):
    """
    Transcribe inputs using greedy decoder.

    Parameters
    ----------
    inputs: List[np.array]
        List[np.array] or List[malaya_speech.model.frame.Frame].

    Returns
    -------
    result: List[str]
    """
[14]:
%%time

small_model.greedy_decoder([ceramah, record1, record2, shafiqah_idayu, mas_aisyah, khalil])
CPU times: user 7.09 s, sys: 1.8 s, total: 8.88 s
Wall time: 6.05 s
[14]:
['jadi dalam perjalanan ini dunia yang susah ini ketika nabi mengajar muaz bin jabal tadi ni allah maha ini',
 'helo nama saya husin saya tak suka mandi ketat saya masak',
 'helo nama saya husin saya suka mandi saya mandi tetek hari',
 'nama saya syafiqah hidayah',
 'sebut perkataan uncle',
 'tolong sebut anti kata']
[15]:
%%time

model.greedy_decoder([ceramah, record1, record2, shafiqah_idayu, mas_aisyah, khalil])
CPU times: user 11.3 s, sys: 3.47 s, total: 14.8 s
Wall time: 11.2 s
[15]:
['jadi dalam perjalanan ini dunia yang susah ini ketika nabi mengajar muaz bin jabal tadi ni alah maaf ini',
 'helo nama saya send saya tak suka mandi ke tak saya masam',
 'helo nama saya husin saya suka mandi saya mandi setiap hari',
 'nama saya syafiqah idayu',
 'sebut perkataan angka',
 'tolong sebut antika']
[16]:
%%time

quantized_small_model.greedy_decoder([ceramah, record1, record2, shafiqah_idayu, mas_aisyah, khalil])
CPU times: user 6.58 s, sys: 1.34 s, total: 7.92 s
Wall time: 5.24 s
[16]:
['jadi dalam perjalanan ini dunia yang susah ini ketika nabi mengajar muaz bin jabal tadi ni allah maha ini',
 'helo nama saya husin saya tak suka mandi ketat saya masak',
 'helo nama saya husin saya suka mandi saya mandi tetek hari',
 'nama saya syafiqah hidayah',
 'sebut perkataan uncle',
 'tolong sebut anti kata']
[17]:
%%time

quantized_model.greedy_decoder([ceramah, record1, record2, shafiqah_idayu, mas_aisyah, khalil])
CPU times: user 10.8 s, sys: 3.02 s, total: 13.8 s
Wall time: 8.91 s
[17]:
['jadi dalam perjalanan ini dunia yang susah ini ketika nabi mengajar muaz bin jabal tadi ni alah maaf ini',
 'helo nama saya send saya tak suka mandi ke tak saya masam',
 'helo nama saya pusing saya suka mandi saya mandi setiap hari',
 'nama saya syafiqah idayu',
 'sebut perkataan angka',
 'tolong sebut antika']

Predict using beam decoder#

def beam_decoder(self, inputs, beam_width: int = 5,
                 temperature: float = 0.0,
                 score_norm: bool = True):
    """
    Transcribe inputs using beam decoder.

    Parameters
    ----------
    inputs: List[np.array]
        List[np.array] or List[malaya_speech.model.frame.Frame].
    beam_width: int, optional (default=5)
        beam size for beam decoder.
    temperature: float, optional (default=0.0)
        apply a temperature function to the logits; can help in certain cases,
        logits += -np.log(-np.log(uniform_noise_shape_logits)) * temperature
    score_norm: bool, optional (default=True)
        sort beams in descending order by score / length of decoded output.

    Returns
    -------
    result: List[str]
    """
[18]:
%%time

small_model.beam_decoder([ceramah, record1, record2, shafiqah_idayu, mas_aisyah, khalil], beam_width = 5)
CPU times: user 11.2 s, sys: 1.97 s, total: 13.2 s
Wall time: 8.14 s
[18]:
['jadi dalam perjalanan ini dunia yang susah ini ketika nabi mengajar muaz bin jabal tadi ni allah maha ini',
 'helo nama saya pusing saya tak suka mandi ketat saya masak',
 'helo nama saya husin saya suka mandi saya mandi tetek hari',
 'nama saya syafiqah hidayah',
 'sebut perkataan uncle',
 'tolong sebut anti kata']
[19]:
%%time

model.beam_decoder([ceramah, record1, record2, shafiqah_idayu, mas_aisyah, khalil], beam_width = 5)
CPU times: user 21.3 s, sys: 3.23 s, total: 24.6 s
Wall time: 13.6 s
[19]:
['jadi dalam perjalanan ini dunia yang susah ini ketika nabi mengajar muaz bin jabal tadi ni alah maaf ini',
 'helo nama saya pusing saya tak suka mandi ke tak saya masam',
 'helo nama saya husin saya suka mandi saya mandi tiap tiap hari',
 'nama saya syafiqah idayu',
 'sebut perkataan angka',
 'tolong sebut antika']
[20]:
%%time

quantized_small_model.beam_decoder([ceramah, record1, record2, shafiqah_idayu, mas_aisyah, khalil], beam_width = 5)
CPU times: user 10.6 s, sys: 1.71 s, total: 12.3 s
Wall time: 7.53 s
[20]:
['jadi dalam perjalanan ini dunia yang susah ini ketika nabi mengajar muaz bin jabal tadi ni allah maha ini',
 'helo nama saya pusing saya tak suka mandi ketat saya masak',
 'helo nama saya husin saya suka mandi saya mandi tetek hari',
 'nama saya syafiqah hidayah',
 'sebut perkataan uncle',
 'tolong sebut anti kata']
[22]:
%%time

quantized_model.beam_decoder([ceramah, record1, record2, shafiqah_idayu, mas_aisyah, khalil], beam_width = 5)
CPU times: user 16.8 s, sys: 1.67 s, total: 18.5 s
Wall time: 6.45 s
[22]:
['jadi dalam perjalanan ini dunia yang susah ini ketika nabi mengajar muaz bin jabal tadi ni alah maaf ini',
 'helo nama saya pusing saya tak suka mandi ke tak saya masam',
 'helo nama saya pusing saya suka mandi saya mandi tiap tiap hari',
 'nama saya syafiqah idayu',
 'sebut perkataan angka',
 'tolong sebut antika']

The RNNT beam decoder cannot utilise batch processing; if you feed a batch, it will process the inputs one by one.
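Under the hood, batched beam decoding therefore amounts to a per-utterance loop, roughly as below (the stub decoder is purely illustrative; in practice the model's ``beam_decoder`` fills that role):

```python
def beam_decode_batch(decode_one, inputs, beam_width=5):
    # No batched kernel for RNNT beam search: decode each utterance in turn.
    return [decode_one(x, beam_width) for x in inputs]

# Stub standing in for a single-utterance beam decode.
def fake_decode(audio, beam_width):
    return f'transcript of {len(audio)} samples (beam={beam_width})'

batch = [[0.0] * 160, [0.0] * 320]
print(beam_decode_batch(fake_decode, batch))
```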

Predict alignment#

We want to know when the speaker utters certain words, so we can use predict_alignment,

def predict_alignment(self, input, combined = True):
    """
    Transcribe input and get timestamps; only supports the greedy decoder.

    Parameters
    ----------
    input: np.array
        np.array or malaya_speech.model.frame.Frame.
    combined: bool, optional (default=True)
        If True, will combine subwords into words.

    Returns
    -------
    result: List[Dict[text, start, end]]
    """
[23]:
%%time

small_model.predict_alignment(shafiqah_idayu)
CPU times: user 3.7 s, sys: 784 ms, total: 4.48 s
Wall time: 4.11 s
[23]:
[{'text': 'nama', 'start': 0.28, 'end': 0.57},
 {'text': 'saya', 'start': 0.68, 'end': 0.97},
 {'text': 'syafiqah', 'start': 1.28, 'end': 1.69},
 {'text': 'idri', 'start': 1.8, 'end': 2.01}]
[24]:
%%time

small_model.predict_alignment(shafiqah_idayu, combined = False)
CPU times: user 405 ms, sys: 84.8 ms, total: 489 ms
Wall time: 128 ms
[24]:
[{'text': 'nam', 'start': 0.28, 'end': 0.29},
 {'text': 'a_', 'start': 0.56, 'end': 0.57},
 {'text': 'say', 'start': 0.68, 'end': 0.69},
 {'text': 'a_', 'start': 0.96, 'end': 0.97},
 {'text': 'sya', 'start': 1.28, 'end': 1.29},
 {'text': 'fi', 'start': 1.44, 'end': 1.45},
 {'text': 'q', 'start': 1.52, 'end': 1.53},
 {'text': 'ah_', 'start': 1.68, 'end': 1.69},
 {'text': 'id', 'start': 1.8, 'end': 1.81},
 {'text': 'ri', 'start': 2.0, 'end': 2.01}]
[25]:
%%time

small_model.predict_alignment(ceramah)
CPU times: user 1.25 s, sys: 324 ms, total: 1.58 s
Wall time: 348 ms
[25]:
[{'text': 'jadi', 'start': 0.36, 'end': 0.53},
 {'text': 'dalam', 'start': 0.6, 'end': 0.73},
 {'text': 'perjalanan', 'start': 0.84, 'end': 1.33},
 {'text': 'ini', 'start': 1.4, 'end': 1.41},
 {'text': 'dunia', 'start': 2.44, 'end': 2.65},
 {'text': 'yang', 'start': 2.76, 'end': 2.81},
 {'text': 'susah', 'start': 2.88, 'end': 3.13},
 {'text': 'ini', 'start': 3.24, 'end': 3.25},
 {'text': 'ketika', 'start': 5.64, 'end': 5.85},
 {'text': 'nabi', 'start': 6.12, 'end': 6.37},
 {'text': 'mengajar', 'start': 6.44, 'end': 6.81},
 {'text': 'muaz', 'start': 6.96, 'end': 7.21},
 {'text': 'bin', 'start': 7.28, 'end': 7.29},
 {'text': 'jabal', 'start': 7.44, 'end': 7.73},
 {'text': 'tadi', 'start': 7.84, 'end': 8.05},
 {'text': 'ni', 'start': 8.12, 'end': 8.13},
 {'text': 'allah', 'start': 8.52, 'end': 8.69},
 {'text': 'maha', 'start': 8.8, 'end': 9.01},
 {'text': 'ini', 'start': 9.4, 'end': 9.41}]
[26]:
%%time

model.predict_alignment(shafiqah_idayu)
CPU times: user 5.89 s, sys: 2.04 s, total: 7.93 s
Wall time: 7.37 s
[26]:
[{'text': 'nama', 'start': 0.28, 'end': 0.57},
 {'text': 'saya', 'start': 0.64, 'end': 0.97},
 {'text': 'syafiqah', 'start': 1.28, 'end': 1.69},
 {'text': 'idayu', 'start': 1.8, 'end': 2.05}]
[27]:
%%time

model.predict_alignment(ceramah)
CPU times: user 2.35 s, sys: 515 ms, total: 2.87 s
Wall time: 593 ms
[27]:
[{'text': 'jadi', 'start': 0.36, 'end': 0.53},
 {'text': 'dalam', 'start': 0.6, 'end': 0.73},
 {'text': 'perjalanan', 'start': 0.8, 'end': 1.29},
 {'text': 'ini', 'start': 1.4, 'end': 1.41},
 {'text': 'dunia', 'start': 2.44, 'end': 2.65},
 {'text': 'yang', 'start': 2.72, 'end': 2.81},
 {'text': 'susah', 'start': 2.88, 'end': 3.13},
 {'text': 'ini', 'start': 3.24, 'end': 3.25},
 {'text': 'ketika', 'start': 5.64, 'end': 5.85},
 {'text': 'nabi', 'start': 6.12, 'end': 6.37},
 {'text': 'mengajar', 'start': 6.44, 'end': 6.81},
 {'text': 'muaz', 'start': 6.96, 'end': 7.21},
 {'text': 'bin', 'start': 7.28, 'end': 7.29},
 {'text': 'jabal', 'start': 7.44, 'end': 7.73},
 {'text': 'tadi', 'start': 7.84, 'end': 8.05},
 {'text': 'ni', 'start': 8.12, 'end': 8.13},
 {'text': 'alah', 'start': 8.52, 'end': 8.69},
 {'text': 'maaf', 'start': 8.8, 'end': 9.01}]
[28]:
%%time

quantized_small_model.predict_alignment(shafiqah_idayu)
CPU times: user 3.82 s, sys: 743 ms, total: 4.56 s
Wall time: 4.19 s
[28]:
[{'text': 'nama', 'start': 0.28, 'end': 0.57},
 {'text': 'saya', 'start': 0.68, 'end': 0.97},
 {'text': 'syafiqah', 'start': 1.28, 'end': 1.69},
 {'text': 'idri', 'start': 1.8, 'end': 2.01}]
[29]:
%%time

quantized_model.predict_alignment(shafiqah_idayu)
CPU times: user 5.59 s, sys: 1.85 s, total: 7.44 s
Wall time: 6.8 s
[29]:
[{'text': 'nama', 'start': 0.28, 'end': 0.57},
 {'text': 'saya', 'start': 0.64, 'end': 0.97},
 {'text': 'syafiqah', 'start': 1.28, 'end': 1.69},
 {'text': 'id', 'start': 1.8, 'end': 1.81}]
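The alignment output is a list of plain dicts, so it converts easily to subtitle-style timestamps; a small sketch using the ``conformer`` alignment result above (the formatting helper is illustrative, not part of the library):

```python
def to_timestamp(seconds):
    # Format seconds as an SRT-style HH:MM:SS,mmm timestamp.
    ms = int(round(seconds * 1000))
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f'{h:02d}:{m:02d}:{s:02d},{ms:03d}'

# Alignment copied from the `conformer` predict_alignment output above.
alignment = [
    {'text': 'nama', 'start': 0.28, 'end': 0.57},
    {'text': 'saya', 'start': 0.64, 'end': 0.97},
    {'text': 'syafiqah', 'start': 1.28, 'end': 1.69},
    {'text': 'idayu', 'start': 1.8, 'end': 2.05},
]
for word in alignment:
    print(f"{to_timestamp(word['start'])} --> {to_timestamp(word['end'])}  {word['text']}")
```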