Speech-to-Text RNNT PyTorch#

Encoder model + RNNT loss using PyTorch

This tutorial is available as an IPython notebook at malaya-speech/example/stt-transducer-model-pt.

This module is not language independent, so it is not safe to use on different languages. The pretrained models are trained on hyperlocal languages.

[1]:
import os

os.environ['CUDA_VISIBLE_DEVICES'] = ''
[2]:
import malaya_speech
import numpy as np
from malaya_speech import Pipeline
`pyaudio` is not available, `malaya_speech.streaming.pyaudio` is not able to use.
[3]:
import logging

logging.basicConfig(level=logging.INFO)

List available RNNT models#

[4]:
malaya_speech.stt.transducer.available_pt_transformer()
INFO:malaya_speech.stt:for `malay-fleur102` language, tested on FLEURS102 `ms_my` test set, https://github.com/huseinzol05/malaya-speech/tree/master/pretrained-model/prepare-stt
INFO:malaya_speech.stt:for `malay-malaya` language, tested on malaya-speech test set, https://github.com/huseinzol05/malaya-speech/tree/master/pretrained-model/prepare-stt
INFO:malaya_speech.stt:for `singlish` language, tested on IMDA malaya-speech test set, https://github.com/huseinzol05/malaya-speech/tree/master/pretrained-model/prepare-stt
INFO:malaya_speech.stt:for `whisper-mixed` language, tested on semisupervised Whisper Large V2 test set, https://github.com/huseinzol05/malaya-speech/tree/master/pretrained-model/prepare-stt
[4]:
Model | Size (MB) | malay-malaya | malay-fleur102 | Language | singlish | whisper-mixed
mesolitica/conformer-tiny | 38.5 | {'WER': 0.17341180814, 'CER': 0.05957485024} | {'WER': 0.19524478979, 'CER': 0.0830808938} | [malay] | NaN | NaN
mesolitica/conformer-base | 121 | {'WER': 0.122076123261, 'CER': 0.03879606324} | {'WER': 0.1326737206665, 'CER': 0.05032914857} | [malay] | NaN | NaN
mesolitica/conformer-medium | 243 | {'WER': 0.1054817492564, 'CER': 0.0313518992842} | {'WER': 0.1172708897486, 'CER': 0.0431050488} | [malay] | NaN | NaN
mesolitica/emformer-base | 162 | {'WER': 0.175762423786, 'CER': 0.06233919000537} | {'WER': 0.18303839134, 'CER': 0.0773853362} | [malay] | NaN | NaN
mesolitica/conformer-base-singlish | 121 | NaN | NaN | [singlish] | {'WER': 0.06517537334361, 'CER': 0.03265430876} | NaN
mesolitica/conformer-medium-mixed | 243 | {'WER': 0.111166517935, 'CER': 0.03410958328} | {'WER': 0.108354748, 'CER': 0.037785722} | [malay, singlish] | {'WER': 0.091969755225, 'CER': 0.044627194623} | NaN
mesolitica/conformer-medium-malay-whisper | 243 | {'WER': 0.092561502, 'CER': 0.0245421736} | {'WER': 0.097128574, 'CER': 0.03392603} | [malay, mixed] | NaN | {'WER': 0.1705298134, 'CER': 0.10580679153}
mesolitica/conformer-large-malay-whisper | 413 | {'WER': 0.10028492039, 'CER': 0.0310868406} | {'WER': 0.09544850396, 'CER': 0.03258454692} | [malay, mixed] | NaN | {'WER': 0.20429079189, 'CER': 0.12111372327}
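
The listing above appears as a pandas-style table whose score cells hold {'WER': ..., 'CER': ...} dicts (or NaN). Assuming `available_pt_transformer()` indeed returns such a DataFrame, a minimal sketch for ranking the Malay models by WER:

df = malaya_speech.stt.transducer.available_pt_transformer()

# keep only models with a malay-malaya score and rank them by WER
scored = df[df['malay-malaya'].notnull()].copy()
scored['malay-malaya WER'] = scored['malay-malaya'].apply(lambda s: s['WER'])
print(scored.sort_values('malay-malaya WER')[['Size (MB)', 'malay-malaya WER']])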
[5]:
malaya_speech.stt.google_accuracy
[5]:
{'malay-malaya': {'WER': 0.16477548774, 'CER': 0.05973209121},
 'malay-fleur102': {'WER': 0.109588779, 'CER': 0.047891527},
 'singlish': {'WER': 0.4941349, 'CER': 0.3026296}}
[6]:
malaya_speech.stt.whisper_accuracy
[6]:
{'tiny': {'Size (MB)': 72.1,
  'malay-malaya': {'WER': 0.7897730947, 'CER': 0.341671582346},
  'malay-fleur102': {'WER': 0.640224185, 'CER': 0.2869274323},
  'singlish': {'WER': 0.4751720563, 'CER': 0.35132630877}},
 'base': {'Size (MB)': 139,
  'malay-malaya': {'WER': 0.5138481614, 'CER': 0.19487665487},
  'malay-fleur102': {'WER': 0.4268323797, 'CER': 0.1545261803},
  'singlish': {'WER': 0.5354453439, 'CER': 0.4287910359}},
 'small': {'Size (MB)': 461,
  'malay-malaya': {'WER': 0.2818371132, 'CER': 0.09588120693},
  'malay-fleur102': {'WER': 0.2436472703, 'CER': 0.0913692568},
  'singlish': {'WER': 0.5971608337, 'CER': 0.5003890601}},
 'medium': {'Size (MB)': 1400,
  'malay-malaya': {'WER': 0.18945585961, 'CER': 0.0658303076},
  'malay-fleur102': {'WER': 0.1647166507, 'CER': 0.065537127},
  'singlish': {'WER': 0.68563087121, 'CER': 0.601676254253}},
 'large-v2': {'Size (MB)': 2900,
  'malay-malaya': {'WER': 0.1585939185, 'CER': 0.054978161091},
  'malay-fleur102': {'WER': 0.127483122485, 'CER': 0.05648688907},
  'singlish': {'WER': 0.6174993839, 'CER': 0.54582068858}}}

You should be skeptical of the Google and Whisper accuracies: the test sets have been run through malaya-speech postprocessing, which can inflate their WER and CER.
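
WER (word error rate) and CER (character error rate) throughout this page are plain edit-distance error rates. A minimal sketch of how they can be computed, not the exact postprocessing malaya-speech applies to its test sets:

def edit_distance(ref, hyp):
    # classic dynamic-programming Levenshtein distance over a sequence of tokens
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[len(ref)][len(hyp)]

def wer(reference, hypothesis):
    # word-level edits divided by the number of reference words
    ref, hyp = reference.split(), hypothesis.split()
    return edit_distance(ref, hyp) / len(ref)

def cer(reference, hypothesis):
    # character-level edits divided by the number of reference characters
    return edit_distance(list(reference), list(hypothesis)) / len(reference)

print(wer('nama saya syafiqah idayu', 'nama saya syafiqa idayu'))
print(cer('nama saya syafiqah idayu', 'nama saya syafiqa idayu'))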

Load RNNT model#

def pt_transformer(
    model: str = 'mesolitica/conformer-base',
    **kwargs,
):
    """
    Load Encoder-Transducer ASR model using Pytorch.

    Parameters
    ----------
    model : str, optional (default='mesolitica/conformer-base')
        Check available models at `malaya_speech.stt.transducer.available_pt_transformer()`.

    Returns
    -------
    result : malaya_speech.torch_model.torchaudio.Conformer class
    """
[6]:
model = malaya_speech.stt.transducer.pt_transformer(model = 'mesolitica/conformer-base')
INFO:malaya_boilerplate.huggingface:downloading frozen mesolitica/conformer-base/model.pt
INFO:malaya_boilerplate.huggingface:downloading frozen mesolitica/conformer-base/malay-stt.model
INFO:malaya_boilerplate.huggingface:downloading frozen mesolitica/conformer-base/malay-stats.json
[7]:
model_mixed = malaya_speech.stt.transducer.pt_transformer(model = 'mesolitica/conformer-medium-mixed')
INFO:malaya_boilerplate.huggingface:downloading frozen mesolitica/conformer-medium-mixed/model.pt
INFO:malaya_boilerplate.huggingface:downloading frozen mesolitica/conformer-medium-mixed/malay-stt.model
INFO:malaya_boilerplate.huggingface:downloading frozen mesolitica/conformer-medium-mixed/malay-stats.json

Load samples#

[8]:
ceramah, sr = malaya_speech.load('speech/khutbah/wadi-annuar.wav')
record1, sr = malaya_speech.load('speech/record/savewav_2020-11-26_22-36-06_294832.wav')
record2, sr = malaya_speech.load('speech/record/savewav_2020-11-26_22-40-56_929661.wav')
shafiqah_idayu, sr = malaya_speech.load('speech/example-speaker/shafiqah-idayu.wav')
mas_aisyah, sr = malaya_speech.load('speech/example-speaker/mas-aisyah.wav')
khalil, sr = malaya_speech.load('speech/example-speaker/khalil-nooh.wav')
singlish0, _ = malaya_speech.load('speech/singlish/singlish0.wav')
singlish1, _ = malaya_speech.load('speech/singlish/singlish1.wav')
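
To transcribe your own recording, load it the same way as the samples above. `my-audio.wav` below is a hypothetical path, and the 16 kHz default sample rate is an assumption based on the samples used in this tutorial:

# hypothetical file; malaya_speech.load returns the waveform and its sample rate
y, sr_my = malaya_speech.load('my-audio.wav')
print(len(y) / sr_my, 'seconds at', sr_my, 'Hz')

# the resulting np.array can be passed straight to the transducer models loaded earlier,
# e.g. model.beam_decoder([y])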
[9]:
import IPython.display as ipd

ipd.Audio(ceramah, rate = sr)
[9]:

As we can hear, the speaker speaks in a Kedahan dialect plus some Arabic words; let's see how well our model does.

[10]:
ipd.Audio(record1, rate = sr)
[10]:
[11]:
ipd.Audio(record2, rate = sr)
[11]:
[12]:
ipd.Audio(shafiqah_idayu, rate = sr)
[12]:
[13]:
ipd.Audio(mas_aisyah, rate = sr)
[13]:
[14]:
ipd.Audio(khalil, rate = sr)
[14]:
[15]:
ipd.Audio(singlish0, rate = sr)
[15]:
[16]:
ipd.Audio(singlish1, rate = sr)
[16]:

Predict using beam decoder#

def beam_decoder(self, inputs, beam_width: int = 20):
    """
    Transcribe inputs using beam decoder.

    Parameters
    ----------
    inputs: List[np.array]
        List[np.array] or List[malaya_speech.model.frame.Frame].
    beam_width: int, optional (default=20)
        beam size for beam decoder.

    Returns
    -------
    result: List[str]
    """
[19]:
%%time

model.beam_decoder([ceramah, record1, record2, shafiqah_idayu, mas_aisyah, khalil,
                   singlish0, singlish1])
CPU times: user 42.9 s, sys: 317 ms, total: 43.3 s
Wall time: 3.97 s
[19]:
['jadi dalam perjalanan ini dunia yang susah ini ketika nabi mengajar muadz bin jabal tadi ni allah maha ini',
 'helo nama saya pusing saya tak suka mandi ke tak saya masak',
 'hello nama saya husin saya suka mandi saya mandi titik hari',
 'nama saya syafiqa idayu',
 'sebut perkataan angka',
 'tolong sebut antikata',
 'nanti how day broway handsome okey',
 'minta toyol']
[20]:
%%time

model_mixed.beam_decoder([ceramah, record1, record2, shafiqah_idayu, mas_aisyah, khalil,
                         singlish0, singlish1])
CPU times: user 42.9 s, sys: 616 ms, total: 43.5 s
Wall time: 3.77 s
[20]:
['jadi dalam perjalanan ini dunia yang susah ini ketika nabi mengajar muaz bin jabal tadi ni allah maaf ini',
 'hello nama saya husin saya tak suka mandi ke tak saya',
 'hello nama saya husin saya suka mandi saya mandi tiap hari',
 'nama saya syafiqah idayu',
 'sebut perkataan angka',
 'tolong sebut ertikata',
 'and then see how they roll it in film okay actually',
 'then you tech to your eyes']
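
`beam_decoder` also exposes the `beam_width` argument shown in the docstring above (default 20). A small sketch of trading beam size against decoding time on a single utterance, assuming a smaller beam mainly buys speed at some cost in accuracy:

import time

# sweep a few beam widths on one sample loaded earlier
for beam_width in [1, 5, 20]:
    start = time.time()
    out = model.beam_decoder([shafiqah_idayu], beam_width = beam_width)
    print(beam_width, round(time.time() - start, 2), out)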