Speech-to-Text CTC + pyctcdecode#

Encoder model + CTC loss + pyctcdecode with KenLM

This tutorial is available as an IPython notebook at malaya-speech/example/stt-ctc-model-pyctcdecode.

This module is not language independent, so it is not safe to use on different languages. The pretrained models are trained on hyperlocal languages.

This is an application of malaya-speech Pipeline, read more about malaya-speech Pipeline at malaya-speech/example/pipeline.

This interface is deprecated, use the HuggingFace interface instead.

[1]:
import malaya_speech
import numpy as np
from malaya_speech import Pipeline
`pyaudio` is not available, `malaya_speech.streaming.stream` is not able to use.
[2]:
import logging

logging.basicConfig(level=logging.INFO)
[3]:
import warnings
warnings.filterwarnings('default')

Install pyctcdecode#

From PYPI#

pip3 install pyctcdecode==0.1.0 pypi-kenlm==0.1.20210121

From source#

Check https://github.com/kensho-technologies/pyctcdecode on how to build from source in case there is no available wheel for your operating system.

Building from source should only take a few minutes.

Benefit#

  1. pyctcdecode is more accurate than ctc-decoders for certain cases, but slower than ctc-decoders.

  2. pip install and done, no need to compile.

List available CTC model#

[4]:
malaya_speech.stt.ctc.available_transformer()
/home/husein/dev/malaya-speech/malaya_speech/stt/ctc.py:144: DeprecationWarning: `malaya.stt.ctc.available_transformer` is deprecated, use `malaya.stt.ctc.available_huggingface` instead
  warnings.warn(
INFO:malaya_speech.stt:for `malay-fleur102` language, tested on FLEURS102 `ms_my` test set, https://github.com/huseinzol05/malaya-speech/tree/master/pretrained-model/prepare-stt
INFO:malaya_speech.stt:for `malay-malaya` language, tested on malaya-speech test set, https://github.com/huseinzol05/malaya-speech/tree/master/pretrained-model/prepare-stt
INFO:malaya_speech.stt:for `singlish` language, tested on IMDA malaya-speech test set, https://github.com/huseinzol05/malaya-speech/tree/master/pretrained-model/prepare-stt
[4]:
                        Size (MB)  Quantized Size (MB)  malay-malaya                                        Language
hubert-conformer-tiny   36.6       10.3                 {'WER': 0.238714008166, 'CER': 0.060899814, 'W...  [malay]
hubert-conformer        115        31.1                 {'WER': 0.2387140081, 'CER': 0.06089981404, 'W...  [malay]
hubert-conformer-large  392        100                  {'WER': 0.2203140421, 'CER': 0.0549270416, 'WE...  [malay]

Load CTC model#

def transformer(
    model: str = 'hubert-conformer',
    quantized: bool = False,
    **kwargs,
):
    """
    Load Encoder-CTC ASR model.

    Parameters
    ----------
    model : str, optional (default='hubert-conformer')
        Check available models at `malaya_speech.stt.ctc.available_transformer()`.
    quantized : bool, optional (default=False)
        if True, will load 8-bit quantized model.
        Quantized model not necessary faster, totally depends on the machine.

    Returns
    -------
    result : malaya_speech.model.wav2vec.Wav2Vec2_CTC class
    """
[3]:
model = malaya_speech.stt.ctc.transformer(model = 'hubert-conformer')
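
The docstring above also documents a `quantized` flag; a minimal sketch of loading the 8-bit quantized variant with the same call, keeping in mind the quantized model is not necessarily faster:

# a minimal sketch: loading the 8-bit quantized variant via the `quantized`
# flag documented above; speed gains depend on the machine
quantized_model = malaya_speech.stt.ctc.transformer(
    model = 'hubert-conformer',
    quantized = True,
)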

Load sample#

[4]:
ceramah, sr = malaya_speech.load('speech/khutbah/wadi-annuar.wav')
record1, sr = malaya_speech.load('speech/record/savewav_2020-11-26_22-36-06_294832.wav')
record2, sr = malaya_speech.load('speech/record/savewav_2020-11-26_22-40-56_929661.wav')
[5]:
import IPython.display as ipd

ipd.Audio(ceramah, rate = sr)
[5]:

As we can hear, the speaker speaks in the Kedahan dialect plus some Arabic words; let us see how well our model does.

[6]:
ipd.Audio(record1, rate = sr)
[6]:
[7]:
ipd.Audio(record2, rate = sr)
[7]:

For comparison, below is the output from the beam decoder without a language model:

['jadi dalam perjalanan ini dunia yang susah ini ketika nabi mengajar muaz bin jabal tadi ni alah ma ini',
 'helo nama saya esin saya tak suka mandi ketak saya masak',
 'helo nama saya musin saya suka mandi saya mandi titiap hari']
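
That baseline comes from decoding the acoustic model alone. A minimal sketch of how it might be reproduced, assuming this model exposes the `beam_decoder` method used in the plain CTC example:

# a sketch under the assumption that the model exposes `beam_decoder`
# as in the plain CTC example; this decodes without any language model
no_lm = model.beam_decoder([ceramah, record1, record2])
no_lm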

Predict logits#

def predict_logits(self, inputs):
    """
    Predict logits from inputs.

    Parameters
    ----------
    inputs: List[np.array]
        List[np.array] or List[malaya_speech.model.frame.Frame].


    Returns
    -------
    result: List[np.array]
    """
[8]:
%%time

logits = model.predict_logits([ceramah, record1, record2])
CPU times: user 26.5 s, sys: 10.5 s, total: 36.9 s
Wall time: 19.8 s
[9]:
logits[0].shape
[9]:
(499, 39)
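
The first dimension is the number of time frames; the second should correspond to the CTC character vocabulary plus the blank token, which is exactly how the decoder vocabulary is built below. A quick sanity check:

# a minimal sketch: 39 classes = len(CTC_VOCAB) characters + 1 CTC blank token
from malaya_speech.utils.char import CTC_VOCAB

len(CTC_VOCAB) + 1  # expected to equal logits[0].shape[-1]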

Load pyctcdecode#

I will use dump-combined for this example.

[13]:
from pyctcdecode import build_ctcdecoder
from malaya_speech.utils.char import CTC_VOCAB
import kenlm
[14]:
lm = malaya_speech.language_model.kenlm(model = 'dump-combined')
[15]:
kenlm_model = kenlm.Model(lm)
decoder = build_ctcdecoder(
    CTC_VOCAB + ['_'],
    kenlm_model,
    alpha=0.2,
    beta=1.0,
    ctc_token_idx=len(CTC_VOCAB)
)
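
Here `alpha` weights the KenLM language model score against the acoustic `logit_score` during beam search, and `beta` is a word insertion bonus. As a sanity check, the KenLM model can also be queried directly; a minimal sketch with made-up example sentences:

# a minimal sketch: kenlm returns a log10 probability for a sentence,
# so values closer to 0 indicate text the language model finds more fluent
print(kenlm_model.score('saya suka mandi'))
print(kenlm_model.score('mandi suka saya'))  # scrambled word order typically scores lower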
[18]:
out = decoder.decode_beams(logits[0], prune_history=True)
d_lm, lm_state, timesteps, logit_score, lm_score = out[0]
d_lm
[18]:
'jadi dalam perjalanan ini dunia yang susah ini ketika nabi mengajar muaz bin jabal tadi ni allah ma ini'
[19]:
out = decoder.decode_beams(logits[1], prune_history=True)
d_lm, lm_state, timesteps, logit_score, lm_score = out[0]
d_lm
[19]:
'helo nama saya mesin saya tak suka mandi ketat saya masak'
[20]:
out = decoder.decode_beams(logits[2], prune_history=True)
d_lm, lm_state, timesteps, logit_score, lm_score = out[0]
d_lm
[20]:
'helo nama saya mesin saya suka mandi saya mandi setiap hari'
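
If you only need the best transcript and not the beam internals, pyctcdecode also provides a simpler `decode` call; a minimal sketch looping over all three utterances:

# a minimal sketch: `decode` returns only the top transcript per utterance,
# skipping the (state, timesteps, score) details that `decode_beams` exposes
results = [decoder.decode(logit) for logit in logits]
results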