{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Speech-to-Text RNNT" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Encoder model + RNNT loss" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
\n", "\n", "This tutorial is available as an IPython notebook at [malaya-speech/example/stt-transducer-model](https://github.com/huseinzol05/malaya-speech/tree/master/example/stt-transducer-model).\n", " \n", "
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
\n", "\n", "This module is not language independent, so it not save to use on different languages. Pretrained models trained on hyperlocal languages.\n", " \n", "
" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "import malaya_speech\n", "import numpy as np\n", "from malaya_speech import Pipeline" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### List available RNNT model" ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "scrolled": false }, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
Size (MB)Quantized Size (MB)WERCERWER-LMCER-LMLanguage
tiny-conformer24.49.140.2128110.0813690.1996830.077004[malay]
small-conformer49.218.10.1985330.0744950.1853610.071143[malay]
conformer12537.10.1636020.0587440.1561820.05719[malay]
large-conformer4041070.1566840.0619710.1486220.05901[malay]
conformer-stack-2mixed13038.50.1036080.0500690.1029110.050201[malay, singlish]
conformer-stack-3mixed13038.50.2347680.1339440.2292410.130702[malay, singlish, mandarin]
small-conformer-singlish49.218.10.0878310.0456860.0873330.045317[singlish]
conformer-singlish12537.10.0777920.0403620.0771860.03987[singlish]
large-conformer-singlish4041070.0701470.0358720.0698120.035723[singlish]
\n", "
" ], "text/plain": [ " Size (MB) Quantized Size (MB) WER CER \\\n", "tiny-conformer 24.4 9.14 0.212811 0.081369 \n", "small-conformer 49.2 18.1 0.198533 0.074495 \n", "conformer 125 37.1 0.163602 0.058744 \n", "large-conformer 404 107 0.156684 0.061971 \n", "conformer-stack-2mixed 130 38.5 0.103608 0.050069 \n", "conformer-stack-3mixed 130 38.5 0.234768 0.133944 \n", "small-conformer-singlish 49.2 18.1 0.087831 0.045686 \n", "conformer-singlish 125 37.1 0.077792 0.040362 \n", "large-conformer-singlish 404 107 0.070147 0.035872 \n", "\n", " WER-LM CER-LM Language \n", "tiny-conformer 0.199683 0.077004 [malay] \n", "small-conformer 0.185361 0.071143 [malay] \n", "conformer 0.156182 0.05719 [malay] \n", "large-conformer 0.148622 0.05901 [malay] \n", "conformer-stack-2mixed 0.102911 0.050201 [malay, singlish] \n", "conformer-stack-3mixed 0.229241 0.130702 [malay, singlish, mandarin] \n", "small-conformer-singlish 0.087333 0.045317 [singlish] \n", "conformer-singlish 0.077186 0.03987 [singlish] \n", "large-conformer-singlish 0.069812 0.035723 [singlish] " ] }, "execution_count": 2, "metadata": {}, "output_type": "execute_result" } ], "source": [ "malaya_speech.stt.available_transducer()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Lower is better. Mixed models tested on different dataset." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Google Speech-to-Text accuracy\n", "\n", "We tested on the same malay dataset to compare malaya-speech models and Google Speech-to-Text, check the notebook at [benchmark-google-speech-malay-dataset.ipynb](https://github.com/huseinzol05/malaya-speech/blob/master/pretrained-model/prepare-stt/benchmark-google-speech-malay-dataset.ipynb)." ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'malay': {'WER': 0.164775, 'CER': 0.059732},\n", " 'singlish': {'WER': 0.4941349, 'CER': 0.3026296}}" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "malaya_speech.stt.google_accuracy" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Again, even `large-conformer` beat google speech-to-text accuracy, we really need to be skeptical with the score, the test set and postprocessing might favoured for malaya-speech**." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Load RNNT model\n", "\n", "```python\n", "def deep_transducer(\n", " model: str = 'conformer', quantized: bool = False, **kwargs\n", "):\n", " \"\"\"\n", " Load Encoder-Transducer ASR model.\n", "\n", " Parameters\n", " ----------\n", " model : str, optional (default='conformer')\n", " Model architecture supported. 
Allowed values:\n", "\n",
" * ``'tiny-conformer'`` - TINY size Google Conformer.\n",
" * ``'small-conformer'`` - SMALL size Google Conformer.\n",
" * ``'conformer'`` - BASE size Google Conformer.\n",
" * ``'large-conformer'`` - LARGE size Google Conformer.\n",
" * ``'conformer-stack-2mixed'`` - BASE size Stacked Google Conformer for (Malay + Singlish) languages.\n",
" * ``'conformer-stack-3mixed'`` - BASE size Stacked Google Conformer for (Malay + Singlish + Mandarin) languages.\n",
" * ``'small-conformer-singlish'`` - SMALL size Google Conformer for singlish language.\n",
" * ``'conformer-singlish'`` - BASE size Google Conformer for singlish language.\n",
" * ``'large-conformer-singlish'`` - LARGE size Google Conformer for singlish language.\n",
"\n",
" quantized : bool, optional (default=False)\n",
" if True, will load 8-bit quantized model.\n",
" Quantized model not necessary faster, totally depends on the machine.\n",
"\n",
" Returns\n",
" -------\n",
" result : malaya_speech.model.tf.Transducer class\n",
" \"\"\"\n",
"```" ] },
{ "cell_type": "code", "execution_count": 30, "metadata": { "scrolled": true }, "outputs": [], "source": [ "small_model = malaya_speech.stt.deep_transducer(model = 'small-conformer')\n", "model = malaya_speech.stt.deep_transducer(model = 'conformer')" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Load Quantized deep model\n", "\n", "To load the 8-bit quantized model, simply pass `quantized = True`; the default is `False`.\n", "\n", "We can expect a slight accuracy drop from the quantized model, and it is not necessarily faster than the normal 32-bit float model; it totally depends on the machine." ] },
{ "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "WARNING:root:Load quantized model will cause accuracy drop.\n", "WARNING:root:Load quantized model will cause accuracy drop.\n" ] } ], "source": [ "quantized_small_model = malaya_speech.stt.deep_transducer(model = 'small-conformer', quantized = True)\n", "quantized_model = malaya_speech.stt.deep_transducer(model = 'conformer', quantized = True)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Load sample" ] },
{ "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [], "source": [ "ceramah, sr = malaya_speech.load('speech/khutbah/wadi-annuar.wav')\n", "record1, sr = malaya_speech.load('speech/record/savewav_2020-11-26_22-36-06_294832.wav')\n", "record2, sr = malaya_speech.load('speech/record/savewav_2020-11-26_22-40-56_929661.wav')\n", "shafiqah_idayu, sr = malaya_speech.load('speech/example-speaker/shafiqah-idayu.wav')\n", "mas_aisyah, sr = malaya_speech.load('speech/example-speaker/mas-aisyah.wav')\n", "khalil, sr = malaya_speech.load('speech/example-speaker/khalil-nooh.wav')" ] },
{ "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "execution_count": 8, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import IPython.display as ipd\n", "\n", "ipd.Audio(ceramah, rate = sr)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "As we can hear, the speaker speaks in a Kedahan dialect plus some Arabic words; let us see how good our model is."
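, "\n",
"Before we transcribe, a quick sanity check on the loaded audio can help; a minimal sketch, assuming `malaya_speech.load` returned 16 kHz mono float arrays as in the cells above:\n",
"\n",
"```python\n",
"# hypothetical sanity check: print duration, dtype and peak amplitude of each loaded sample\n",
"for name, y in [('ceramah', ceramah), ('record1', record1), ('record2', record2),\n",
"                ('shafiqah_idayu', shafiqah_idayu), ('mas_aisyah', mas_aisyah), ('khalil', khalil)]:\n",
"    print(name, 'duration (s):', len(y) / sr, 'dtype:', y.dtype, 'peak:', np.abs(y).max())\n",
"```"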
] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "ipd.Audio(record1, rate = sr)" ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "execution_count": 10, "metadata": {}, "output_type": "execute_result" } ], "source": [ "ipd.Audio(record2, rate = sr)" ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [ { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "execution_count": 11, "metadata": {}, "output_type": "execute_result" } ], "source": [ "ipd.Audio(shafiqah_idayu, rate = sr)" ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [ { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "execution_count": 12, "metadata": {}, "output_type": "execute_result" } ], "source": [ "ipd.Audio(mas_aisyah, rate = sr)" ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [ { "data": { "text/html": [ "\n", " \n", " " ], "text/plain": [ "" ] }, "execution_count": 13, "metadata": {}, "output_type": "execute_result" } ], "source": [ "ipd.Audio(khalil, rate = sr)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Predict using greedy decoder\n", "\n", "```python\n", "def greedy_decoder(self, inputs):\n", " \"\"\"\n", " Transcribe inputs using greedy decoder.\n", "\n", " Parameters\n", " ----------\n", " inputs: List[np.array]\n", " List[np.array] or List[malaya_speech.model.frame.Frame].\n", "\n", " Returns\n", " -------\n", " result: List[str]\n", " \"\"\"\n", "```" ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "CPU times: user 7.09 s, sys: 1.8 s, total: 8.88 s\n", "Wall time: 6.05 s\n" ] }, { "data": { "text/plain": [ "['jadi dalam perjalanan ini dunia yang susah ini ketika nabi mengajar muaz bin jabal tadi ni allah maha ini',\n", " 'helo nama saya husin saya tak suka mandi ketat saya masak',\n", " 'helo nama saya husin saya suka mandi saya mandi tetek hari',\n", " 'nama saya syafiqah hidayah',\n", " 'sebut perkataan uncle',\n", " 'tolong sebut anti kata']" ] }, "execution_count": 14, "metadata": {}, "output_type": "execute_result" } ], "source": [ "%%time\n", "\n", "small_model.greedy_decoder([ceramah, record1, record2, shafiqah_idayu, mas_aisyah, khalil])" ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "CPU times: user 11.3 s, sys: 3.47 s, total: 14.8 s\n", "Wall time: 11.2 s\n" ] }, { "data": { "text/plain": [ "['jadi dalam perjalanan ini dunia yang susah ini ketika nabi mengajar muaz bin jabal tadi ni alah maaf ini',\n", " 'helo nama saya send saya tak suka mandi ke tak saya masam',\n", " 'helo nama saya husin saya suka mandi saya mandi setiap hari',\n", " 'nama saya syafiqah idayu',\n", " 'sebut perkataan angka',\n", " 'tolong sebut antika']" ] }, "execution_count": 15, "metadata": {}, "output_type": "execute_result" } ], "source": [ "%%time\n", "\n", "model.greedy_decoder([ceramah, record1, record2, shafiqah_idayu, mas_aisyah, khalil])" ] }, { "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "CPU times: user 6.58 s, sys: 1.34 s, total: 7.92 
s\n", "Wall time: 5.24 s\n" ] }, { "data": { "text/plain": [ "['jadi dalam perjalanan ini dunia yang susah ini ketika nabi mengajar muaz bin jabal tadi ni allah maha ini',\n", " 'helo nama saya husin saya tak suka mandi ketat saya masak',\n", " 'helo nama saya husin saya suka mandi saya mandi tetek hari',\n", " 'nama saya syafiqah hidayah',\n", " 'sebut perkataan uncle',\n", " 'tolong sebut anti kata']" ] }, "execution_count": 16, "metadata": {}, "output_type": "execute_result" } ], "source": [ "%%time\n", "\n", "quantized_small_model.greedy_decoder([ceramah, record1, record2, shafiqah_idayu, mas_aisyah, khalil])" ] }, { "cell_type": "code", "execution_count": 17, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "CPU times: user 10.8 s, sys: 3.02 s, total: 13.8 s\n", "Wall time: 8.91 s\n" ] }, { "data": { "text/plain": [ "['jadi dalam perjalanan ini dunia yang susah ini ketika nabi mengajar muaz bin jabal tadi ni alah maaf ini',\n", " 'helo nama saya send saya tak suka mandi ke tak saya masam',\n", " 'helo nama saya pusing saya suka mandi saya mandi setiap hari',\n", " 'nama saya syafiqah idayu',\n", " 'sebut perkataan angka',\n", " 'tolong sebut antika']" ] }, "execution_count": 17, "metadata": {}, "output_type": "execute_result" } ], "source": [ "%%time\n", "\n", "quantized_model.greedy_decoder([ceramah, record1, record2, shafiqah_idayu, mas_aisyah, khalil])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Predict using beam decoder\n", "\n", "```python\n", "def beam_decoder(self, inputs, beam_width: int = 5,\n", " temperature: float = 0.0,\n", " score_norm: bool = True):\n", " \"\"\"\n", " Transcribe inputs using beam decoder.\n", "\n", " Parameters\n", " ----------\n", " inputs: List[np.array]\n", " List[np.array] or List[malaya_speech.model.frame.Frame].\n", " beam_width: int, optional (default=5)\n", " beam size for beam decoder.\n", " temperature: float, optional (default=0.0)\n", " apply temperature function for logits, can help for certain case,\n", " logits += -np.log(-np.log(uniform_noise_shape_logits)) * temperature\n", " score_norm: bool, optional (default=True)\n", " descending sort beam based on score / length of decoded.\n", "\n", " Returns\n", " -------\n", " result: List[str]\n", " \"\"\"\n", "```" ] }, { "cell_type": "code", "execution_count": 18, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "CPU times: user 11.2 s, sys: 1.97 s, total: 13.2 s\n", "Wall time: 8.14 s\n" ] }, { "data": { "text/plain": [ "['jadi dalam perjalanan ini dunia yang susah ini ketika nabi mengajar muaz bin jabal tadi ni allah maha ini',\n", " 'helo nama saya pusing saya tak suka mandi ketat saya masak',\n", " 'helo nama saya husin saya suka mandi saya mandi tetek hari',\n", " 'nama saya syafiqah hidayah',\n", " 'sebut perkataan uncle',\n", " 'tolong sebut anti kata']" ] }, "execution_count": 18, "metadata": {}, "output_type": "execute_result" } ], "source": [ "%%time\n", "\n", "small_model.beam_decoder([ceramah, record1, record2, shafiqah_idayu, mas_aisyah, khalil], beam_width = 5)" ] }, { "cell_type": "code", "execution_count": 19, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "CPU times: user 21.3 s, sys: 3.23 s, total: 24.6 s\n", "Wall time: 13.6 s\n" ] }, { "data": { "text/plain": [ "['jadi dalam perjalanan ini dunia yang susah ini ketika nabi mengajar muaz bin jabal tadi ni alah maaf ini',\n", " 'helo nama saya pusing saya tak suka mandi ke tak saya masam',\n", " 
'helo nama saya husin saya suka mandi saya mandi tiap tiap hari',\n", " 'nama saya syafiqah idayu',\n", " 'sebut perkataan angka',\n", " 'tolong sebut antika']" ] }, "execution_count": 19, "metadata": {}, "output_type": "execute_result" } ], "source": [ "%%time\n", "\n", "model.beam_decoder([ceramah, record1, record2, shafiqah_idayu, mas_aisyah, khalil], beam_width = 5)" ] },
{ "cell_type": "code", "execution_count": 20, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "CPU times: user 10.6 s, sys: 1.71 s, total: 12.3 s\n", "Wall time: 7.53 s\n" ] }, { "data": { "text/plain": [ "['jadi dalam perjalanan ini dunia yang susah ini ketika nabi mengajar muaz bin jabal tadi ni allah maha ini',\n", " 'helo nama saya pusing saya tak suka mandi ketat saya masak',\n", " 'helo nama saya husin saya suka mandi saya mandi tetek hari',\n", " 'nama saya syafiqah hidayah',\n", " 'sebut perkataan uncle',\n", " 'tolong sebut anti kata']" ] }, "execution_count": 20, "metadata": {}, "output_type": "execute_result" } ], "source": [ "%%time\n", "\n", "quantized_small_model.beam_decoder([ceramah, record1, record2, shafiqah_idayu, mas_aisyah, khalil], beam_width = 5)" ] },
{ "cell_type": "code", "execution_count": 22, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "CPU times: user 16.8 s, sys: 1.67 s, total: 18.5 s\n", "Wall time: 6.45 s\n" ] }, { "data": { "text/plain": [ "['jadi dalam perjalanan ini dunia yang susah ini ketika nabi mengajar muaz bin jabal tadi ni alah maaf ini',\n", " 'helo nama saya pusing saya tak suka mandi ke tak saya masam',\n", " 'helo nama saya pusing saya suka mandi saya mandi tiap tiap hari',\n", " 'nama saya syafiqah idayu',\n", " 'sebut perkataan angka',\n", " 'tolong sebut antika']" ] }, "execution_count": 22, "metadata": {}, "output_type": "execute_result" } ], "source": [ "%%time\n", "\n", "quantized_model.beam_decoder([ceramah, record1, record2, shafiqah_idayu, mas_aisyah, khalil], beam_width = 5)" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "**The RNNT model beam decoder is not able to utilise batch processing; if fed a batch, it will process the inputs one by one**."
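, "\n",
"So batching gives little speed-up for beam search; a per-sample loop like the sketch below (hypothetical timing code, same `beam_decoder` call as above) should take roughly the same total wall time as passing the whole list at once:\n",
"\n",
"```python\n",
"# beam search runs sample-by-sample either way, so looping over inputs is roughly equivalent to batching\n",
"import time\n",
"\n",
"inputs = [ceramah, record1, record2, shafiqah_idayu, mas_aisyah, khalil]\n",
"start = time.time()\n",
"results = [model.beam_decoder([x], beam_width = 5)[0] for x in inputs]\n",
"print('total wall time (s):', time.time() - start)\n",
"```"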
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Predict alignment\n", "\n", "We want to know when the speakers speak certain words, so we can use `predict_timestamp`,\n", "\n", "```python\n", "def predict_alignment(self, input, combined = True):\n", " \"\"\"\n", " Transcribe input and get timestamp, only support greedy decoder.\n", "\n", " Parameters\n", " ----------\n", " input: np.array\n", " np.array or malaya_speech.model.frame.Frame.\n", " combined: bool, optional (default=True)\n", " If True, will combined subwords to become a word.\n", "\n", " Returns\n", " -------\n", " result: List[Dict[text, start, end]]\n", " \"\"\"\n", "```" ] }, { "cell_type": "code", "execution_count": 23, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "CPU times: user 3.7 s, sys: 784 ms, total: 4.48 s\n", "Wall time: 4.11 s\n" ] }, { "data": { "text/plain": [ "[{'text': 'nama', 'start': 0.28, 'end': 0.57},\n", " {'text': 'saya', 'start': 0.68, 'end': 0.97},\n", " {'text': 'syafiqah', 'start': 1.28, 'end': 1.69},\n", " {'text': 'idri', 'start': 1.8, 'end': 2.01}]" ] }, "execution_count": 23, "metadata": {}, "output_type": "execute_result" } ], "source": [ "%%time\n", "\n", "small_model.predict_alignment(shafiqah_idayu)" ] }, { "cell_type": "code", "execution_count": 24, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "CPU times: user 405 ms, sys: 84.8 ms, total: 489 ms\n", "Wall time: 128 ms\n" ] }, { "data": { "text/plain": [ "[{'text': 'nam', 'start': 0.28, 'end': 0.29},\n", " {'text': 'a_', 'start': 0.56, 'end': 0.57},\n", " {'text': 'say', 'start': 0.68, 'end': 0.69},\n", " {'text': 'a_', 'start': 0.96, 'end': 0.97},\n", " {'text': 'sya', 'start': 1.28, 'end': 1.29},\n", " {'text': 'fi', 'start': 1.44, 'end': 1.45},\n", " {'text': 'q', 'start': 1.52, 'end': 1.53},\n", " {'text': 'ah_', 'start': 1.68, 'end': 1.69},\n", " {'text': 'id', 'start': 1.8, 'end': 1.81},\n", " {'text': 'ri', 'start': 2.0, 'end': 2.01}]" ] }, "execution_count": 24, "metadata": {}, "output_type": "execute_result" } ], "source": [ "%%time\n", "\n", "small_model.predict_alignment(shafiqah_idayu, combined = False)" ] }, { "cell_type": "code", "execution_count": 25, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "CPU times: user 1.25 s, sys: 324 ms, total: 1.58 s\n", "Wall time: 348 ms\n" ] }, { "data": { "text/plain": [ "[{'text': 'jadi', 'start': 0.36, 'end': 0.53},\n", " {'text': 'dalam', 'start': 0.6, 'end': 0.73},\n", " {'text': 'perjalanan', 'start': 0.84, 'end': 1.33},\n", " {'text': 'ini', 'start': 1.4, 'end': 1.41},\n", " {'text': 'dunia', 'start': 2.44, 'end': 2.65},\n", " {'text': 'yang', 'start': 2.76, 'end': 2.81},\n", " {'text': 'susah', 'start': 2.88, 'end': 3.13},\n", " {'text': 'ini', 'start': 3.24, 'end': 3.25},\n", " {'text': 'ketika', 'start': 5.64, 'end': 5.85},\n", " {'text': 'nabi', 'start': 6.12, 'end': 6.37},\n", " {'text': 'mengajar', 'start': 6.44, 'end': 6.81},\n", " {'text': 'muaz', 'start': 6.96, 'end': 7.21},\n", " {'text': 'bin', 'start': 7.28, 'end': 7.29},\n", " {'text': 'jabal', 'start': 7.44, 'end': 7.73},\n", " {'text': 'tadi', 'start': 7.84, 'end': 8.05},\n", " {'text': 'ni', 'start': 8.12, 'end': 8.13},\n", " {'text': 'allah', 'start': 8.52, 'end': 8.69},\n", " {'text': 'maha', 'start': 8.8, 'end': 9.01},\n", " {'text': 'ini', 'start': 9.4, 'end': 9.41}]" ] }, "execution_count": 25, "metadata": {}, "output_type": "execute_result" } ], "source": [ "%%time\n", "\n", 
"small_model.predict_alignment(ceramah)" ] }, { "cell_type": "code", "execution_count": 26, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "CPU times: user 5.89 s, sys: 2.04 s, total: 7.93 s\n", "Wall time: 7.37 s\n" ] }, { "data": { "text/plain": [ "[{'text': 'nama', 'start': 0.28, 'end': 0.57},\n", " {'text': 'saya', 'start': 0.64, 'end': 0.97},\n", " {'text': 'syafiqah', 'start': 1.28, 'end': 1.69},\n", " {'text': 'idayu', 'start': 1.8, 'end': 2.05}]" ] }, "execution_count": 26, "metadata": {}, "output_type": "execute_result" } ], "source": [ "%%time\n", "\n", "model.predict_alignment(shafiqah_idayu)" ] }, { "cell_type": "code", "execution_count": 27, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "CPU times: user 2.35 s, sys: 515 ms, total: 2.87 s\n", "Wall time: 593 ms\n" ] }, { "data": { "text/plain": [ "[{'text': 'jadi', 'start': 0.36, 'end': 0.53},\n", " {'text': 'dalam', 'start': 0.6, 'end': 0.73},\n", " {'text': 'perjalanan', 'start': 0.8, 'end': 1.29},\n", " {'text': 'ini', 'start': 1.4, 'end': 1.41},\n", " {'text': 'dunia', 'start': 2.44, 'end': 2.65},\n", " {'text': 'yang', 'start': 2.72, 'end': 2.81},\n", " {'text': 'susah', 'start': 2.88, 'end': 3.13},\n", " {'text': 'ini', 'start': 3.24, 'end': 3.25},\n", " {'text': 'ketika', 'start': 5.64, 'end': 5.85},\n", " {'text': 'nabi', 'start': 6.12, 'end': 6.37},\n", " {'text': 'mengajar', 'start': 6.44, 'end': 6.81},\n", " {'text': 'muaz', 'start': 6.96, 'end': 7.21},\n", " {'text': 'bin', 'start': 7.28, 'end': 7.29},\n", " {'text': 'jabal', 'start': 7.44, 'end': 7.73},\n", " {'text': 'tadi', 'start': 7.84, 'end': 8.05},\n", " {'text': 'ni', 'start': 8.12, 'end': 8.13},\n", " {'text': 'alah', 'start': 8.52, 'end': 8.69},\n", " {'text': 'maaf', 'start': 8.8, 'end': 9.01}]" ] }, "execution_count": 27, "metadata": {}, "output_type": "execute_result" } ], "source": [ "%%time\n", "\n", "model.predict_alignment(ceramah)" ] }, { "cell_type": "code", "execution_count": 28, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "CPU times: user 3.82 s, sys: 743 ms, total: 4.56 s\n", "Wall time: 4.19 s\n" ] }, { "data": { "text/plain": [ "[{'text': 'nama', 'start': 0.28, 'end': 0.57},\n", " {'text': 'saya', 'start': 0.68, 'end': 0.97},\n", " {'text': 'syafiqah', 'start': 1.28, 'end': 1.69},\n", " {'text': 'idri', 'start': 1.8, 'end': 2.01}]" ] }, "execution_count": 28, "metadata": {}, "output_type": "execute_result" } ], "source": [ "%%time\n", "\n", "quantized_small_model.predict_alignment(shafiqah_idayu)" ] }, { "cell_type": "code", "execution_count": 29, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "CPU times: user 5.59 s, sys: 1.85 s, total: 7.44 s\n", "Wall time: 6.8 s\n" ] }, { "data": { "text/plain": [ "[{'text': 'nama', 'start': 0.28, 'end': 0.57},\n", " {'text': 'saya', 'start': 0.64, 'end': 0.97},\n", " {'text': 'syafiqah', 'start': 1.28, 'end': 1.69},\n", " {'text': 'id', 'start': 1.8, 'end': 1.81}]" ] }, "execution_count": 29, "metadata": {}, "output_type": "execute_result" } ], "source": [ "%%time\n", "\n", "quantized_model.predict_alignment(shafiqah_idayu)" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.7" 
}, "varInspector": { "cols": { "lenName": 16, "lenType": 16, "lenVar": 40 }, "kernels_config": { "python": { "delete_cmd_postfix": "", "delete_cmd_prefix": "del ", "library": "var_list.py", "varRefreshCmd": "print(var_dic_list())" }, "r": { "delete_cmd_postfix": ") ", "delete_cmd_prefix": "rm(", "library": "var_list.r", "varRefreshCmd": "cat(var_dic_list()) " } }, "types_to_exclude": [ "module", "function", "builtin_function_or_method", "instance", "_Feature" ], "window_display": false } }, "nbformat": 4, "nbformat_minor": 4 }