\n",
"\n",
"This tutorial is available as an IPython notebook at [malaya-speech/example/precision-mode](https://github.com/huseinzol05/malaya-speech/tree/master/example/precision-mode).\n",
" \n",
"
"
]
},
{
"cell_type": "markdown",
"id": "exciting-teens",
"metadata": {},
"source": [
"Let say you want to run the model in FP16, or FP64."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "resident-electric",
"metadata": {},
"outputs": [],
"source": [
"import malaya_speech\n",
"import logging\n",
"logging.basicConfig(level = logging.INFO)"
]
},
{
"cell_type": "markdown",
"id": "anonymous-night",
"metadata": {},
"source": [
"### Use specific precision for specific model\n",
"\n",
"To do that, pass `precision_mode` parameter to any load model function in Malaya-Speech,\n",
"\n",
"```python\n",
"malaya_speech.gender.deep_model(model = 'vggvox-v2', precision_mode = 'FP16')\n",
"```\n",
"\n",
"Supported precision mode is `{'BFLOAT16', 'FP16', 'FP32', 'FP64'}`, default is `FP32`, check code at https://github.com/huseinzol05/malaya-boilerplate/blob/main/malaya_boilerplate/frozen_graph.py"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "hidden-alfred",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:root:running gender/vggvox-v2 using device /device:CPU:0\n"
]
}
],
"source": [
"gender = malaya_speech.gender.deep_model(model = 'vggvox-v2')\n",
"# gender_fp16 = malaya_speech.gender.deep_model(model = 'vggvox-v2', precision_mode = 'FP16')"
]
},
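{
"cell_type": "markdown",
"id": "precision-loss-sketch",
"metadata": {},
"source": [
"To see why the precision mode matters, here is a minimal sketch (using plain NumPy, not Malaya-Speech) of the rounding FP16 introduces: increments below FP16's resolution near 1.0 are simply lost, while FP64 keeps them,\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"x = np.float64(1.0) + np.float64(1e-4)  # FP64 keeps the small increment\n",
"y = np.float16(1.0) + np.float16(1e-4)  # below FP16 resolution, rounds away\n",
"print(x)  # 1.0001\n",
"print(y)  # 1.0\n",
"```\n",
"\n",
"Lower precision reduces memory use and can speed up inference on supported hardware, at the cost of this reduced resolution."
]
},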
{
"cell_type": "markdown",
"id": "scientific-flush",
"metadata": {},
"source": [
"**Not all operations supported FP16**."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.7"
}
},
"nbformat": 4,
"nbformat_minor": 5
}