
deepSpeech.mxnet: Rich Speech Example

This example, based on Baidu's DeepSpeech2, helps you build Speech-To-Text (STT) models at scale using

  • CNNs, fully connected networks, (Bi-) RNNs, (Bi-) LSTMs, and (Bi-) GRUs for the network layers,
  • batch normalization and dropout for training efficiency,
  • and Warp CTC for loss calculation.

To build your own STT models, all you need to do is edit a configuration file, not the actual code.


Motivation

This example is intended to guide people who want to build practical STT models with MXNet. With the rich functionality and convenience described above, you can build your own speech recognition models more easily than with previous examples.


Environments

  • MXNet version: 0.9.5+
  • GPU memory size: 2.4GB+
  • tensorboard installed for logging
  • Warp CTC: follow these instructions to install Baidu's Warp CTC.
  • We strongly recommend that you first test with a small network.

How it works

Preparing data

Input data are described in a JSON file, Libri_sample.json, as follows.
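The manifest format (as produced by the ba-dls-deepspeech tooling referenced below) is one JSON object per line, with the audio path ("key"), length in seconds ("duration"), and transcript ("text"). A minimal sketch of writing and reading such a file — the entries here are illustrative placeholders, not the real Libri_sample contents:

```python
import json

# One JSON object per line: audio path ("key"), length in seconds
# ("duration"), and the transcript ("text"). Placeholder entries only.
samples = [
    {"key": "./Libri_sample/sample_001.wav", "duration": 2.9,
     "text": "example transcript one"},
    {"key": "./Libri_sample/sample_002.wav", "duration": 3.5,
     "text": "example transcript two"},
]

with open("Libri_sample.json", "w") as f:
    for s in samples:
        f.write(json.dumps(s) + "\n")

# Read it back, one record per line (not one JSON array for the whole file).
with open("Libri_sample.json") as f:
    records = [json.loads(line) for line in f]
```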

You can download the two wave files referenced above from this link. Put them under /path/to/yourproject/Libri_sample/.

Setting the configuration file

[Notice] The included configuration file “default.cfg” describes DeepSpeech2 with slight changes. You can test the original DeepSpeech2 (“deepspeech.cfg”) by changing a few lines of the cfg file:
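As an illustration of the kind of edit involved — the option names below are assumptions for illustration, not a verbatim copy of the shipped files — switching architectures amounts to changing a few options in the cfg file:

```ini
[arch]
# Hypothetical option names for illustration only; diff default.cfg
# against deepspeech.cfg to see the actual lines that differ.
num_rnn_layer = 7
num_hidden_rnn_list = [1760, 1760, 1760, 1760, 1760, 1760, 1760]
```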


Run the example

Train

Checkpoints of the model will be saved at every n-th epoch.
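Assuming you run from the project root, training is launched along these lines (the flag name reflects main.py's expected usage and should be checked against the script):

```shell
cd /path/to/your/project
python main.py --configfile default.cfg
```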

Load

You can (re-)train saved models by loading their checkpoints (starting from epoch 0). To do this, you only need to modify two lines of the file “default.cfg”.
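Concretely, resuming looks something like the following edit (the option names here are assumptions — check the corresponding entries in your copy of “default.cfg”):

```ini
[common]
# Switch the mode from 'train' to 'load' to resume from a checkpoint.
mode = load
# Hypothetical checkpoint prefix saved during an earlier training run.
model_file = deepspeech_0010
```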

Predict

You can run prediction (or testing) on audio by specifying the mode, model, and test data in the file “default.cfg”.
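For example, a prediction run would change entries like these (option names are assumptions for illustration; match them against your copy of “default.cfg”):

```ini
[common]
mode = predict
# Hypothetical checkpoint prefix and test manifest to transcribe.
model_file = deepspeech_0010
test_json = ./test_corpus.json
```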


Train and test your own models

Train and test your own models by preparing two files.

  1. A new configuration file, e.g., custom.cfg, corresponding to the file ‘default.cfg’. The new file should specify the items under the ‘[arch]’ section of the original file.
  2. A new implementation file, e.g., arch_custom.py, corresponding to the file ‘arch_deepspeech.py’. The new file should implement two functions, prepare_data() and arch(), for building the networks described in the new configuration file.
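A skeleton for arch_custom.py might look like the following. The exact signatures and return values should mirror arch_deepspeech.py; the bodies here are placeholders, not a working network:

```python
# arch_custom.py -- skeleton mirroring arch_deepspeech.py.
# Signatures and return shapes are assumptions; copy the real ones
# from arch_deepspeech.py when adapting this.

def prepare_data(args):
    """Derive data/label shapes (and any RNN init states) from the config."""
    # Placeholder: a real implementation reads sizes from the parsed
    # config in `args` and returns whatever the training loop expects.
    init_states = []
    return init_states


def arch(args):
    """Build and return the network symbol described by custom.cfg."""
    # Placeholder: a real implementation stacks the conv/RNN/FC layers
    # from the stt_layer_*.py modules and ends with the warp-CTC loss.
    raise NotImplementedError("build your network here")
```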

Run the following line after preparing the files.
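With both files in place, the run would look something like this (the flag names are assumptions based on the configuration/architecture file pair described above; verify them against main.py):

```shell
python main.py --configfile custom.cfg --archfile arch_custom
```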


Going further

You can prepare the full LibriSpeech dataset by following the instructions at https://github.com/baidu-research/ba-dls-deepspeech.
Replace Baidu's flac_to_wav.sh script with the flac_to_wav.sh in this repository to avoid a bug:

git clone https://github.com/baidu-research/ba-dls-deepspeech
cd ba-dls-deepspeech
./download.sh
cp -f /path/to/example/flac_to_wav.sh ./
./flac_to_wav.sh
python create_desc_json.py /path/to/ba-dls-deepspeech/LibriSpeech/train-clean-100 train_corpus.json
python create_desc_json.py /path/to/ba-dls-deepspeech/LibriSpeech/dev-clean validation_corpus.json
python create_desc_json.py /path/to/ba-dls-deepspeech/LibriSpeech/test-clean test_corpus.json