This folder contains examples for speech recognition.
A wrapper over DataIter for speech data.
A configuration for training on the AMI SDM1 dataset, which can be used as a template for writing other configuration files.
Connect to Kaldi:
A script called by Kaldi to decode an acoustic model trained with MXNet (choose the simple method for decoding).
A full recipe:
To reproduce the results, use the following steps.
Build Kaldi as shared libraries if you have not already done so.
```bash
cd kaldi/src
./configure --shared  # and other options that you need
make depend
make
```
```bash
cd kaldi/src/python_wrap/
make
```
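If the build succeeds, the Kaldi shared libraries should be in place for the Python wrapper to load. As a quick sanity check (run from the directory that contains kaldi/; the exact location and suffix can vary with the Kaldi version and platform):

```bash
# a shared (--shared) build is expected to leave libkaldi-*.so files under src/lib
ls kaldi/src/lib/libkaldi-*.so
```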
The acoustic models use Mel filter-bank or MFCC features as input. You also need Kaldi to do forced alignment, which generates frame-level labels from the text transcriptions. For example, if you want to work on the AMI SDM1 data, you can run kaldi/egs/ami/s5/run_sdm.sh. You will need to configure some paths in kaldi/egs/ami/s5/run_sdm.sh before you can run the examples. Please refer to Kaldi's documentation for more details.
The run_sdm.sh script generates the forced-alignment labels in its stage 7 and saves them in exp/sdm1/tri3a_ali. The default script generates 13-dimensional MFCC features. You can train with the MFCC features, or you can create Mel filter-bank features yourself. For example, a script like the following can be used to compute Mel filter-bank features with Kaldi.
```bash
#!/bin/bash -u

. ./cmd.sh
. ./path.sh

# SDM - Single Distant Microphone
micid=1  # which mic from array should be used?
mic=sdm$micid

# Set bash to 'debug' mode: it prints the commands (option '-x') and exits on
# -e 'error', -u 'undefined variable', -o pipefail 'error in pipeline'
set -euxo pipefail

# Path where AMI gets downloaded (or where locally available):
AMI_DIR=$PWD/wav_db  # default
data_dir=$PWD/data/$mic

# make filter-bank data
for dset in train dev eval; do
  steps/make_fbank.sh --nj 48 --cmd "$train_cmd" $data_dir/$dset \
    $data_dir/$dset/log $data_dir/$dset/data-fbank
  steps/compute_cmvn_stats.sh $data_dir/$dset \
    $data_dir/$dset/log $data_dir/$dset/data
  apply-cmvn --utt2spk=ark:$data_dir/$dset/utt2spk \
    scp:$data_dir/$dset/cmvn.scp scp:$data_dir/$dset/feats.scp \
    ark,scp:$data_dir/$dset/feats-cmvn.ark,$data_dir/$dset/feats-cmvn.scp
  mv $data_dir/$dset/feats-cmvn.scp $data_dir/$dset/feats.scp
done
```
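One way to use the script above (the file name make_fbank_ami.sh is just an assumed example) is to save it inside the AMI recipe directory, so that cmd.sh, path.sh and steps/ resolve correctly, and run it from there:

```bash
# run the filter-bank script from inside the AMI recipe so its relative paths work
cd kaldi/egs/ami/s5
bash ./make_fbank_ami.sh
```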
Here apply-cmvn performs mean-variance normalization. In the default setup, normalization is applied per speaker. A more common approach is to do mean-variance normalization over the whole corpus and then feed the normalized features to the neural network:
```bash
compute-cmvn-stats scp:data/sdm1/train_fbank/feats.scp data/sdm1/train_fbank/cmvn_g.ark

apply-cmvn --norm-vars=true data/sdm1/train_fbank/cmvn_g.ark \
  scp:data/sdm1/train_fbank/feats.scp \
  ark,scp:data/sdm1/train_fbank_gcmvn/feats.ark,data/sdm1/train_fbank_gcmvn/feats.scp
```
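The two commands above assume that the output directory data/sdm1/train_fbank_gcmvn already exists. A minimal preparation step could look like the following (the exact set of files to carry over is an assumption; adjust it to your setup):

```bash
# create the target directory and copy Kaldi's usual bookkeeping files so that
# everything Kaldi expects still sits next to the new feats.scp
mkdir -p data/sdm1/train_fbank_gcmvn
for f in utt2spk spk2utt text wav.scp; do
  [ -f data/sdm1/train_fbank/$f ] && cp data/sdm1/train_fbank/$f data/sdm1/train_fbank_gcmvn/
done
```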
Note that Kaldi always tries to find features in feats.scp, so make sure the normalized features are organized in the way Kaldi expects during decoding.
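A quick way to confirm that Kaldi can read the normalized features directly from feats.scp (the path below is the one produced by the commands above):

```bash
# prints the feature dimension if feats.scp points at readable archives
feat-to-dim scp:data/sdm1/train_fbank_gcmvn/feats.scp -
```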
Finally, you need to put the features and labels together in a file so that MXNet can find them. More specifically, for each data set (train, dev, eval), you will need to create a file like
train_mxnet.feats, with the following contents:

```
TRANSFORM scp:feat.scp scp:label.scp
```
TRANSFORM is the transformation you want to apply to the features; by default no feature transformation is applied. The scp: syntax is from Kaldi. The
feat.scp is typically the file from
data/sdm1/train/feats.scp, and the
label.scp is converted from the force-aligned labels located in
exp/sdm1/tri3a_ali. Because the forced alignments are only generated on the training data, we split the training set into 90/10 parts and use the 1/10 hold-out as the dev (validation) set. The script run_ami.sh will automatically do the splitting and format the files for MXNet. Please set the paths in that script correctly before running it. The run_ami.sh script actually runs the full pipeline, including training the acoustic model and decoding, so you can skip the following steps if that script runs successfully.
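For reference, the kind of label conversion that run_ami.sh does for you can be sketched with standard Kaldi tools roughly as follows (the output names label.ark/label.scp are placeholders, not necessarily what the script uses internally):

```bash
# hedged sketch: convert the forced alignments into per-frame pdf-id labels and
# index them with an scp file that the feats file can point to
ali_dir=exp/sdm1/tri3a_ali
ali-to-pdf $ali_dir/final.mdl "ark:gunzip -c $ali_dir/ali.*.gz |" \
  ark,scp:$PWD/label.ark,$PWD/label.scp
```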
Make a copy of default.cfg and edit the necessary items, such as the path to the dataset you just prepared.
Then run python train_lstm.py --configfile=your-config.cfg to start training. You can run python train_lstm.py --help to see the help message. All the configuration parameters can be set in default.cfg, in a customized config file, and on the command line (e.g. --train_batch_size=50); the latter values override the former ones.
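For example (the values are illustrative only):

```bash
# the command-line value takes precedence over default.cfg and your-config.cfg
python train_lstm.py --configfile=your-config.cfg --train_batch_size=50
```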
Here are some example outputs that we got from training on the TIMIT dataset.
Example output for TIMIT:

```
Summary of dataset ==================
bucket of len 100 : 3 samples
bucket of len 200 : 346 samples
bucket of len 300 : 1496 samples
bucket of len 400 : 974 samples
bucket of len 500 : 420 samples
bucket of len 600 : 90 samples
bucket of len 700 : 11 samples
bucket of len 800 : 2 samples
Summary of dataset ==================
bucket of len 100 : 0 samples
bucket of len 200 : 28 samples
bucket of len 300 : 169 samples
bucket of len 400 : 107 samples
bucket of len 500 : 41 samples
bucket of len 600 : 6 samples
bucket of len 700 : 3 samples
bucket of len 800 : 0 samples
2016-04-21 20:02:40,904 Epoch Train-Acc_exlude_padding=0.154763
2016-04-21 20:02:40,904 Epoch Time cost=91.574
2016-04-21 20:02:44,419 Epoch Validation-Acc_exlude_padding=0.353552
2016-04-21 20:04:17,290 Epoch Train-Acc_exlude_padding=0.447318
2016-04-21 20:04:17,290 Epoch Time cost=92.870
2016-04-21 20:04:20,738 Epoch Validation-Acc_exlude_padding=0.506458
2016-04-21 20:05:53,127 Epoch Train-Acc_exlude_padding=0.557543
2016-04-21 20:05:53,128 Epoch Time cost=92.390
2016-04-21 20:05:56,568 Epoch Validation-Acc_exlude_padding=0.548100
```
The final frame accuracy was around 62%.
Run python make_stats.py --configfile=your-config.cfg | copy-feats ark:- ark:label_mean.ark (edit the necessary items, like the path to the training dataset). It will generate the label counts in label_mean.ark.
Then run ./run_ami.sh --model prefix model --num_epoch num.
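As an optional sanity check (the archive name comes from the make_stats.py command above), you can dump the accumulated label counts in text form:

```bash
# print the label-count vector written by make_stats.py in human-readable form
copy-feats ark:label_mean.ark ark,t:- | head
```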
Here are the results on the TIMIT and AMI test sets (using the default setup, a 3-layer LSTM with projection layers):
Note that the AMI result (42.2) was evaluated on non-overlapped speech. The Kaldi HMM baseline was 67.2% and the DNN baseline was 57.5%.
We updated this demo on Feb 07 (Kaldi c747ed5, MXNet 912a7eb). We also added a TIMIT demo script to this folder.
To run the TIMIT demo: