naive but intuitive (kaldi): saving cmvn (fbank + delta-delta) to npy - Computer

#!/bin/bash
#0. prep using kaldi script
#spk2utt, utt2spk
#wav.scp

#1. fbank for all utterances + utt2spk
#   80 mel bins + deltas + delta-deltas -> 240-dim features
compute-fbank-feats --num-mel-bins=80 --sample-frequency=16000 --use-log-fbank=true scp:wav.scp ark:- | add-deltas ark:- ark,scp:feats.ark,feats.scp

#2. cmvn using fbank feats and utt2spk
compute-cmvn-stats --spk2utt=ark:spk2utt scp:feats.scp ark,scp:cmvn.ark,cmvn.scp
apply-cmvn --utt2spk=ark:utt2spk scp:cmvn.scp scp:feats.scp ark,scp:normed_feats.ark,normed_feats.scp

#3. save cmvn applied fbanks to npy
copy-feats scp:normed_feats.scp ark,t:normed_feats.txt
python3 parsing.py normed_feats.txt
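For reference, what step 2 computes is simple: apply-cmvn standardizes each feature dimension using statistics pooled over a speaker's utterances (by default only the mean is removed; variance normalization needs --norm-vars=true). A minimal NumPy sketch of the same idea, assuming one speaker's feature matrices are collected in a list:

import numpy as np

def speaker_cmvn(utt_feats, norm_vars=False):
  # utt_feats: list of (num_frames, feat_dim) arrays for ONE speaker
  stacked = np.concatenate(utt_feats, axis=0)
  mean = stacked.mean(axis=0)
  std = stacked.std(axis=0) if norm_vars else 1.0
  return [(f - mean) / std for f in utt_feats]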


parsing.py

import os
import sys
import numpy as np

feat_path = open(sys.argv[1])
status = 0   # 0: waiting for an utterance header, 1: reading matrix rows
utt_id = ""
feats = ""

idx = 0
for line in feat_path:
  if status == 0 and "[" in line:
    # header line looks like: "<utt_id>  ["
    idx += 1
    print(idx)
    utt_id = line.strip().split()[0]
    status = 1
  elif status == 1:
    feats += line.replace("]", "").strip() + "\n"
    if "]" in line:
      # "]" closes this utterance's matrix; dump it and move on
      with open(utt_id + ".npy.txt", "w") as npy_file:
        npy_file.write(feats.strip())
      np.save(utt_id + ".npy", np.loadtxt(utt_id + ".npy.txt"))
      os.remove(utt_id + ".npy.txt")
      status = 0
      feats = ""
      utt_id = ""

feat_path.close()
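To sanity-check the dump, you can load one of the saved arrays back (the utterance id below is just a placeholder); with 80 mel bins plus deltas and delta-deltas the feature dimension should be 240, and each dimension's mean should be roughly zero after CMVN.

import numpy as np

feats = np.load("some_utt_id.npy")   # placeholder utterance id
print(feats.shape)                   # expected: (num_frames, 240)
print(feats.mean(axis=0)[:5])        # roughly zero after mean normalization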

written time : 2020-03-27 12:55:01.0

Feeding TFRecord to a Custom Training Loop (CTL) with MirroredStrategy - Computer

Yesterday I was trying to apply MirroredStrategy to my TF 2.1-based training script, but there were tons of unexpected errors.

First I tried to follow the tutorial at https://www.tensorflow.org/tutorials/distribute/custom_training. The example worked perfectly, but the same approach did not work on my code as I expected. I thought the issue was caused by TFRecord. That is partially true, but the real problem was that I had not carefully read "Alternate ways of iterating over a dataset" in that tutorial.
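For the record, the pattern from that section boils down to distributing the dataset once and then driving an explicit iterator with next() for a fixed number of steps, instead of a plain Python for-loop over the dataset. A minimal sketch under TF 2.1 names (the model, data, and step counts are synthetic stand-ins, not my actual script):

import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()
GLOBAL_BATCH_SIZE = 64
EPOCHS, STEPS_PER_EPOCH = 2, 15   # stand-in values

with strategy.scope():
  model = tf.keras.Sequential(
      [tf.keras.layers.Dense(10, input_shape=(20,))])  # stand-in model
  optimizer = tf.keras.optimizers.Adam()

def train_step(inputs):
  x, y = inputs
  with tf.GradientTape() as tape:
    logits = model(x, training=True)
    # scale per-example losses by the GLOBAL batch size, as the tutorial does
    loss = tf.nn.compute_average_loss(
        tf.keras.losses.sparse_categorical_crossentropy(
            y, logits, from_logits=True),
        global_batch_size=GLOBAL_BATCH_SIZE)
  grads = tape.gradient(loss, model.trainable_variables)
  optimizer.apply_gradients(zip(grads, model.trainable_variables))
  return loss

@tf.function
def distributed_train_step(inputs):
  # strategy.experimental_run_v2 in TF 2.1 (renamed strategy.run later)
  per_replica_losses = strategy.experimental_run_v2(train_step, args=(inputs,))
  return strategy.reduce(tf.distribute.ReduceOp.SUM, per_replica_losses, axis=None)

# In the real script this is tf.data.TFRecordDataset(...).map(parse_fn);
# synthetic tensors here just keep the sketch self-contained.
x = tf.random.normal([1000, 20])
y = tf.random.uniform([1000], maxval=10, dtype=tf.int64)
dataset = tf.data.Dataset.from_tensor_slices((x, y)).batch(GLOBAL_BATCH_SIZE)
dist_dataset = strategy.experimental_distribute_dataset(dataset)

for epoch in range(EPOCHS):
  iterator = iter(dist_dataset)        # the "alternate" way: explicit iterator
  for _ in range(STEPS_PER_EPOCH):     # driven with next() for a fixed step count
    loss = distributed_train_step(next(iterator))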

Now it just leaves a warning message, presumably because some gradients are IndexedSlices (e.g. from embedding lookups), but it works fine:
WARNING:tensorflow:Efficient allreduce is not supported for 1 IndexedSlices

I use two RTX 2080 Ti cards; actually, two GPUs are not as powerful as I expected.
For training a speech transformer on WSJ, they gave only about 1.5x the training speed of a single GPU:
with one GPU card, 770 secs for the first epoch and 625 secs for the remaining epochs;
with two GPU cards, 540 secs for the first epoch and 389 secs for the remaining epochs.

Anyway, I spent the whole Sunday on this. I'm happy at last, but for what..?

written time : 2020-03-09 17:39:56.0

about multiple GPUs - Computer

When your multi-GPU code is not working, try this (from https://github.com/tensorflow/tensorflow/issues/36510):

TF_FORCE_GPU_ALLOW_GROWTH=true

Of course, you can also turn this option on in your source code. Note that memory growth must be set before the GPUs are initialized, so run this at the very beginning of your script:
gpu_devices = tf.config.experimental.list_physical_devices('GPU')
for device in gpu_devices:
  # allocate GPU memory on demand instead of grabbing it all up front
  tf.config.experimental.set_memory_growth(device, True)

If you want to check the status of NVLink:

nvidia-smi nvlink --status

written time : 2020-03-07 21:39:48.0