Yesterday I tried to apply MirroredStrategy to my TF 2.1-based training script, but I ran into tons of unexpected errors.
First I tried to follow the tutorial https://www.tensorflow.org/tutorials/distribute/custom_training. The example itself worked perfectly, but the same approach did not carry over to my code as I expected. I thought the issue was caused by TFRecord. That is partially true, but the real problem was that I had not carefully read the "Alternate ways of iterating over a dataset" section of that tutorial.
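For reference, here is a minimal sketch of one of those alternate ways: drive the distributed dataset with an explicit iterator and next() instead of a plain Python for loop. The toy dataset and the reduce_sum step below are hypothetical stand-ins for the real TFRecord pipeline and training step; also note that strategy.run was still named experimental_run_v2 in TF 2.1.

```python
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()  # falls back to CPU if no GPU is visible

# Hypothetical stand-in for the real TFRecord input pipeline.
dataset = tf.data.Dataset.range(16).batch(4)
dist_dataset = strategy.experimental_distribute_dataset(dataset)

@tf.function
def step(batch):
    # In TF 2.1 this call was strategy.experimental_run_v2(...).
    per_replica = strategy.run(lambda x: tf.reduce_sum(x), args=(batch,))
    # Combine the per-replica partial sums into one scalar.
    return strategy.reduce(tf.distribute.ReduceOp.SUM, per_replica, axis=None)

# The "alternate way": an explicit iterator stepped with next(),
# rather than `for batch in dist_dataset:`.
iterator = iter(dist_dataset)
totals = [int(step(next(iterator))) for _ in range(4)]
print(totals)  # [6, 22, 38, 54]
```

An explicit iterator also makes it easy to run a fixed number of steps per epoch, which is handy when the dataset size is unknown ahead of time.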
Now it still leaves a warning message, but otherwise it works fine:
WARNING:tensorflow:Efficient allreduce is not supported for 1 IndexedSlices
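For context, this warning usually shows up when some gradients are sparse tf.IndexedSlices rather than dense tensors, which MirroredStrategy cannot allreduce efficiently. The classic source is an embedding lookup, which a transformer has. A minimal sketch (outside any distribution strategy) of how such a gradient arises:

```python
import tensorflow as tf

# Gradients through a gather (i.e. an embedding lookup) come back as
# tf.IndexedSlices: only the looked-up rows carry gradient.
emb = tf.Variable(tf.random.normal([100, 8]))
with tf.GradientTape() as tape:
    rows = tf.gather(emb, [3, 7, 7])  # embedding-style lookup
    loss = tf.reduce_sum(rows)
grad = tape.gradient(loss, emb)
print(type(grad).__name__)  # IndexedSlices
```

Converting such a gradient to a dense tensor silences the warning but wastes memory on large vocabularies, so leaving it as-is is usually fine.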
I use two RTX 2080 Ti cards; honestly, two GPUs are not as much faster than one as I expected.
For training a speech transformer on WSJ, two GPUs gave only about a 1.5x speedup over single-GPU training.
With one GPU card: 770 secs for the first epoch and 625 secs for the remaining epochs.
With two GPU cards: 540 secs for the first epoch and 389 secs for the remaining epochs.
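For what it's worth, the per-epoch times above work out to roughly 1.4x on the first epoch and 1.6x in steady state:

```python
# Per-epoch times (seconds) from the runs above.
single_first, single_rest = 770, 625
dual_first, dual_rest = 540, 389

first_speedup = round(single_first / dual_first, 2)
steady_speedup = round(single_rest / dual_rest, 2)
print(first_speedup, steady_speedup)  # 1.43 1.61
```

The gap from the ideal 2x is presumably the allreduce communication cost, plus the IndexedSlices gradients mentioned earlier that cannot use the efficient allreduce path.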
Anyway, I spent the whole Sunday on this. I'm happy it works at last, but for what..?