## What's new?

## LM for ASR - Computer

0. BOS vs. no-BOS

1. Peephole connections: TF 2.0 may be a simple solution for this.

2. Generative accuracy, uni-LSTM vs. bi-LSTM: 10/1, using PTB?

- https://medium.com/@david.campion/text-generation-using-bidirectional-lstm-and-doc2vec-models-1-3-8979eb65cb3a

3. Review the NIPS paper: 10/1

- https://papers.nips.cc/paper/5651-bidirectional-recurrent-neural-networks-as-generative-models.pdf
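Item 0 concerns whether to prepend a beginning-of-sentence token when training and scoring the LM. A toy bigram sketch can show how that choice changes a sentence's log-probability; all names and probabilities below are made up for illustration and are not from the kmlm repo:

```python
import math

# Hypothetical bigram log-probabilities (illustrative values only).
bigram = {
    ("<bos>", "the"): math.log(0.5),
    ("the", "cat"): math.log(0.2),
    ("cat", "</s>"): math.log(0.3),
}
# Without a BOS token, the first word needs its own (unigram) probability.
unigram = {"the": math.log(0.1)}

def logprob_with_bos(words):
    """Score a sentence, conditioning the first word on <bos>."""
    total, prev = 0.0, "<bos>"
    for w in words:
        total += bigram[(prev, w)]
        prev = w
    return total

def logprob_without_bos(words):
    """Score a sentence, starting from a unigram on the first word."""
    total, prev = unigram[words[0]], words[0]
    for w in words[1:]:
        total += bigram[(prev, w)]
        prev = w
    return total

sent = ["the", "cat", "</s>"]
print(logprob_with_bos(sent))     # log 0.5 + log 0.2 + log 0.3
print(logprob_without_bos(sent))  # log 0.1 + log 0.2 + log 0.3
```

With a real model the two scores generally differ, so the choice matters when comparing perplexities.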


written time : 2019-09-30 22:42:56.0

## Adding a new Lambda layer for Keras models in a multi-GPU environment - Computer

git: https://github.com/sephiroce/kmlm, commit id: 1578f99

To feed variable-length sequences into CuDNNLSTM layers, I needed to build a Lambda function.

The return value of the Lambda function was a log-probability, which is a scalar.

I ran into the error "Can't concatenate scalars (use tf.stack instead)".

The solution was to expand the value to a 1-D tensor using K.expand_dims in the Lambda function, and to use y_pred[0] instead of y_pred when compiling the model.

In the Lambda function:

```python
import keras.backend as K

# Masked sum of log-probabilities; expand the scalar to shape (1,)
# so the outputs of the multi-GPU replicas can be concatenated.
loss = tf.reduce_sum(full_logprob * seq_mask)
return K.expand_dims(loss, axis=0)
```

When compiling the model:

```python
# y_pred[0] recovers the scalar loss from the length-1 tensor.
model.compile(loss={Constants.KEY_CCE: lambda y_true, y_pred: y_pred[0]},
              optimizer=optimizer)
```

The problem seems to be solved.
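The shape arithmetic behind the fix can be checked without TensorFlow: under multi-GPU training the per-replica outputs get concatenated, a rank-0 (scalar) value cannot be concatenated, but a shape-(1,) tensor can, and indexing with [0] recovers the scalar. A NumPy sketch of the same idea (an illustration only, not the kmlm code):

```python
import numpy as np

# Each GPU replica returns a scalar (rank-0) loss.
loss_gpu0 = np.float32(1.5)
loss_gpu1 = np.float32(2.5)

# Rank-0 arrays cannot be concatenated; this is the NumPy analogue of
# "Can't concatenate scalars (use tf.stack instead)".
try:
    np.concatenate([loss_gpu0, loss_gpu1])
    concat_of_scalars_failed = False
except ValueError:
    concat_of_scalars_failed = True

# After a K.expand_dims(loss, axis=0)-style expansion to shape (1,),
# the per-replica losses concatenate cleanly.
merged = np.concatenate([np.expand_dims(loss_gpu0, axis=0),
                         np.expand_dims(loss_gpu1, axis=0)])

# In the Keras loss, y_pred[0] recovers the per-replica scalar.
scalar_again = np.expand_dims(loss_gpu0, axis=0)[0]
print(concat_of_scalars_failed, merged.shape, scalar_again)
```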


written time : 2019-09-23 23:44:09.0

## Important dates - Daily life

Conferences in 2020

- ICASSP: 21 October 2019
- ACL: 9 December 2019
- Interspeech: late February 2020

Conferences in 2021

- EMNLP: early July 2020
- AAAI: end of August 2020
- ICML: mid-January 2020
- NIPS: mid-May 2020
- ICLR: mid-September 2020


written time : 2019-09-15 22:39:13.0