Natural Language Generation through character-based RNNs with finite-state prior knowledge
Raghav Goyal, Marc Dymetman, Eric Gaussier
Recently, Wen et al. (2015) proposed a Recurrent Neural Network (RNN) approach to the
generation of utterances from dialog acts, and showed that although their model requires less
effort to develop than a rule-based system, it improves certain aspects of the utterances, in
particular their naturalness. However, their system generates at the word level, which requires
pre-processing the data by substituting named entities with placeholders. This pre-processing
prevents the model from handling some contextual effects and from managing multiple
occurrences of the same attribute.
Our approach uses a character-level model, which, unlike the word-level model, makes it possible
to learn to “copy” information from the dialog act to the target without having to pre-process
the input. To avoid generating non-words and inventing information not present in the
input, we propose a method for incorporating prior knowledge into the RNN in the form of a
weighted finite-state automaton over character sequences. Automatic and human evaluations
show improved performance over baselines on several evaluation criteria.
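To make the idea concrete, the following is a minimal illustrative sketch (not the paper's exact formulation): at each decoding step, the character RNN's next-character distribution is multiplied by the weights of the WFSA transitions leaving the current automaton state and renormalized, so only character sequences licensed by the automaton can be generated. The toy alphabet, the fixed stand-in distribution, and the greedy decoder are all our own assumptions for illustration.

```python
# Toy alphabet; '$' marks end of sequence.
VOCAB = list("ab$")

# Toy weighted automaton accepting (ab)* followed by '$':
# maps (state, char) -> (next_state, arc_weight); -1 is the final state.
WFSA = {
    (0, "a"): (1, 1.0),
    (1, "b"): (0, 1.0),
    (0, "$"): (-1, 1.0),
}

def rnn_dist(prefix):
    # Stand-in for a trained character RNN: a fixed next-character
    # distribution that shifts toward end-of-sequence after 4 characters.
    if len(prefix) >= 4:
        return {"a": 0.1, "b": 0.1, "$": 0.8}
    return {"a": 0.5, "b": 0.4, "$": 0.1}

def constrained_decode(max_len=10):
    state, out = 0, []
    for _ in range(max_len):
        p = rnn_dist(out)
        # Keep only characters licensed by the WFSA, reweighted by arc weight.
        scores = {c: p[c] * WFSA[(state, c)][1]
                  for c in VOCAB if (state, c) in WFSA}
        total = sum(scores.values())
        probs = {c: s / total for c, s in scores.items()}
        char = max(probs, key=probs.get)  # greedy choice
        if char == "$":
            break
        out.append(char)
        state = WFSA[(state, char)][0]
    return "".join(out)

print(constrained_decode())  # -> "abab"
```

With this automaton the decoder can only alternate "a" and "b" before stopping, so non-words outside the automaton's language are impossible regardless of what the RNN scores prefer.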
Coling, Osaka, Japan, December 11-16, 2016.