Quality Estimation as a Proxy to Machine Translation Evaluation

Lucia Specia, Raj Dhwaj, Marco Turchi
Most evaluation metrics for machine translation (MT) require reference translations for each sentence in order to produce a score reflecting certain aspects of its quality. The de facto metrics, BLEU and NIST, are known to have good correlation with human evaluation at the corpus level, but this is not the case at the segment level. As an attempt to overcome these two limitations, we address the problem of evaluating the quality of MT as a prediction task, where reference-independent features are extracted from the input sentences and their translations, and a quality score is obtained based on models produced from training data. We show that this approach yields better correlation with human evaluation than commonly used metrics, even with models trained on different MT systems, language pairs and text domains.
Machine Translation, Volume 24, Number 1, pages 39-50.
Full paper available here
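
As a rough illustration of the prediction setup the abstract describes, the sketch below trains a regressor on a handful of hypothetical reference-independent features (source length, translation length, length ratio, a language-model log-probability) and measures segment-level Pearson correlation against human scores. The features, data, and model choice are illustrative assumptions only, not the features or models used in the paper.

# Minimal quality-estimation sketch: reference-free features -> predicted
# quality score -> correlation with human judgements. All values are toy data.
import numpy as np
from sklearn.svm import SVR
from scipy.stats import pearsonr

# Toy reference-independent features per (source, translation) pair:
# [source length, translation length, length ratio, LM log-probability]
X_train = np.array([
    [12, 14, 1.17, -35.2],
    [ 7,  6, 0.86, -18.9],
    [20, 25, 1.25, -61.0],
    [ 9, 10, 1.11, -22.4],
])
y_train = np.array([3.5, 4.0, 2.0, 3.0])   # human quality scores (e.g. 1-5)

X_test = np.array([
    [15, 16, 1.07, -40.1],
    [ 5,  9, 1.80, -30.7],
    [10, 11, 1.10, -25.3],
    [22, 20, 0.91, -55.8],
])
human_test = np.array([3.8, 1.5, 3.6, 2.2])

# Train a regression model on the training pairs and predict quality scores
# for unseen segments, with no reference translations involved.
model = SVR(kernel="rbf", C=1.0)
model.fit(X_train, y_train)
predicted = model.predict(X_test)

# Segment-level correlation between predicted scores and human judgements.
corr, _ = pearsonr(predicted, human_test)
print(f"Predicted scores: {predicted}")
print(f"Pearson correlation with human scores: {corr:.3f}")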