Published on Mon Sep 28 2020

Aspects of Terminological and Named Entity Knowledge within Rule-Based Machine Translation Models for Under-Resourced Neural Machine Translation Scenarios

Daniel Torregrosa, Nivranshu Pasricha, Maraim Masoud, Bharathi Raja Chakravarthi, Juan Alonso, Noe Casas, Mihael Arcan

Abstract

Rule-based machine translation is a machine translation paradigm where linguistic knowledge is encoded by an expert in the form of rules that translate text from source to target language. While this approach grants extensive control over the output of the system, the cost of formalising the needed linguistic knowledge is much higher than training a corpus-based system, where a machine learning approach is used to automatically learn to translate from examples. In this paper, we describe different approaches to leverage the information contained in rule-based machine translation systems to improve a corpus-based one, namely a neural machine translation model, with a focus on a low-resource scenario. Three different kinds of information were used: morphological information, named entities and terminology. In addition to evaluating the general performance of the system, we systematically analysed the performance of the proposed approaches when dealing with the targeted phenomena. Our results suggest that the proposed models have limited ability to learn from external information, and most approaches do not significantly alter the results of the automatic evaluation, but our preliminary qualitative evaluation shows that in certain cases the hypotheses generated by our system exhibit favourable behaviour, such as preserving the use of the passive voice.
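The abstract lists the kinds of external information used (morphology, named entities, terminology) but not how they reach the neural model. One common realisation is source-side inline annotation, where the RBMT-suggested translation of a matched term is injected next to it so the NMT model can learn to copy it. The sketch below is a minimal, hypothetical illustration of that idea, assuming a small terminology/named-entity lexicon extracted from the RBMT system; the tag tokens, lexicon entries and matching strategy are illustrative assumptions, not the paper's actual scheme.

```python
# Hypothetical sketch of source-side annotation: terminology and named-entity
# translations taken from an RBMT system are injected inline into the source
# sentence before it is fed to the NMT model. Lexicon entries and tag tokens
# are illustrative placeholders.

rbmt_lexicon = {
    "heart attack": "Herzinfarkt",                                # terminology (EN -> DE)
    "World Health Organization": "Weltgesundheitsorganisation",   # named entity
}

def annotate_source(sentence: str, lexicon: dict) -> str:
    """Append the RBMT translation after each matched source term,
    wrapped in special tokens the NMT model can learn to copy."""
    annotated = sentence
    # Longest entries first so multi-word terms are matched before substrings.
    for term in sorted(lexicon, key=len, reverse=True):
        if term in annotated:
            annotated = annotated.replace(
                term, f"<term> {term} <trans> {lexicon[term]} </term>"
            )
    return annotated

print(annotate_source(
    "The World Health Organization reported a rise in heart attack cases.",
    rbmt_lexicon,
))
```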

Fri Nov 01 2019
NLP
On the Linguistic Representational Power of Neural Machine Translation Models
Despite the success of deep neural networks in natural language processing (NLP), their interpretability remains a challenge. We analyze the representations learned by neural machine translation models at various levels of granularity and evaluate their quality. We show that deep NMT models learn a non-trivial amount of linguistic information.
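The teaser does not say how representation quality is measured; a standard way to evaluate learned representations is a probing (diagnostic) classifier trained on frozen hidden states. Below is a minimal sketch of that setup, assuming per-token encoder states and linguistic labels (e.g. POS tags) are already extracted; the random arrays are placeholders for real data.

```python
# Hypothetical probing-classifier sketch: a linear classifier is trained on
# frozen NMT encoder states to predict a linguistic property (here, POS tags).
# Higher probing accuracy is read as more linguistic information in the states.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
encoder_states = rng.normal(size=(5000, 512))  # one 512-d state per token (placeholder)
pos_tags = rng.integers(0, 12, size=5000)      # one of 12 POS classes per token (placeholder)

X_train, X_test, y_train, y_test = train_test_split(
    encoder_states, pos_tags, test_size=0.2, random_state=0
)
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("probing accuracy:", probe.score(X_test, y_test))
```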
Tue Aug 07 2018
NLP
Design Challenges in Named Entity Transliteration
We analyze some of the fundamental design challenges that impact the development of a multilingual state-of-the-art named entity transliteration system. We empirically evaluate the transliteration task using a traditional weighted finite-state transducer (WFST) approach against two neural approaches.
Fri Aug 18 2017
NLP
Neural machine translation for low-resource languages
Neural machine translation (NMT) approaches have improved the state of the art in many machine translation settings. We find that while statistical machine translation (SMT) remains the best option for low-resource settings, our method can produce acceptable translations with only 70,000 tokens of training data.
Thu Sep 07 2017
NLP
Translating Terminological Expressions in Knowledge Bases with Neural Machine Translation
The challenge of translating ontology labels or terminological expressions documented in knowledge bases lies in the highly specific vocabulary and the lack of contextual information. We evaluate the translation quality of terminological expressions in the medical and financial domain with statistical and neural machine translation methods.
Thu Feb 28 2019
Machine Learning
Non-Parametric Adaptation for Neural Machine Translation
Neural networks trained with gradient descent are known to be susceptible to catastrophic forgetting caused by parameter shift during the training process. We propose a novel n-gram-level retrieval approach that relies on local phrase-level similarities. We complement this with an expressive neural network to extract information from the noisy retrieved context.
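The teaser names an n-gram-level retrieval scheme without spelling it out; one plausible reading is indexing a translation memory by source n-grams and ranking retrieved entries by local n-gram overlap. The sketch below illustrates that reading only; the index structure, overlap score and toy memory are illustrative assumptions, not the paper's method.

```python
# Hypothetical sketch of n-gram-level retrieval from a translation memory:
# source n-grams of the input are looked up in an index built over the memory,
# and matching entries are ranked by how many n-grams they share with the input.
from collections import defaultdict

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def build_index(memory, n=3):
    """memory: list of (source_tokens, target_phrase) pairs."""
    index = defaultdict(list)
    for src_tokens, tgt_phrase in memory:
        entry = (tuple(src_tokens), tgt_phrase)
        for gram in ngrams(src_tokens, n):
            index[gram].append(entry)
    return index

def retrieve(input_tokens, index, n=3):
    """Rank memory entries by the number of source n-grams shared with the input
    (a crude local phrase-level similarity)."""
    scores = defaultdict(int)
    for gram in ngrams(input_tokens, n):
        for entry in index.get(gram, []):
            scores[entry] += 1
    return sorted(scores, key=scores.get, reverse=True)

memory = [
    ("the central bank raised interest rates".split(), "die Zentralbank erhöhte die Zinsen"),
    ("the bank of the river".split(), "das Ufer des Flusses"),
]
index = build_index(memory, n=3)
print(retrieve("the central bank raised rates again".split(), index, n=3))
```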
Tue Oct 15 2019
NLP
On the Importance of Word Boundaries in Character-level Neural Machine Translation
Neural Machine Translation (NMT) models generally perform translation using a fixed-size lexical vocabulary. The standard approach to overcome this limitation is to segment words into subword units. Recent studies have shown that the same approach can be extended to perform NMT directly at the level of characters.
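As an illustration of the subword segmentation the teaser refers to, here is a toy byte-pair-encoding-style sketch: the most frequent symbol pairs are merged on a tiny word-frequency table and then used to segment a new word. The corpus and merge count are placeholders; real systems typically rely on dedicated tools rather than this simplified routine.

```python
# Toy BPE-style sketch: learn frequent symbol-pair merges from a small
# word-frequency table, then segment a word with the learned merges.
from collections import Counter

def merge_pair(symbols, pair):
    """Replace every adjacent occurrence of `pair` with a single merged symbol."""
    out, i = [], 0
    while i < len(symbols):
        if i < len(symbols) - 1 and (symbols[i], symbols[i + 1]) == pair:
            out.append(symbols[i] + symbols[i + 1])
            i += 2
        else:
            out.append(symbols[i])
            i += 1
    return tuple(out)

def learn_merges(word_freqs, num_merges=10):
    # Each word starts as a sequence of characters plus an end-of-word marker.
    vocab = {tuple(word) + ("</w>",): freq for word, freq in word_freqs.items()}
    merges = []
    for _ in range(num_merges):
        pair_counts = Counter()
        for symbols, freq in vocab.items():
            for a, b in zip(symbols, symbols[1:]):
                pair_counts[(a, b)] += freq
        if not pair_counts:
            break
        best = pair_counts.most_common(1)[0][0]
        merges.append(best)
        vocab = {merge_pair(symbols, best): freq for symbols, freq in vocab.items()}
    return merges

def segment(word, merges):
    symbols = tuple(word) + ("</w>",)
    for pair in merges:
        symbols = merge_pair(symbols, pair)
    return symbols

word_freqs = {"lower": 5, "lowest": 3, "newer": 6, "newest": 2}
merges = learn_merges(word_freqs, num_merges=8)
print(segment("lowest", merges))
```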