So said João Graça, Unbabel’s co-founder and CTO, to me the other day as we were having coffee. Truth is, machines don’t always get it right and we all know how embarrassing a bad translation can be.
But how do we know when an automatic machine translation is bad? Is it possible to know where things have gone wrong? It turns out there’s a whole field of study about this, and both João and André Martins, Unbabel’s Head of Research, are two of the world’s leading researchers in it.
I sat down with both to discuss Unbabel’s award-winning Quality Estimation system and how it works, as well as our automatic post-editing tools. These are two topics that will be discussed at AMTA 2018 (Association for Machine Translation in the Americas) in a workshop led by the Unbabel team on 21 March 2018.
What is Translation Quality Estimation?
The goal of Quality Estimation is to evaluate a translation system’s quality without access to reference translations. According to André Martins, it can be used in many different ways:
- Informing an end user about the reliability of translated content
- Deciding if a translation is ready for publishing or if it requires human post-editing
- Highlighting the words that need to be changed
“The idea here is to deliver a fast translation and to reduce its costs”.
But how does our Translation Quality Estimation system work?
Unbabel’s Award-Winning Translation Quality Estimation
We’ve been working on Translation Quality Estimation and automatic post-editing tools since Unbabel was founded nearly five years ago; together, they enable us to provide human-quality translations at the scale of machine translation.
In João Graça’s words, “we have an award-winning Quality Estimation system in place at Unbabel that guarantees that if a translation is not good to go it gets reviewed by our community of 55,000 editors, who can correct the mistakes very quickly and provide a high-quality translation to our customers. And the more we translate, the more the system learns and the less mistakes it makes.”
This makes the Quality Estimation system one of the key components of Unbabel’s translation pipeline.
So, how is this possible?
“We check the corrections made by the editors to the text from machine translation and with this data we can understand the type of corrections the editors usually make. This allows us to detect patterns that help us understand when we have similar texts to know exactly what needs to be edited before involving humans in the process.” — André Martins.
If the translation has a good score, it gets sent to the customer without ever involving humans in the process. When the score is low, however, the system identifies the words that are incorrect, enabling human post-editors to pay special attention to the parts of the sentence that need to be changed.
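The routing step described here can be sketched in a few lines of Python. This is a hypothetical illustration only: the threshold value, function name, and data shapes are assumptions for the sake of the example, not Unbabel’s actual system.

```python
# Hypothetical sketch of QE-based routing; the threshold and all names
# are illustrative assumptions, not Unbabel's real implementation.

GOOD_ENOUGH = 0.85  # assumed sentence-level quality threshold

def route_translation(sentence_score, word_flags, translation):
    """Decide whether a machine translation can ship or needs human review.

    sentence_score: estimated quality in [0, 1], computed without any
                    reference translation
    word_flags:     list of (word, is_suspect) pairs from word-level QE
    """
    if sentence_score >= GOOD_ENOUGH:
        # High estimated quality: deliver directly to the customer
        return {"action": "deliver", "translation": translation}
    # Low score: surface the suspect words so editors can focus on them
    suspects = [word for word, is_suspect in word_flags if is_suspect]
    return {"action": "human_post_edit",
            "translation": translation,
            "attend_to": suspects}
```

The point of the sketch is the branch: a single sentence-level score gates delivery, while the word-level flags tell the editor where to look first.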
But that’s not all. “We have also developed a tool called Smartcheck that searches for grammatical errors or anything that is not aligned with the guidelines that the customer gave us”, explained André.
What about automatic post-editing?
“You can think of quality estimation as a way to detect mistakes in the translation and of automatic post-editing as a way to correct those mistakes”, said André Martins.
At Unbabel, we have also combined Quality Estimation with automatic post-editing and we’ve seen huge benefits from those two technologies being trained or stacked together.
“Given the similarity between Quality Estimation and Automatic Post-Editing, we decided to join our efforts to see how we could achieve better results. So, we teamed up with Marcin Junczys-Dowmunt, from Adam Mickiewicz University, and combined their automatic post-editing system with our Quality Estimation system. The results were quite impressive. We improved our previous best word-level score from 49.5% to a new state-of-the-art, 57.5%, and we were able to build a quality score system for sentences.” — explained João Graça.
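One common way to stack the two systems can be sketched as follows. This is a sketch under assumptions, not the published Unbabel/AMU architecture: automatic post-editing proposes a corrected translation, and Quality Estimation keeps the correction only when it scores higher than the original machine output.

```python
def ape_with_qe_gate(source, mt_output, ape_correct, qe_score):
    """Illustrative stacking of automatic post-editing (APE) with
    quality estimation (QE). Both callables are stand-ins for real
    models; only the gating logic is shown.

    ape_correct(source, mt) -> candidate corrected translation
    qe_score(source, translation) -> estimated quality in [0, 1]
    """
    candidate = ape_correct(source, mt_output)
    # Keep the automatic correction only if QE prefers it over the
    # original machine translation
    if qe_score(source, candidate) > qe_score(source, mt_output):
        return candidate
    return mt_output
```

The gate is what makes the combination safe: if the post-editor’s “correction” does not actually improve the estimated quality, the pipeline falls back to the original machine translation.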
Unbabel’s Workshop at AMTA 2018
Quality Estimation is a topic that is often discussed in research but, according to João Graça, “not that much within the industry”. So, the idea for this workshop is to “gather people who work on QE and go through how it is being used across many different systems”, as João told me.
This will allow everyone to better understand the future of Quality Estimation and make it more useful for the industry.
If you’re interested in finding out more about Unbabel’s Quality Estimation system, take a closer look, and if you’re going to be in Boston at the end of March, join the AMTA 2018 workshop.