Editor's note: We'd like to invite people with interesting machine learning and data analysis applications to explain the techniques that are working for them in the real world on real data. Accentuate.us is an open-source browser add-on that uses machine learning techniques to make it easier for people around the world to communicate.
Many languages around the world use the familiar Latin alphabet (A-Z), but in order to represent the sounds of the language accurately, their writing systems employ diacritical marks and other special characters. For example:
- Vietnamese (Mọi người đều có quyền tự do ngôn luận và bầy tỏ quan điểm),
- Hawaiian (Ua noa i nā kānaka apau ke kūʻokoʻa o ka manaʻo a me ka hōʻike ʻana i ka manaʻo),
- Ewe (Amesiame kpɔ mɔ abu tame le eɖokui si eye wòaɖe eƒe susu agblɔ faa mɔxexe manɔmee),
- and hundreds of others.
Speakers of these languages have difficulty entering text into a computer because keyboards are often not available, and even when they are, typing special characters can be slow and cumbersome. Also, in many cases, speakers may not be completely familiar with the “correct” writing system and may not always know where the special characters belong. The end result is that for many languages, the texts people type in emails, blogs, and social networking sites are left as plain ASCII, omitting any special characters, and leading to ambiguities and confusion.
To solve this problem, we have created a free and open source Firefox add-on called Accentuate.us that allows users to type texts in plain ASCII, and then automatically adds all diacritics and special characters in the correct places, a process we call "Unicodification". Accentuate.us uses a machine learning approach, employing both character-level and word-level models trained on data crawled from the web for more than 100 languages.
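The word-level models key on "asciified" forms of words. As an illustrative sketch (not the add-on's actual code), accents that Unicode represents as combining marks can be stripped by decomposing to NFD and dropping the marks; letters that remain distinct code points even after decomposition (such as Ewe ɖ or Vietnamese đ) would need an explicit mapping on top of this:

```python
# Sketch: strip diacritics by decomposing to NFD and dropping combining marks.
import unicodedata

def asciify(text: str) -> str:
    decomposed = unicodedata.normalize("NFD", text)
    return "".join(ch for ch in decomposed if not unicodedata.combining(ch))

print(asciify("múinteoirí"))  # muinteoiri
print(asciify("níos mó"))     # nios mo
```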
It is easiest to describe our algorithm with an example. Let’s say a user is typing Irish (Gaelic), and they enter the phrase nios mo muinteoiri fiorchliste with no diacritics. For each word in the input, we check to see if it is an “ascii-fied” version of a word that was seen during training.
- In our example, for two of the words, there is exactly one candidate unicodification in the training data: nios is the asciification of the word níos, which is very common in our Irish data, and muinteoiri is the asciification of múinteoirí, also very common. As there are no other candidates, we take níos and múinteoirí as the unicodifications.
- There are two possibilities for mo; it could be correct as is, or it could be the asciification of mó. When there is an ambiguity of this kind, we rely on standard word-level n-gram language modeling; in this case, the training data contains many instances of the set phrase níos mó, and no examples of níos mo, so mó is chosen as the correct answer.
- Finally, the word fiorchliste doesn’t appear at all in our training data, so we resort to a character-level model, treating each character that could admit a diacritic as a classification problem. For each language, we train a naive Bayes classifier using trigrams (three character sequences) in a neighborhood of the ambiguous character as features. In this case, the model classifies the first “i” as needing an acute accent, and leaves all other characters as plain ASCII, thereby (correctly) restoring fiorchliste to fíorchliste.
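To make the character-level step concrete, here is a toy, self-contained sketch of the idea: a naive Bayes classifier with Laplace smoothing over the character trigrams touching each ambiguous position. The six-word "training corpus" is made up for illustration (the real system trains on web-crawled text), and for simplicity only i/í is treated as ambiguous:

```python
# Toy sketch of the character-level fallback: classify each ambiguous
# position with naive Bayes, using the trigrams overlapping it as features.
import math
from collections import Counter, defaultdict

def asciify(word):
    return word.replace("í", "i")          # toy: only i/í is ambiguous here

def trigrams_at(word, pos):
    # The (up to three) character trigrams that overlap position pos.
    padded = "#" + word + "#"
    p = pos + 1
    return [padded[s:s + 3] for s in range(max(0, p - 2), p + 1)
            if s + 3 <= len(padded)]

class CharNB:
    def __init__(self, training_words):
        self.class_counts = Counter()
        self.feat_counts = defaultdict(Counter)
        for true_word in training_words:
            plain = asciify(true_word)
            for pos, ch in enumerate(plain):
                if ch == "i":               # an ambiguous position
                    label = true_word[pos]  # "i" or "í"
                    self.class_counts[label] += 1
                    for f in trigrams_at(plain, pos):
                        self.feat_counts[label][f] += 1
        self.vocab = {f for c in self.feat_counts for f in self.feat_counts[c]}

    def classify(self, plain_word, pos):
        feats = trigrams_at(plain_word, pos)
        total = sum(self.class_counts.values())
        best, best_score = None, -math.inf
        for label, n in self.class_counts.items():
            score = math.log(n / total)
            denom = sum(self.feat_counts[label].values()) + len(self.vocab)
            for f in feats:                 # Laplace-smoothed log-likelihoods
                score += math.log((self.feat_counts[label][f] + 1) / denom)
            if score > best_score:
                best, best_score = label, score
        return best

    def restore(self, plain_word):
        return "".join(self.classify(plain_word, i) if ch == "i" else ch
                       for i, ch in enumerate(plain_word))

model = CharNB(["fíor", "fír", "chliste", "cliste", "imir", "litir"])
print(model.restore("fiorchliste"))  # fíorchliste
```

Even in this tiny example the model accents the first "i" (its trigrams resemble those of fíor and fír) and leaves the second alone (its trigrams resemble those of chliste and cliste), mirroring the behavior described above.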
The example above illustrates the ability of the character-level models to handle never-before-seen words; in this particular case fíorchliste is a compound word, and the character sequences in the two pieces fíor and chliste are relatively common in the training data. It is also an effective way of handling morphologically complex languages, where there can be thousands or even millions of forms of any given root word, so many that one is lucky to see even a small fraction of them in a training corpus. But the chances of seeing individual morphemes are much higher, and these are captured reasonably well by the character-level models.
We are far from the first to have studied this problem from the machine learning point of view (full references are given in our paper), but this is the first time that models have been trained for so many languages, and made available in a form that will allow widespread adoption in many language communities.
We have done a detailed evaluation of the performance of the software for all of the languages (all the numbers are in the paper) and this raised a number of interesting issues.
First, we were only able to do this on such a large scale because of the availability of training text on the web in so many languages. But experience has shown that web texts are much noisier than texts found in traditional corpora; does this have an impact on the performance of a statistical system? The short answer appears to be "yes," at least for the problem of unicodification. In cases where we had access to high quality corpora of books and newspaper texts, we achieved substantially better performance.
Second, it is probably no surprise that some languages are much harder than others. A simple baseline algorithm is to leave everything as plain ASCII, and this performs quite well for languages like Dutch which have only a small number of words containing diacritics (this baseline gets 99.3% of words correct for Dutch). In Figure 1 we plot the word-level accuracy of Accentuate.us against this baseline.
But recall there are really two models at play, and we could ask about the relative contribution of, say, the character-level model to the performance of the system. With this in mind, we introduce a second “baseline” which omits the character-level model entirely. More precisely, given an ASCII word as input, it chooses the most common unicodification that was seen in the training data, and leaves the word as ASCII if there were no candidate unicodifications in the training data. In Figure 2 we plot the word-level accuracy of Accentuate.us against this improved baseline. We see that the contribution of the character model is really quite small in most cases, and not surprisingly several of the languages where it helps the most are morphologically quite complex, like Hungarian and Turkish (though Vietnamese is not). In quite a few cases, the character model actually hurts performance, although our analyses show that this is generally due to noise in the training data: a lot of noise in web texts is English (and hence almost pure ASCII) so the baseline will outperform any algorithm that tries to add diacritics.
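A sketch of this second baseline, with hypothetical frequency counts standing in for real training data: each ASCII form maps to its most frequent observed unicodification, and unseen words stay as typed.

```python
# Sketch of the character-model-free baseline: map each ASCII form to the
# most frequent unicodification observed in training; leave unseen words alone.
import unicodedata
from collections import Counter, defaultdict

def asciify(text):
    decomposed = unicodedata.normalize("NFD", text)
    return "".join(ch for ch in decomposed if not unicodedata.combining(ch))

def build_table(training_counts):
    # training_counts: {unicode form: corpus frequency} (made-up numbers)
    table = defaultdict(Counter)
    for form, n in training_counts.items():
        table[asciify(form)][form] += n
    return table

def baseline_restore(word, table):
    candidates = table.get(word)
    return candidates.most_common(1)[0][0] if candidates else word

table = build_table({"níos": 120, "mó": 80, "mo": 60, "múinteoirí": 15})
print(baseline_restore("mo", table))           # mó (most frequent candidate)
print(baseline_restore("fiorchliste", table))  # fiorchliste (unseen: unchanged)
```

Note that this baseline uses no sentence context at all: unlike the full system, there is no n-gram model to prefer mó specifically after níos, only the global frequency of each candidate.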
The Firefox add-on works by communicating with the Accentuate.us web service via its stable API, and we have a number of other clients including a vim plugin (written by fellow St. Louisan Bill Odom) and Perl, Python, and Haskell implementations. We hope that developers interested in supporting language communities around the world will consider integrating this service in their own software.
Please feel free to contact us with any questions, comments, or suggestions.