According to biblical legend, humans in a time before our time once attempted to build a tower so tall and grand as to reach the heavens. Hundreds of thousands of men worked on the tower, known as the Tower of Babel; but God, despising humanity’s ambition to reach his abode, laid a curse upon the people that brought it all crumbling down: he struck the project at its very foundation, the organisation of the workers. God made the men speak different languages from each other, creating chaos and confusion and ending mankind’s dream of reaching heaven forever. This, according to Abrahamic mythology, is the origin of the many languages spoken today, and the reason some people cannot understand each other even though they may look alike and live in the same land.
This myth was inspired by the ancient perception that languages, while useful for communicating with those who understand you, create problems with those who don’t. Language is the mother of culture, and different cultures have fought for dominance over those they saw as foreign or inferior for ages – the word “barbarian” itself originates with the ancient Greeks, who said the languages of the non-Greek savages were no more than grunts that sounded like “bar-bar-bar-bar”. The language barrier is so potentially divisive that whole empires have fallen because of it, and even today, in a deeply interconnected world, it causes misunderstandings and factionalism. Recently, however, the ever-ambitious Google announced a tool that may be the first generation of something that could one day end the language barrier altogether: the GNMT System.
GNMT, an acronym for “Google Neural Machine Translation”, is a new technology that uses Artificial Neural Networks (something Google has been using for quite some time now) to learn the connection between an input phrase (a sentence in language A) and its corresponding output (the equivalent sentence in language B). While older tools such as online translators break sentences into individual words and rearrange them into a roughly equivalent sentence in the desired language – very often creating nonsensical constructions – the Neural Machine treats the entire phrase as a unit of translation, producing output that makes sense in the target language. This behaviour mimics how humans perceive language: while machines tend to process everything in units (in this case, words), the human brain processes context and ideas to create meaning.
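The difference between word-level and sentence-level translation can be illustrated with a toy sketch. The vocabulary and phrase tables below are entirely made up for demonstration (this is not how GNMT actually works internally – a real neural system learns these mappings from data rather than storing them in tables), but the contrast shows why treating the whole sentence as the unit tends to preserve meaning:

```python
# Toy contrast between word-by-word substitution and whole-sentence
# translation. All vocabulary here is hypothetical example data.

# Word-level dictionary (English -> German), one word at a time.
WORD_TABLE = {
    "how": "wie", "are": "sind", "you": "du",
}

# Sentence-level table: the whole phrase is the unit of translation.
PHRASE_TABLE = {
    "how are you": "wie geht es dir",
}

def word_by_word(sentence: str) -> str:
    """Translate each word independently, ignoring all context."""
    return " ".join(WORD_TABLE.get(w, w) for w in sentence.split())

def whole_phrase(sentence: str) -> str:
    """Translate the sentence as a single unit, falling back to
    word-by-word substitution for unknown sentences."""
    return PHRASE_TABLE.get(sentence, word_by_word(sentence))

print(word_by_word("how are you"))   # "wie sind du" -- ungrammatical
print(whole_phrase("how are you"))   # "wie geht es dir" -- idiomatic
```

The word-by-word version produces a literal but broken rendering, while the phrase-level version yields the idiomatic greeting – the same gap, in miniature, that separates older phrase-splitting translators from sentence-level neural translation.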
It also aims to solve a big problem in modern translation: the infamously mismatched pair that is English to Mandarin Chinese, and vice versa. Since the public announcement of the new technology, Google has been using the GNMT System in its translator, raising the quality of the roughly 18 million translations made every day between the two languages.
It is not, however, perfect. GNMT still suffers from many issues that plague digital translation, such as disregarding paragraph-level context in favour of individual sentences and mistranslating uncommon terms or proper names, but it is a massive step in the direction of universal translation. Applying a loose interpretation of Moore’s law, one might expect the next generations of Neural Translation software to become incrementally more refined and sophisticated, possibly even producing flawless translations by the end of the century. While the new system is not yet accurate and fast enough to be used for every language in Google Translate, only time will tell what its equivalent will look like a mere twenty years from now.