Google Translate gets a boost from deep neural networks

Many of the technological nightmares we bring to life on film revolve around artificial intelligence gone wrong, usually somewhere around the year 2000. 2016 is almost over, though, and we don't live in a post-apocalyptic future—at least, not yet. AI still struggles with things humans do with ease, like forming coherent sentences and mastering games that millions of people play. But Google has made big strides in both of those areas this year. Case in point: a recent boost to Google Translate's performance thanks to deep neural networks.

Earlier this year, one of Google's deep neural networks tackled one of the toughest puzzles in artificial intelligence research when it defeated Lee Sedol, one of the world's top professional Go players. Go is relatively simple to learn but difficult to master, thanks to an emphasis on intuition and strategies that can take many turns to play out. Where chess is played on an eight-by-eight board, Go uses a 19×19 grid. Until this year, AI could beat only the most amateur of Go players.

Now, Google has turned that same neural network technology toward language translation. Google says its technology—called Google Neural Machine Translation, or GNMT—reduces errors by 60 percent in Chinese translation, with even bigger gains in some other languages.

If you're feeling brave, you can check out the complete paper Google published on the subject, but here's a short breakdown. As Science Magazine explains, the deep neural net uses a technique Google's team calls vectors, which gives it more room to understand context: "cat" is more closely associated with "dog" than with "car," for example. The system is trained on pairs of translated sentences; it builds the vectors, then compares the input against them to come up with a set of likely translations.
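The "cat"/"dog"/"car" idea can be sketched in a few lines of code. This is a toy illustration only: the vectors below are invented three-dimensional values, while real systems like GNMT learn vectors with hundreds of dimensions from massive amounts of training data. The point is just how vector similarity captures "cat is closer to dog than to car."

```python
import math

# Toy word vectors with made-up values for illustration; real embeddings
# are learned from data and have hundreds of dimensions.
vectors = {
    "cat": [0.9, 0.8, 0.1],
    "dog": [0.8, 0.9, 0.2],
    "car": [0.1, 0.2, 0.9],
}

def cosine_similarity(a, b):
    """Similarity of two vectors: near 1.0 means pointing the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity(vectors["cat"], vectors["dog"]))  # high similarity
print(cosine_similarity(vectors["cat"], vectors["car"]))  # noticeably lower
```

With vectors like these, "which words are related?" becomes a simple geometric question, which is what lets a neural translator carry context across a whole sentence instead of translating word by word.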

In contrast, Science Magazine notes that the translation method behind most of the languages Translate offers today is phrase-based: it takes user input, breaks it down into isolated chunks, and stitches together its best effort. This technique can produce some truly outlandish results that are unusable at best and dangerous at worst, but it's the state of the art today.
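The chunk-and-stitch approach can be sketched as a lookup against a phrase table. This is a drastically simplified toy, not Google's actual system: the phrase table entries are invented for illustration, and real phrase-based translators also score and reorder candidate phrases statistically. But it shows why translating isolated chunks loses sentence-level context.

```python
# Invented toy phrase table; real systems learn millions of scored entries.
phrase_table = {
    "the cat": "le chat",
    "sat on": "s'est assis sur",
    "the mat": "le tapis",
}

def phrase_translate(sentence, table):
    """Greedily match the longest known phrase at each position and
    stitch the translated chunks together in order."""
    words = sentence.split()
    out = []
    i = 0
    while i < len(words):
        for length in range(len(words) - i, 0, -1):
            chunk = " ".join(words[i:i + length])
            if chunk in table:
                out.append(table[chunk])
                i += length
                break
        else:
            out.append(words[i])  # unknown word passes through untranslated
            i += 1
    return " ".join(out)

print(phrase_translate("the cat sat on the mat", phrase_table))
# "le chat s'est assis sur le tapis"
```

Because each chunk is translated independently, nothing in this scheme knows what the rest of the sentence says, which is exactly the limitation the vector-based approach addresses.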

According to Science Magazine's report, the Google Brain team chose Chinese as its first language to work with, not only because a big part of the team is Chinese, but also because the language is considered one of the most difficult for machines to translate. While the 60% improvement figure above applies to translation from English to Chinese, Google says translation from English to Spanish is 87% more accurate with the new system, and French to English is 83% better. Google tells Science Magazine that it's using GNMT to perform Chinese-to-English translation now, and it'll gradually roll out the technology to other languages in the future.

We're not yet in the age of The Hitchhiker's Guide to the Galaxy's Babel Fish or Doctor Who's TARDIS. Google's Mike Schuster, an engineer on the project and one of the lead authors on the paper Google released this week, acknowledged when speaking to Wired that what the team has now isn't perfect, but added that "it is much, much better" than existing tech. Google hasn't set a date for when we'll see the tech rolling out to other languages, but maybe the Babel Fish future isn't so far off.

Comments closed
    • TheJack
    • 3 years ago

    Google is like a mixed bag. You have to love it and hate it at the same time. Love it because of all the free services it provides and hate it for all the data collection stuff. Good and bad hand in hand.

    • Bensam123
    • 3 years ago

    I don’t consider an AI ‘victory’ in a game like Go or Chess a step forward for AI learning… While it sounds impressive, and rightfully so, it’s more about gaming the rules and getting a machine to work around them rather than actually teaching the machine how to learn.

    Learning itself has no rules. Instead of teaching a machine how to beat an opponent at Go, they should be looking at how to get the machine to learn on its own. A toddler doesn’t start out by learning to play Go. It learns how to talk, walk, interact, and get what it wants… often figuring out what it wants along the way. Hell, people spend their whole lives trying to figure out what they want and how to get it.

    When I hear ‘Ruby the machine likes the color red’ and it’s not a completely arbitrary decision (the machine can tell you why it likes red, it isn’t scripted, and if you reset it, it gives you a different color on a different day), instead of ‘machine beats opponent at Go,’ we’ll be going the right way towards Skynet.

      • blastdoor
      • 3 years ago

      The ultimate goal of most research into AI is profit maximization, not the elimination of the human species. So long as selling useful products to humans is the primary objective, I think it makes a tremendous amount of sense to focus more narrowly on making machines that are very good at specific tasks, and can therefore be turned into specific products.

      So I think they’re doing exactly what they “should” be doing.

        • davidbowser
        • 3 years ago

        Agreed. Chess, Go, and Jeopardy make the news, but the REAL money is spent on making money and almost NEVER even makes press releases.

        I just saw Nathan Wiebe speak a couple of weeks ago about quantum computing research. His topic? Quantum chemistry research to simulate a bacteria-based process that could replace the industrial Haber process. The Haber process for producing ammonia (and its constituent feeder processes) consumes about 1–2% of the world’s energy production yearly. That one process is worth about a gazillion bazillion dollars… or more realistically around $60 billion per year.


      • Voldenuit
      • 3 years ago

      Unlike many chess programs, which calculate all possible moves, AlphaGo selects its moves with a combination of pattern matching, “some” tree-branch prediction, and learned moves.

      AlphaGo’s machine learning component was developed by having the system play against instances of itself and against human players, so its responses are not canned.

      • fix
      • 3 years ago

      A toddler learns because it “wants” to survive. By “wants” I mean evolutionary necessity: those who did not learn did not survive. Machines do not have this necessity yet. I think if we can build an AI that wants to survive and has the means to improve itself, perhaps expand its capacity somehow, we will start seeing the AI learn the way a toddler learns.

        • Ninjitsu
        • 3 years ago

        An AI that wants to “survive” = starting of Skynet!
