LAST MARCH, A computer built by a team of Google engineers beat one of the world’s top players at the ancient game of Go. The match between AlphaGo and Korean grandmaster Lee Sedol was so exhilarating, so upsetting, and so unexpectedly powerful that we turned it into a cover story for the magazine. On a Friday in late April, we were about an hour away from sending this story to the printer when I got an email.

According to the email, Lee had won all five matches—and all against top competition—since his loss to AlphaGo. Even as it surpasses human talents, AI can also pull humans to new heights—a theme that ran through our magazine story. After playing AlphaGo, Lee said the machine opened his eyes to new ways of playing the ancient game, and indeed, it had. We needed to get his latest wins into the story. But we also had a problem: the source of this news was in Korean, and no one in our office spoke the language. We ran it through Google Translate, but it spat out some English that didn’t quite make sense. We had to find a second source.

We did, just in time. And today, as Google rolls out a new incarnation of its translation software, it comes with a certain irony. Online translation couldn’t help our story on the new wave in artificial intelligence, but the new wave in artificial intelligence is improving online translation. The technology that underpinned AlphaGo—deep neural networks—is now playing a very big role on Google Translate.

Modeled after the way neurons connect in the human brain, deep neural networks are the same breed of AI technology that identifies commands spoken into Android phones and recognizes people in photos posted to Facebook, and the promise is that they will reinvent machine translation in much the same way. Google says that with certain languages, its new system—dubbed Google Neural Machine Translation, or GNMT—reduces errors by 60 percent.

For now, it only translates from Chinese into English—perhaps a key translation pair in Google’s larger ambitions. But the company plans to roll it out for the more than 10,000 language pairs now handled by Google Translate. “We can train this whole system in an end-to-end fashion. That makes it much easier for [Google] to focus on reducing the final error rate,” says Google engineer Mike Schuster, one of the lead authors on the paper Google released on the tech today and a member of the Google Brain team, which oversees the company’s AI work. “What we have now is not perfect. But you can tell that it is much, much better.”

All the big Internet giants are moving in the same direction, training deep neural nets using translations gathered from across the Internet. Neural nets already drive small parts of the best online translation systems, and the big players know that deep learning is the way to do it all. “We’re racing against everyone,” says Peter Lee, who oversees a portion of the AI work at Microsoft Research. “We’re all on the verge.”

They’re all moving to this method not only because they can improve machine translation, but because they can improve it in a much faster and much broader way. “The key thing about neural network models is that they are able to generalize better from the data,” says Microsoft researcher Arul Menezes. “With the previous model, no matter how much data we threw at them, they failed to make basic generalizations. At some point, more data was just not making them any better.”

For machine translation, Google is using a form of deep neural network called an LSTM, short for long short-term memory. An LSTM can retain information in both the short and the long term—kind of like your own memory. That allows it to learn in more complex ways. As it analyzes a sentence, it can remember the beginning as it gets to the end. That’s different from Google’s previous translation method, Phrase-Based Machine Translation, which breaks sentences into individual words and phrases. The new method looks at the entire collection of words.
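To make that “memory” idea concrete, here is a toy numpy sketch of a single LSTM cell reading a sequence of word vectors one at a time. This is a simplified illustration of the standard LSTM equations, not Google’s actual GNMT code; the dimensions and random inputs are made up for the example. The cell state `c` is what carries information across the whole sentence, so the final hidden state reflects the beginning of the sequence as well as the end.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM step: gates decide what to forget, what to store, and
    what to output. The cell state c is the long-term memory that lets
    the model recall a sentence's beginning when it reaches the end."""
    z = W @ np.concatenate([x, h_prev]) + b
    n = len(h_prev)
    f = sigmoid(z[0:n])        # forget gate: keep or drop old memory
    i = sigmoid(z[n:2*n])      # input gate: admit new information
    o = sigmoid(z[2*n:3*n])    # output gate: expose memory to the output
    g = np.tanh(z[3*n:4*n])    # candidate memory content
    c = f * c_prev + i * g     # updated cell (long-term) state
    h = o * np.tanh(c)         # updated hidden (short-term) state
    return h, c

# Read a toy "sentence" of five 4-dimensional word vectors.
rng = np.random.default_rng(0)
d_in, d_h = 4, 3
W = rng.normal(scale=0.1, size=(4 * d_h, d_in + d_h))  # all four gates stacked
b = np.zeros(4 * d_h)
h, c = np.zeros(d_h), np.zeros(d_h)
for word_vec in rng.normal(size=(5, d_in)):
    h, c = lstm_step(word_vec, h, c, W, b)
print(h.shape)  # final hidden state summarizes the whole sequence
```

In a real translation system, an encoder LSTM like this reads the source sentence and a decoder LSTM emits the translation; phrase-based systems, by contrast, never build a single state spanning the whole sentence.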

Of course, researchers have been trying to get LSTMs to work on translation for years. The trouble with LSTMs for machine translation was that they couldn’t operate at the pace we have all come to expect from an online service. Google finally got it to work at speed—fast enough to run a service across the Internet at large. “Without doing lots of engineering work and algorithmic work to improve the models,” says Microsoft researcher Jacob Devlin, “the speed is very much slower than traditional models.”

According to Schuster, Google has achieved this speed partly through changes to the LSTMs themselves. Deep neural networks consist of layer after layer of mathematical calculations—linear algebra—with the results of one layer feeding into the next. One trick Google uses is to start the calculations for the second layer before the first layer is finished—and so on. But Schuster also says that much of the speed is driven by Google’s tensor processing units, chips the company specifically built for AI. With TPUs, Schuster says, the same sentence that once took ten seconds to translate via this LSTM model now takes 300 milliseconds.
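The pipelining trick can be sketched in a few lines. In a stack of recurrent layers, layer k at timestep t only needs layer k−1’s output at that same timestep, so layer 2 can start on word 1 while layer 1 is already working on word 2. The toy scheduler below just groups (layer, timestep) work items into parallel “waves”; it is an illustration of the idea, not Google’s actual scheduler, and real speedups come from running each wave on separate hardware.

```python
def pipelined_schedule(n_layers, n_steps):
    """Group (layer, timestep) cells into waves that can run in parallel.

    Cell (k, t) depends only on (k-1, t) and (k, t-1), so every cell
    with the same k + t is independent and forms one wave.
    """
    waves = []
    for w in range(n_layers + n_steps - 1):
        wave = [(k, t) for k in range(n_layers) for t in range(n_steps)
                if k + t == w]
        waves.append(wave)
    return waves

# 3 stacked layers reading a 4-word sentence:
for i, wave in enumerate(pipelined_schedule(3, 4)):
    print(f"wave {i}: {wave}")
```

Run sequentially, 3 layers × 4 timesteps would take 12 steps; pipelined, the same 12 cells fit into 3 + 4 − 1 = 6 waves, because later waves compute several layers at once.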

Like the other big Internet companies, Google trains its neural nets using graphics processing units, chips designed to render images for visual applications like games. Its new machine translation system trains for about a week on about 100 GPU cards, each equipped with a few hundred individual chips. Then the specialized chips execute the model.

Google is unique in building its own chip for this task. But others are moving in a similar direction. Microsoft uses programmable chips called FPGAs to execute neural networks, and companies like Baidu are exploring other types of silicon. All these companies are racing toward the same future—working not just to improve machine translation, but to build AI systems that can understand and respond to natural human language. As Google’s new Allo messaging app shows, these “chat bots” are still flawed. But neural networks are rapidly changing what’s possible. “None of this is solved,” Schuster says. “But there is a constant upward tick.” Or as Google says the Chinese would say: “Yǒu yīgè bùduàn xiàngshàng gōu.”