Google’s AI Translation Tool Creates Its Own Secret Language

Google’s Neural Machine Translation system went live back in September. It uses deep learning to produce better, more natural translations between languages. The company’s AI team calls it the Google Neural Machine Translation system, or GNMT, and it provides a less resource-intensive way to ingest a sentence in one language and produce that same sentence in another. Instead of digesting each word or phrase as a standalone unit, as prior methods did, GNMT takes in the entire sentence as a whole.
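To make “takes in the entire sentence as a whole” concrete, here is a minimal sketch, assuming a toy GRU encoder-decoder in PyTorch. It is not Google’s production architecture (GNMT is a much deeper LSTM stack with attention); it only illustrates the key idea that the encoder consumes the whole source sentence before the decoder emits its first target word.

import torch
import torch.nn as nn

class TinySeq2Seq(nn.Module):
    """Toy encoder-decoder: the source is read in full before decoding."""
    def __init__(self, src_vocab, tgt_vocab, dim=64):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, dim)
        self.tgt_emb = nn.Embedding(tgt_vocab, dim)
        self.encoder = nn.GRU(dim, dim, batch_first=True)
        self.decoder = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, tgt_vocab)

    def forward(self, src_ids, tgt_ids):
        # Encode the whole source sentence into one sentence-level state...
        _, sentence_state = self.encoder(self.src_emb(src_ids))
        # ...then decode conditioned on that state, rather than mapping
        # each word or phrase across in isolation.
        dec_states, _ = self.decoder(self.tgt_emb(tgt_ids), sentence_state)
        return self.out(dec_states)  # logits over the target vocabulary

model = TinySeq2Seq(src_vocab=1000, tgt_vocab=1000)
src = torch.randint(0, 1000, (1, 7))  # a 7-token source sentence
tgt = torch.randint(0, 1000, (1, 5))  # a 5-token target prefix
print(model(src, tgt).shape)          # torch.Size([1, 5, 1000])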

GNMT’s creators were curious about something. If you teach the translation system to translate English to Korean and vice versa, and also English to Japanese and vice versa… could it translate Korean to Japanese, without resorting to English as a bridge between them? They made this helpful gif to illustrate the idea of what they call “zero-shot translation” (it’s the orange one):

[Animation: translate1.gif, illustrating zero-shot translation (the orange path)]

As it turns out, the answer is yes! It produces “reasonable” translations between two languages that it has never explicitly linked in any way. Remember, no English allowed.
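The mechanism that makes this possible is simple to describe: Google trains one shared model on all the language pairs at once, prepending an artificial token to each source sentence that names the desired target language. Below is a hedged sketch of that data format; the “<2xx>” token spelling and the toy sentences are illustrative, not Google’s actual training data.

# One shared model, many directions: a token on the source side
# tells the model which TARGET language to produce.
training_pairs = [
    ("<2ko> How are you?", "잘 지내세요?"),    # English -> Korean
    ("<2en> 잘 지내세요?", "How are you?"),    # Korean -> English
    ("<2ja> How are you?", "お元気ですか?"),   # English -> Japanese
    ("<2en> お元気ですか?", "How are you?"),   # Japanese -> English
]

# Zero-shot is then just a direction that never appeared in training:
zero_shot_input = "<2ja> 잘 지내세요?"  # Korean -> Japanese, no English bridge
print(zero_shot_input)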

But this raised a second question. If the computer is able to make connections between concepts and words that have not been formally linked… does that mean it has formed a concept of shared meaning for those words, a meaning at a deeper level than simply that one word or phrase is the equivalent of another?

This could mean that the computer has developed its own internal language to represent the concepts it uses to translate between other languages.

[Image: transcape.png]

A visualization of the translation system’s memory when translating a single sentence in multiple directions.
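A hedged sketch of how such a probe could be built: encode semantically equivalent sentences from different languages, then project the resulting vectors to 2-D and see whether they cluster by meaning rather than by language. The encode() function below is a hypothetical stand-in for the trained model’s encoder, not a real API.

import numpy as np
from sklearn.manifold import TSNE

def encode(sentence: str) -> np.ndarray:
    """Hypothetical stand-in: would return the model's sentence-level state."""
    rng = np.random.default_rng(abs(hash(sentence)) % (2**32))
    return rng.normal(size=64)  # placeholder vector, for illustration only

# The same sentence in English, Korean, and Japanese.
sentences = ["The cat sleeps.", "고양이가 잔다.", "猫が寝ている。"]
vectors = np.stack([encode(s) for s in sentences])

# Project to 2-D; equivalent sentences landing near each other would
# suggest a shared, language-independent representation.
points = TSNE(n_components=2, perplexity=2.0, init="random").fit_transform(vectors)
print(points)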

In some cases, Google says its GNMT system is even approaching human-level translation accuracy. That near-parity is restricted to translations between related languages, like English to Spanish and English to French. However, Google is eager to gather more data for “notoriously difficult” use cases, all of which will help its system learn and improve over time thanks to machine learning techniques. So starting today, Google is using its GNMT system for 100 percent of Chinese-to-English machine translations in the Google Translate mobile and web apps, accounting for around 18 million translations per day.

Google admits that its approach still has a ways to go. “GNMT can still make significant errors that a human translator would never make, like dropping words and mistranslating proper names or rare terms,” Le and Schuster explain, “and translating sentences in isolation rather than considering the context of the paragraph or page. There is still a lot of work we can do to serve our users better.” Over time, though, the system should become both more accurate and more efficient.


Sources: (TechCrunch, The Verge)

The First Chip Without Semiconductors

Researchers from the Applied Electromagnetics Group at the University of California San Diego have succeeded in building a microelectronic device without semiconductors, apparently for the first time in a published paper.

[Image: electron-semi-conductor-2016-11-08-03.jpg]

The team published its findings in Nature Communications this week, explaining how it engineered an optically controlled device that bypasses semiconductors entirely by employing a nanoscale metasurface, one that avoids the limits semiconductors place on electron flow, or, perhaps more familiarly, conductivity. Electrons have a pesky habit of running into atoms on their way to a given point in a semiconductor, but the team worked out a “vacuum tube 2.0” that avoids the limitations on power handling and speed that semiconductors bring.

In layman’s terms: semiconductors based on silicon and other materials are great, obviously, having helped us squeeze billions of transistors into a few square inches. But they have some issues: electron velocity is limited by the resistance of semiconductor materials, and a boost of energy is required just to get electrons flowing across the “band gap” caused by the insulating properties of semiconductors like silicon. Vacuum tubes don’t have those problems, since they dislodge free electrons to carry (or not) a current through space. Getting free electrons at nanoscale sizes is problematic, however: you need high voltage (over 100 volts), high temperatures, or a powerful laser to knock them loose. The UC San Diego team solved that problem by building gold “mushroom” nanostructures with adjacent parallel gold strips (above). By combining a relatively low voltage (10 volts) with a low-powered laser, they were able to dislodge electrons from the gold metal.
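A quick back-of-the-envelope estimate shows why only 10 volts can be enough at this scale; the ~20 nm electrode gap below is an illustrative assumption, not a figure from the paper.

# Field strength across a nanoscale gap (illustrative numbers):
voltage = 10.0               # volts, as reported by the team
gap = 20e-9                  # assumed ~20 nm gap between nanostructures
field = voltage / gap        # simple parallel-plate estimate
print(f"{field:.1e} V/m")    # 5.0e+08 V/m
# Fields of this order, concentrated further at the gold "mushroom"
# structures and nudged by the low-power laser, can free electrons
# that neither the modest voltage nor the weak laser could alone.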

Moore’s Law holds that computing speeds double every two years, with a consequent drop in cost, and the team recognizes that it is on to something practical beyond its existing accomplishment.


Source: UC San Diego