User-guided learning opens up opportunity for shenanigans.

Google Translate—the Web and mobile tool that performs machine-learning-based translation of over 100 languages—has a small problem: to some degree, it depends on the kindness of strangers, both directly and indirectly. And that dependence can be gamed for amusing (or enraging) results, as we discovered today while working on a story about North Korea's recent ballistic missile launches.

When using Google Translate's live feature—which performs machine-learning-driven translation of text viewed through a mobile device's camera—to translate an article in the North Korean periodical Tongil Sinbo, we discovered that the feature translated the Korean characters for "Supreme Leader" as "Mr. Squidward," as shown in the image above.

"Supreme Leader" is the title used for North Korea's leader, Kim Jong Un. "Mr. Squidward" is the formal way to address a character from the cartoon SpongeBob SquarePants.

When translating from a static image, Google Translate properly reads the Korean characters in question.
This sort of comedic alteration of a translation is possible because the machine-learning engine behind the system learns both from a corpus of web sources and from "better" translations suggested by users, and those user-suggested "improvements" can be given greater weight in languages that lack a large corpus of translations to work from. It's essentially the Translate version of the old practice of "Google bombing": changing Google search results for specific names or phrases by using them to link to particular webpages. If a phrase becomes associated with a specific entity, Translate will assume that is what the phrase means. And that may not even be the result of intentional manipulation.
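To make the mechanism concrete, here is a minimal sketch (not Google's actual system; every name and weight is an illustrative assumption) of how a phrase table that scores candidates by corpus frequency plus more heavily weighted user suggestions can be flipped for a rare phrase:

```python
from collections import defaultdict

# Assumed for illustration: each user suggestion counts as
# five corpus sightings when scoring candidate translations.
USER_WEIGHT = 5.0

def best_translation(corpus_counts, user_suggestions):
    """Return the highest-scoring candidate translation.

    corpus_counts: {candidate: times seen in the web corpus}
    user_suggestions: {candidate: times suggested by users}
    """
    scores = defaultdict(float)
    for candidate, count in corpus_counts.items():
        scores[candidate] += count
    for candidate, count in user_suggestions.items():
        scores[candidate] += USER_WEIGHT * count
    return max(scores, key=scores.get)

# A rare phrase with only 3 legitimate corpus sightings:
# a single mischievous suggestion (effective weight 5) wins.
print(best_translation({"Supreme Leader": 3}, {"Mr. Squidward": 1}))
# -> Mr. Squidward

# With a large corpus behind the phrase, the prank is drowned out.
print(best_translation({"Supreme Leader": 100}, {"Mr. Squidward": 1}))
# -> Supreme Leader
```

The point of the toy model is that the vulnerability scales inversely with corpus size: the fewer legitimate examples a language pair has, the fewer gamed suggestions it takes to change the winning translation.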

Most artificial intelligence systems rely on "supervised learning" in some form. As we depend more and more on human-guided machine learning to help us with tasks (such as translation, customer service, predictive analytics, and driving us around), the opportunity for malicious manipulation of machine-learning algorithms will continue to increase, especially for edge cases.

Ars reached out to Google for comment on the mistranslation but did not receive a response.