MATERIAL program seeks help for analysts searching documents in “low resource” languages.

The Intelligence Advanced Research Projects Agency (IARPA), the US Intelligence Community’s own science and technology research arm, has announced it is seeking contenders for a program to develop what amounts to the ultimate Google Translate. IARPA’s Machine Translation for English Retrieval of Information in Any Language (MATERIAL) program intends to provide researchers and analysts with a tool to search for documents in their field of concern in any of the more than 7,000 languages spoken worldwide.

The specific goal, according to IARPA’s announcement, is an “‘English-in, English-out’ information retrieval system that, given a domain-sensitive English query, will retrieve relevant data from a large multilingual repository and display the retrieved information in English as query-biased summaries.” Users would be able to search vast numbers of documents with a two-part query: the first part giving the “domain” of the search, meaning the sort of information being sought (for example, “Government,” “Science,” or “Health”), and the second part an English word or phrase describing the specific information wanted (the examples given in the announcement were “zika virus” and “Asperger's syndrome”).
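To make the two-part query concrete, here is a minimal Python sketch of what an “English-in, English-out” interface along these lines might look like. The names (MaterialQuery, english_gloss, and so on) and the literal string matching are illustrative assumptions, not anything from the IARPA announcement; a real system would match the English query against documents in their original languages via machine translation rather than against a pre-supplied English gloss.

```python
from dataclasses import dataclass
from typing import List, Dict


@dataclass
class MaterialQuery:
    """Hypothetical two-part query: a domain label plus an English search phrase."""
    domain: str   # e.g. "Health", "Government", "Science"
    phrase: str   # e.g. "zika virus"


@dataclass
class QueryBiasedSummary:
    """An English, query-biased summary of a relevant foreign-language document."""
    source_language: str
    summary_en: str


def retrieve(query: MaterialQuery, repository: List[Dict]) -> List[QueryBiasedSummary]:
    """Toy stand-in for cross-language retrieval.

    Documents here carry an English gloss purely for illustration; the actual
    program calls for retrieval over untranslated, multilingual documents.
    """
    hits = []
    for doc in repository:
        in_domain = doc["domain"] == query.domain
        matches = query.phrase.lower() in doc["english_gloss"].lower()
        if in_domain and matches:
            hits.append(QueryBiasedSummary(
                source_language=doc["language"],
                summary_en=doc["english_gloss"][:200],
            ))
    return hits


# Example: the two-part query from the announcement, run against a one-document repository.
repository = [{
    "language": "sw",  # hypothetical Swahili source document
    "domain": "Health",
    "english_gloss": "Report on suspected zika virus cases in coastal districts.",
}]
for hit in retrieve(MaterialQuery(domain="Health", phrase="zika virus"), repository):
    print(hit.source_language, "->", hit.summary_en)
```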

So-called “low resource” languages have been an area of concern for the intelligence and defense communities for years. In 2014, the Defense Advanced Research Projects Agency (DARPA) launched its Low Resource Languages for Emergent Incidents (LORELEI) project, an attempt to build a system that lets the military quickly collect critical data—such as “topics, names, events, sentiment, and relationships”—from sources in any language on short notice. The system would be used in situations like natural disasters or military interventions in remote locations where the military has little or no local language expertise.

The problem with most current translation tools is that they require significant training against the target language, a process that can take a long time to refine and depends heavily on the expertise of the trainers. There is also often huge variation between formal and informal usage within a language, and the same words can carry different meanings in different fields of writing. Getting reliable translation of text across all of these variables could take years of language-specific training and development.

Doing so for every language in a single system, even just to get a concise summary of what a document is about, as MATERIAL seeks to do, would be a tall order. Which is why one of the goals of MATERIAL, according to the IARPA announcement, “is to drastically decrease the time and data needed to field systems capable of fulfilling an English-in, English-out task.”

Those taking on the MATERIAL program will be given access to a limited set of machine translation and automatic speech recognition training data from multiple languages “to enable performers to learn how to quickly adapt their methods to a wide variety of materials in various genres and domains,” the announcement explained. “As the program progresses, performers will apply and adapt these methods in increasingly shortened time frames to new languages... Since language-independent approaches with quick ramp up time are sought, foreign language expertise in the languages of the program is not expected.”

The good news for the broader linguistics and technology world is that IARPA expects the teams competing on MATERIAL to publish their research publicly. If successful, this moonshot for translation could radically change how accessible materials in many languages are to the rest of the world.