US Army Lab Develops a Way to Read Soldiers' Brains

Military intelligence analysts spend a lot of time scrutinizing countless images from various sources, such as drones and surveillance systems. An automated program developed by cognitive neuroscientist Dr. Anthony Ries, however, can make the process a lot faster. Ries works for a US Army research facility called "The MIND (Mission Impact Through Neurotechnology Design) Lab," which has just begun testing a program that can interpret brain waves. In simpler terms: it can read human minds. During a recent test, he hooked a soldier up to an EEG connected to one of the lab's desktop computers and asked him to look at a series of images flashing on screen at a rate of one per second. Each image falls under one of five categories -- boats, pandas, strawberries, butterflies and chandeliers.
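The article doesn't describe the lab's signal-processing pipeline, but the bookkeeping behind this kind of rapid serial presentation can be sketched: slice the continuous EEG into short windows time-locked to each image onset, so every picture gets its own brain-wave snippet. Below is a minimal sketch in plain NumPy; the sampling rate, window length and array shapes are assumptions for illustration, not details from the MIND Lab.

```python
import numpy as np

SAMPLE_RATE_HZ = 256      # assumed EEG sampling rate
EPOCH_SECONDS = 0.8       # assumed analysis window after each image onset

def epoch_eeg(eeg, onset_samples, sample_rate=SAMPLE_RATE_HZ,
              epoch_seconds=EPOCH_SECONDS):
    """Cut continuous EEG (channels x samples) into per-image epochs.

    Returns an array of shape (n_images, n_channels, n_samples_per_epoch),
    one time-locked snippet for each image onset.
    """
    window = int(epoch_seconds * sample_rate)
    epochs = []
    for onset in onset_samples:
        if onset + window <= eeg.shape[1]:
            epochs.append(eeg[:, onset:onset + window])
    return np.stack(epochs)

# Example: two minutes of 8-channel EEG, with images flashed once per second.
eeg = np.random.randn(8, 120 * SAMPLE_RATE_HZ)
onsets = np.arange(120) * SAMPLE_RATE_HZ      # one image onset per second
epochs = epoch_eeg(eeg, onsets)
print(epochs.shape)                           # (120, 8, 204)
```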

By the end of the experiment, the computer revealed that the soldier had chosen to focus on images in the boat category. How did it know? By tracking changes in the subject's brain waves: the soldier produced a distinct brain wave pattern whenever he looked at something he deemed "relevant." In time, analysts could use the system to scan large images cut into smaller sections (called chips) and quickly find items of interest.
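The article doesn't spell out how the "recognition response" is detected, but a common approach with data like this is to score each time-locked epoch against a learned pattern (systems of this kind often look for the well-known P300 response) and flag the images, or image chips, whose scores cross a threshold. The weight vector and threshold below are placeholders for illustration, not the lab's actual classifier.

```python
import numpy as np

def score_epochs(epochs, weights):
    """Project each epoch (channels x samples) onto a learned weight pattern;
    higher scores suggest a target-like 'recognition' response."""
    flat = epochs.reshape(epochs.shape[0], -1)    # (n_images, features)
    return flat @ weights                          # one score per image

def flag_relevant_chips(chip_ids, scores, threshold):
    """Return chips whose scores exceed the threshold, sorted so an analyst
    can review the most likely targets first."""
    hits = [(cid, s) for cid, s in zip(chip_ids, scores) if s > threshold]
    return sorted(hits, key=lambda pair: pair[1], reverse=True)

# Hypothetical usage with epochs shaped like the previous sketch.
n_images, n_channels, n_samples = 120, 8, 204
epochs = np.random.randn(n_images, n_channels, n_samples)
weights = np.random.randn(n_channels * n_samples)  # stand-in for trained weights
scores = score_epochs(epochs, weights)
print(flag_relevant_chips(range(n_images), scores,
                          threshold=scores.mean() + scores.std()))
```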

Ries explains:

Whenever the Soldier or analyst detects something they deem important, it triggers this recognition response. Only those chips that contain a feature that is relevant to the Soldier at the time -- a vehicle, or something out of the ordinary, somebody digging by the side of the road, those sorts of things -- trigger this response of recognizing something important.

For now, the scientist plans to keep improving the system and adding new features, including eye control. In fact, he has already tested that capability during the same session by asking a soldier to play a simple video game on a separate computer. The subject was instructed to shoot a bubble at a cluster of other bubbles and hit one of the same color using only his eye movements, which he did successfully.
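The article doesn't say how the game turns gaze into aim, but the basic idea of pointing with the eyes can be sketched as mapping the current gaze coordinates onto the nearest matching bubble. Everything below (the Bubble type, the matching rule, the coordinates) is a hypothetical illustration, not the lab's code.

```python
from dataclasses import dataclass
import math

@dataclass
class Bubble:
    x: float
    y: float
    color: str

def gaze_target(gaze_x, gaze_y, bubbles, projectile_color):
    """Aim at the same-color bubble nearest to where the player is looking."""
    candidates = [b for b in bubbles if b.color == projectile_color]
    return min(candidates, key=lambda b: math.hypot(b.x - gaze_x, b.y - gaze_y))

cluster = [Bubble(100, 50, "red"), Bubble(140, 60, "blue"), Bubble(180, 55, "red")]
print(gaze_target(135, 58, cluster, projectile_color="red"))  # Bubble(x=100, y=50, color='red')
```

Ries describes how eye tracking ties into the brain-wave analysis itself: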

One thing we have done is instead of having people view images at the center of the screen, we're leveraging eye-tracking to know whenever they fixate on a particular region of space. We can extract the neural signal, time-locked to that fixation, and look for a similar target response signal. Then you don't have to constrain the image to the center of the screen. Instead, you can present an image and the analyst can manually scan through it and whenever they fixate on an item of interest, that particular region can be flagged.
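That description maps onto a small amount of bookkeeping: record each fixation's time and screen position from the eye tracker, cut the EEG around that moment just as with the flashed images, score the epoch, and flag the image region under the fixation whenever the score looks like a target response. The sketch below reuses the scoring idea from earlier; the event format, sampling rate and flagging rule are assumptions for illustration.

```python
import numpy as np

def fixation_locked_regions(eeg, fixations, weights, threshold,
                            sample_rate=256, epoch_seconds=0.8):
    """Flag image regions the analyst fixated while producing a target-like
    neural response.

    fixations: list of (sample_index, x_pixel, y_pixel) tuples from an eye tracker.
    Returns a list of (x, y) screen coordinates worth reviewing.
    """
    window = int(epoch_seconds * sample_rate)
    flagged = []
    for onset, x, y in fixations:
        if onset + window > eeg.shape[1]:
            continue
        epoch = eeg[:, onset:onset + window]
        score = epoch.reshape(-1) @ weights          # same scoring idea as before
        if score > threshold:
            flagged.append((x, y))
    return flagged

# Hypothetical usage: free viewing of a large image while EEG and gaze are recorded.
eeg = np.random.randn(8, 60 * 256)
weights = np.random.randn(8 * int(0.8 * 256))
fixations = [(256 * t, np.random.randint(0, 1920), np.random.randint(0, 1080))
             for t in range(1, 55)]
print(fixation_locked_regions(eeg, fixations, weights, threshold=5.0))
```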