When building things, there are lots of different paths to the same end.

Research has traditionally been broken down along simple lines. There's basic, fundamental research into how the world works, and there's applied research that attempts to take these insights and make something useful out of them. The two have very different end goals and require very different approaches to the research process.

But there's a large gray area in between, where the approach is more applied but the end goal may be little more than "make something cool": things like tiny flying robots or 3D computer displays that rely on beads levitated by lasers. How do researchers find direction for these open-ended engineering challenges?

The answer, it turns out, is "it depends." At least, that's the answer we got when we had the chance to visit some of Columbia University's robotics labs.

Sailing west

Hod Lipson's lab is inspired by a seemingly simple question he posed at one point in our interview: "Can you 3D print a robot that can walk out of the printer, batteries included?" To get there, however, his approach is to head into the unknown—"just sailing west," as he put it. As a result, our time in the lab included discussions of projects like using drones to identify crops that are being damaged by infestations and the use of 3D printers to create elaborate food.

The 3D-printed food came about because people in the lab used food like cream cheese as a substitute for printing materials with similar viscosities. The substitution let them test whether certain designs were workable while using a cheap, disposable material. But the team eventually stopped disposing of it, added a greater variety of printing ingredients, and started experimenting with "cooking" their creations in a precise, 3D form using lasers.

But while Lipson is happy for his lab to go on diversions, he does care about robotics and, specifically, where it comes up short. Highlighting a series of outtakes from a DARPA robotics challenge, he said that "AI has not paid its dues yet in the real world" and told Ars that plenty of software "could walk in a simulator but not in reality." One of his solutions is to give the software better artificial muscles to work with.

One solution his lab has come up with resembles the printed food in that it relies on cheap, common materials. The muscle is made by mixing liquid silicone with alcohol. When the silicone sets, it traps lots of alcohol bubbles within it. When current is run through an embedded heating wire, the alcohol vaporizes, causing the silicone to expand dramatically. Lipson lab member Aslan Miriyev showed how it was possible to create one of these artificial muscles in about a half-hour.

While this method naturally generates an expansive force, it's also possible to convert it to contraction. Miriyev embedded the "muscle" in a strong braided mesh of the sort normally used to sheathe computer cables. As the muscle expanded, it pushed the mesh outward, forcing the mesh's two ends to draw toward each other. While the motion is gradual, Lipson said it eventually ended up driving a contraction equal to six times what an equivalent muscle could generate.
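To get a feel for why a braided sleeve behaves this way, consider a toy model (our own back-of-the-envelope sketch, not the lab's analysis): each fiber in the braid is a helix of fixed length, so any growth in diameter has to be paid for in length. All the numbers below are made up for illustration.

```python
# Toy helix model of a braided cable sleeve (our assumption, not the lab's
# analysis). A braid fiber of fixed length s wraps n turns around the sleeve,
# so the Pythagorean relation s^2 = L^2 + (n*pi*d)^2 links the sleeve's
# length L to its diameter d: as d grows, L must shrink.
import math

s, n = 0.30, 3  # fiber length (m) and number of turns; hypothetical values
for d in (0.010, 0.015, 0.020):  # sleeve diameter in meters
    L = math.sqrt(s**2 - (n * math.pi * d) ** 2)
    print(f"diameter {d * 1000:.0f} mm -> sleeve length {L * 100:.1f} cm")
```

Running this, doubling the diameter from 10mm to 20mm shortens the sleeve from roughly 28cm to 23cm, which is the sideways-expansion-to-contraction conversion described above.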

Focused on feeling

If Lipson only had a general sense of where he'd like the research to head, his colleague Matei Ciocarlie had a clear vision: Luke's robotic hand at the end of The Empire Strikes Back. Ciocarlie talked about how it conveyed the sensation of being poked, allowing Luke to react. But our fingers' sense of touch does more than that. Fingers relay information on how much pressure we're exerting and whether we've got a good grip on a surface. Current robotic hands don't have that level of sensation, but Ciocarlie wants to change that.

His idea involves embedding small light sources and optical sensors along the edges of a polymer. In its resting state, these create a stable pattern of light within the polymer that can be read out by the sensors. But as the polymer distorts, whether by being poked or stretched, the pattern of light reaching each sensor changes, with the details depending on where the deformation is taking place. The result is a complicated pattern of shifts in the light arriving at the sensors, each of which may be picking up various combinations of sources at any one time.

Ciocarlie doesn't even try to calculate how to work backward from these changes to determine where on the polymer force is being exerted. Instead, his team trains neural networks using a robotic poker, which lets the network correlate the optical changes with specific, known physical contacts. Once trained, the network can take in the optical data and figure out what's happening with the polymer.
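The broad strokes of that calibrate-by-poking approach are easy to sketch. In the toy Python below, the sensor layout and optical response are invented for illustration, and a generic off-the-shelf regressor stands in for the lab's network; only the overall idea (press at known spots, record the light readings, train a model to invert the mapping) comes from Ciocarlie's description.

```python
# Toy sketch of the calibration idea: a "robotic poker" presses the polymer
# at known locations, the optical readings are recorded, and a network learns
# to invert the mapping. The sensor model here is a made-up stand-in for the
# real optics.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Hypothetical layout: 8 photodiodes around the edge of a unit-square polymer.
sensor_xy = np.array([[0, .25], [0, .75], [1, .25], [1, .75],
                      [.25, 0], [.75, 0], [.25, 1], [.75, 1]])

def read_sensors(poke_xy):
    """Fake optical response: a poke perturbs light more at nearby sensors."""
    d = np.linalg.norm(sensor_xy - poke_xy, axis=1)
    return np.exp(-d / 0.3) + rng.normal(0, 0.01, size=len(sensor_xy))

# "Robotic poker" phase: press at known spots, record the optical readings.
pokes = rng.uniform(0, 1, size=(5000, 2))
readings = np.array([read_sensors(p) for p in pokes])

# Train a small network to work backward from readings to poke location.
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=1000)
net.fit(readings, pokes)

# Once trained, the net localizes a new, unseen touch from the optics alone.
test = np.array([0.3, 0.6])
print("true:", test, "predicted:", net.predict(read_sensors(test)[None, :]))
```

The appeal of this design is that all the hard-to-model physics stays on the data side: no one has to derive the optics analytically, because the poker supplies labeled examples.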

While the researchers aren't ready to put the polymer on the surface of an artificial hand just yet, they've got other grip-related projects going at the same time. Ciocarlie showed off a gripper that could track how much force it's exerting, as well as a single piece of hardware that could adopt several different grips despite having only a single actuator. He's even working on a bit of hardware that stroke victims can strap on to help them re-learn how to control their hands.

But a single theme runs through the lab's work, and there are some clear destinations it's aiming for. It's easy to see how some of these projects could be combined into a single piece of hardware.

Overall, it was hard not to be impressed by the work being done in both these labs, even if their approaches were dramatically different. The gray area between basic and applied research allows the people who occupy it a great deal of freedom.