AND JUST LIKE that, humanity draws one step closer to the singularity, the moment when the machines grow so advanced that humans become obsolete: A robot has learned to autonomously assemble an Ikea chair without throwing anything or cursing the family dog.

Researchers report today in Science Robotics that they’ve used entirely off-the-shelf parts—two industrial robot arms with force sensors and a 3-D camera—to piece together one of those Stefan Ikea chairs we all had in college before it collapsed after two months of use. From planning to execution, it only took 20 minutes, compared to the human average of a lifetime of misery. It may all seem trivial, but this is in fact a big deal for robots, which struggle mightily to manipulate objects in a world built for human hands.

To start, the researchers give the pair of robot arms some basic instructions—like those cartoony illustrations, but in code: this piece goes into that piece first, then the next, and so on. Then they place the pieces in a random pattern in front of the robots, which eyeball the wood with the 3-D camera. So the researchers give the robots a list of tasks, and the robots take it from there.
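The paper's actual planning code isn't shown here, but you can picture those instructions as an ordered list of attach operations. Below is a minimal Python sketch of what such a task list might look like; the part names, fields, and ordering are illustrative assumptions, not the researchers' real format.

```python
# A hypothetical assembly "recipe": an ordered list of attach operations.
# Part names and fields are invented for illustration.
from dataclasses import dataclass

@dataclass
class AssemblyStep:
    part: str        # the piece to pick up
    target: str      # the piece it attaches to
    connector: str   # how the two join

PLAN = [
    AssemblyStep(part="peg", target="side_frame_left", connector="insert"),
    AssemblyStep(part="back_rail", target="side_frame_left", connector="insert"),
    AssemblyStep(part="side_frame_right", target="back_rail", connector="insert"),
]

for step in PLAN:
    print(f"attach {step.part} to {step.target} via {step.connector}")
```

The point is that only the sequencing is handed over; everything after that, from locating the pieces to planning the motions, the robots work out on their own.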

“What the robot does is to first figure out where exactly is the original position of the frame,” says engineer Quang-Cuong Pham of Nanyang Technological University in Singapore, “and then calculates the motion of the two arms automatically to go and grasp it and transport it.”
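To make that first step concrete: figuring out where a part sits boils down to fitting a rigid transform (a rotation plus a translation) between the part's known shape and the points the 3-D camera sees. Here is a toy Python sketch using the classic Kabsch algorithm on ideal, pre-matched points, which is an assumption; the real system has to cope with noisy, partial scans.

```python
# Toy localization: recover a part's pose by fitting a rigid transform
# between model points and camera points (Kabsch algorithm). Assumes
# ideal, pre-matched point pairs, unlike a real noisy scan.
import numpy as np

def fit_rigid_transform(model_pts, scene_pts):
    """Return rotation R and translation t with scene ~= R @ model + t."""
    mc, sc = model_pts.mean(axis=0), scene_pts.mean(axis=0)
    H = (model_pts - mc).T @ (scene_pts - sc)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, sc - R @ mc

# Example: a part rotated 30 degrees and shifted across the table.
theta = np.radians(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0,              0,             1]])
model = np.random.rand(100, 3)                        # points on the part's model
scene = model @ R_true.T + np.array([0.4, 0.1, 0.0])  # what the camera "sees"
R, t = fit_rigid_transform(model, scene)
print(np.allclose(R, R_true), np.allclose(t, [0.4, 0.1, 0.0]))  # True True
```

With the pose in hand, the system can compute the grasp-and-transport motions Pham describes.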

As one arm grasps, say, the back of the chair, the other arm picks up one of those infernal wooden pegs and tries inserting it into a hole at the joint. The 3-D camera is only accurate to within a few millimeters, so the robot has to feel around. It makes swirling motions around the hole, and when it feels the force pattern change, it knows the peg has dropped in slightly, and then applies more force to fully seat the thing.
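In code, that swirl-and-feel strategy might look something like the loop below. This is a sketch under assumptions: the SimRobot class is a stand-in simulation invented for illustration, and the spiral step sizes and force thresholds are made up, not the paper's actual controller values.

```python
# Spiral ("swirling") peg-in-hole search driven by force sensing.
# SimRobot is a fake world for illustration: the vertical force drops
# when the peg sits over the hole.
import math

class SimRobot:
    def __init__(self, hole_x, hole_y, hole_radius=0.0015):
        self.hole = (hole_x, hole_y)
        self.hole_radius = hole_radius
        self.pos = (0.0, 0.0)

    def move_to(self, x, y):
        self.pos = (x, y)

    def force_z(self):
        dx, dy = self.pos[0] - self.hole[0], self.pos[1] - self.hole[1]
        over_hole = math.hypot(dx, dy) < self.hole_radius
        return -3.0 if over_hole else 0.0   # newtons; negative = peg dropped in

def spiral_insert(robot, cx, cy, max_radius=0.004, drop_threshold=-2.0):
    angle = 0.0
    while True:
        radius = 0.0005 * angle             # Archimedean spiral: r grows with angle
        if radius > max_radius:
            return False                    # swirled past the search area: give up
        robot.move_to(cx + radius * math.cos(angle),
                      cy + radius * math.sin(angle))
        if robot.force_z() < drop_threshold:
            return True                     # felt the drop: now press the peg home
        angle += 0.1

# The camera's estimate is off by a couple of millimeters; the spiral
# still finds the hole by feel.
robot = SimRobot(hole_x=0.002, hole_y=0.001)
print(spiral_insert(robot, cx=0.0, cy=0.0))  # True
```

The loop's weak spot is exactly the failure Pham describes next: start the spiral in the wrong place and the robot can misread what it feels.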

This, though, is where the robot tends to have problems. If it hasn’t scanned the hole accurately enough, it might start swirling too far away—all the way over the edge of the piece. “Then the changes in force pattern are the same, so it would think that it has found the hole and it would go and insert in the void,” says Pham.

Matters grow more complicated when the robot arms have to grip either end of a larger piece of the chair. Not only does each robot arm have to calculate its own grasping and lifting motion, but it has to do so in consideration of the other arm. Imagine grasping the two ends of a baseball bat, one in each hand, and swirling it around—each arm is restricted by the movements of the other.
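That restriction is what roboticists call a closed kinematic chain: once both grippers hold the same rigid piece, one hand's pose fully determines the other's. A toy Python sketch of the constraint, with made-up grasp offsets rather than anything from the paper:

```python
# Closed-chain constraint: world -> left_hand -> piece -> right_hand.
# All transforms are 4x4 homogeneous matrices; the grasp offsets are invented.
import numpy as np

def transform(rot_z, translation):
    """Homogeneous transform: a rotation about z plus a translation."""
    c, s = np.cos(rot_z), np.sin(rot_z)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0], [s, c, 0], [0, 0, 1]]
    T[:3, 3] = translation
    return T

# Where each hand sits on the piece, expressed in the piece's own frame.
LEFT_GRASP  = transform(0.0, [-0.25, 0.0, 0.0])   # left end of the rail
RIGHT_GRASP = transform(0.0, [ 0.25, 0.0, 0.0])   # right end of the rail

def right_hand_target(left_hand_pose):
    """The right hand can't move freely: it must track the piece
    the left hand is holding."""
    piece_pose = left_hand_pose @ np.linalg.inv(LEFT_GRASP)
    return piece_pose @ RIGHT_GRASP

# If the left arm swirls the piece by 10 degrees, the right arm must follow.
left_pose = transform(np.radians(10), [0.3, 0.5, 0.2])
print(right_hand_target(left_pose).round(3))
```

A motion planner has to satisfy this constraint at every instant of the trajectory, which is what makes dual-arm transport so much harder than single-arm pick-and-place.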

The stakes are even higher for the robot because it makes its calculations as it eyeballs the pieces, and then has to commit to the plan it works out. “If there is a small error, for example in the modeling of the object, then the arms would fight each other, one pulling in this direction and the other pulling in another direction,” says Pham. “If that happens the robot will break the object.”

The solution is the force sensors. “When we sense that the force is too much, then it would change the motion of the robot to accommodate the errors,” Pham adds.
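One simple way to realize that accommodation is an admittance-style rule: when the sensed force climbs past a safe limit, shift the commanded position along the force direction instead of fighting it. The sketch below assumes such a rule; the gain and limit values are illustrative, not the researchers' numbers.

```python
# Admittance-style compliance: yield to excessive force rather than
# letting the two arms tear the chair apart. Gains/limits are invented.
import numpy as np

FORCE_LIMIT = 10.0        # newtons: beyond this, the arms are "fighting"
COMPLIANCE_GAIN = 0.0005  # meters of give per newton of excess force

def comply(commanded_pos, sensed_force):
    """Return an adjusted position target that yields to excessive force."""
    magnitude = np.linalg.norm(sensed_force)
    if magnitude <= FORCE_LIMIT:
        return commanded_pos               # forces are fine: follow the plan
    direction = sensed_force / magnitude   # give way along the pull
    return commanded_pos + COMPLIANCE_GAIN * (magnitude - FORCE_LIMIT) * direction

# The other arm is pulling 25 N in +x: back off a little instead of breaking it.
print(comply(np.array([0.3, 0.5, 0.2]), np.array([25.0, 0.0, 0.0])))
```

In effect, the position plan stays fixed, but the force readings get a veto over how hard the arms push to follow it.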

Pretty impressive stuff, but the fact remains that the researchers have to do a good amount of hand-holding. “This is a nice result,” says UC Berkeley’s Ken Goldberg, who works in robotic manipulation. “The big challenge is to replace such carefully engineered special purpose programming with new approaches that could learn from demonstrations and/or self-learn to perform tasks like this.”