Imagine robots that could swing a hammer or saw a piece of wood. Okay, so there are machines that do that now. But can they do it, say, on a roof, or standing on a ladder? That Jetsons-style future may not be too far off.
Google, the company behind the driverless car, is developing robots that can grasp a pen or other random objects using what amounts to hand-eye coordination. TechCrunch's Frederic Lardinois has all the details, including two cool videos.
Google is now using these robots to train a deep convolutional neural network (a technique that's all the rage in machine learning right now) that predicts the outcome of a grasp from the camera input and the motor command. It's basically hand-eye coordination for robots.
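To make that concrete, here is a deliberately tiny sketch of the idea, not Google's actual network: a small "convolutional" feature extractor over the camera image, a logistic unit that scores an image-plus-motor-command pair as a grasp-success probability, and a servoing step that picks whichever candidate command scores highest. All the function names, sizes, and parameters below are made up for illustration; the real system is a large CNN trained on hundreds of thousands of grasp attempts.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_features(image, kernels):
    """Toy convolutional feature extractor: valid 3x3 convolutions
    followed by global average pooling, yielding one number per kernel."""
    H, W = image.shape
    feats = []
    for k in kernels:
        acc = 0.0
        for i in range(H - 2):
            for j in range(W - 2):
                acc += np.sum(image[i:i + 3, j:j + 3] * k)
        feats.append(acc / ((H - 2) * (W - 2)))
    return np.array(feats)

def grasp_success_prob(image, command, params):
    """Predict P(grasp succeeds) for a candidate motor command, as a
    single logistic unit over [conv features; command]."""
    kernels, w, b = params
    x = np.concatenate([conv_features(image, kernels), command])
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def servo(image, candidate_commands, params):
    """Continuous servoing, crudely: re-score candidate motions against
    the current camera frame and keep the most promising one."""
    probs = [grasp_success_prob(image, c, params) for c in candidate_commands]
    best = int(np.argmax(probs))
    return candidate_commands[best], probs[best]

# Toy usage with random, untrained parameters.
kernels = [rng.standard_normal((3, 3)) for _ in range(4)]
w = rng.standard_normal(4 + 3)  # 4 conv features + 3-dim motor command
params = (kernels, w, 0.0)

image = rng.standard_normal((16, 16))                   # stand-in camera frame
commands = [rng.standard_normal(3) for _ in range(5)]   # candidate gripper motions
best_cmd, p = servo(image, commands, params)
print(p)
```

The key design point this sketch preserves is that the motor command is an *input* to the network, not its output: the robot proposes motions, the network scores them against what the camera currently sees, and the loop repeats so the gripper can correct itself mid-reach.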
The team says that it took about 3,000 hours of practice (and 800,000 grasp attempts) before it saw “the beginnings of intelligent reactive behaviors.”
“The robot observes its own gripper and corrects its motions in real time. It also exhibits interesting pre-grasp behaviors, like isolating a single object from a group,” the team writes. “All of these behaviors emerged naturally from learning, rather than being programmed into the system.”
With the right hand-eye coordination, could a robot help construct a house?