How are researchers teaching their robots?
Answer: with YouTube
Researchers at the University of Maryland and NICTA in Australia are refining the knowledge of artificially intelligent robots by showing them still images and YouTube videos that demonstrate how to accomplish manual tasks, like slicing a tomato or hitting a nail with a hammer.
One robot, Robobrain, has processed more than 1 billion images, 100 million appliance manuals, and 120,000 YouTube videos. The training teaches the robots where and how to grasp objects, a challenge because videos, images, and documents are only two-dimensional. A seven-page paper (PDF) published by the researchers explains one method for funneling visual data into the minds of robots.