Human dexterity is an impressive, multi-layered skill: it combines the body's fine motor control with the eyes' fast and remarkably efficient visual processing. Researchers have long struggled to replicate this skill in robots, particularly on the computer-vision side.
Recent breakthroughs have produced robots that can make basic distinctions between objects well enough to pick them up. But these advances are rudimentary at best: because the systems understand so little about what they have grasped, they can do nothing with an object beyond locating it.
No previous input required
New work out of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) may soon change this. The lab has introduced a system that enables robots not only to visually recognize objects, but to do so precisely enough to carry out related tasks, all without any previous input.
The researchers call this machine-vision system Dense Object Nets (DON). DON analyzes an object as a collection of points on a visual roadmap, which lets the system understand all of the object's components even if it has never seen that object before.
This means DON can autonomously perform very specific tasks, such as grabbing an object by a particular corner or part, an ability previous systems lacked. "Many approaches to manipulation can’t identify specific parts of an object across the many orientations that object may encounter," paper co-author and PhD student Lucas Manuelli said in a statement.
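The "collection of points" idea above amounts to giving every pixel of the object a descriptor vector, so that the same physical point can be re-found in a new view by nearest-neighbor search. The following is a minimal NumPy sketch of that lookup step, not the actual DON code; the function name and the toy data are hypothetical.

```python
import numpy as np

def nearest_descriptor_pixel(descriptor_image, target_descriptor):
    """Find the pixel whose descriptor is closest to a target descriptor.

    descriptor_image: (H, W, D) array of per-pixel descriptors, as a dense
    network might produce for a camera frame.
    target_descriptor: (D,) vector saved from a reference view of the object
    (e.g. the corner the robot should grasp).
    """
    diffs = descriptor_image - target_descriptor           # (H, W, D)
    dists = np.linalg.norm(diffs, axis=-1)                 # (H, W)
    row, col = np.unravel_index(np.argmin(dists), dists.shape)
    return int(row), int(col)                              # best-matching pixel

# Toy example: a 4x4 "image" with 3-dimensional descriptors.
rng = np.random.default_rng(0)
desc_img = rng.normal(size=(4, 4, 3))
target = desc_img[2, 1].copy()   # pretend this pixel was tagged in an earlier view
print(nearest_descriptor_pixel(desc_img, target))  # -> (2, 1)
```

Because the descriptor is tied to a physical point rather than to a fixed image location, the same lookup keeps working when the object is moved or rotated, which is what lets the robot grasp a specific part across orientations.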
Because DON has overcome that limitation, the system could eventually become invaluable on the fulfillment lines of retail giants. That is just one potential application, however; as the system matures, its uses could extend well beyond warehouses and factories.
Since DON does not require data to be labeled by humans, the system can supervise its own learning. One task DON could one day excel at, the researchers said, is cleaning a messy house.
What will be left for us to do?
"As narrow AI applications broaden to consume more human tasks we can imagine a future where a humanoid robot will cook dinner, clean the kitchen, do the dishes, and fold the laundry," Chief AI Officer and co-founder of Ziff.AI Ben Taylor told IE regarding this development. "These types of tasks, which felt like science fiction, are moving closer to becoming a reality. The real question I have, is what will we do with the free time?"
Director of Dacian Consulting Andrei Luchici further told IE he believes the system may be the beginning of a revolutionary trend for the industry. "Previous machine vision systems, albeit very powerful, only recognized what objects were present in an image but were not able to act on that information," Luchici explained.
"DON solves that problem which means that we can now start to build increasingly more complex systems of smart agents that can teach themselves how to recognize and interact with different objects. I believe that Tedrake lab’s results are going to start a new wave of computer vision applications from robotic manipulation and process control to new intelligent automation solutions," he concluded.