RoboExNovo - ERC-2014_StG
Duration:
Principal investigator(s):
Project type:
Funding body:
Project identification number:
Abstract
While today’s robots are able to perform sophisticated tasks, they can only act on objects they have been trained to recognize. This is a severe limitation: any robot will inevitably face novel situations in unconstrained settings, and thus will always have knowledge gaps. This calls for robots able to learn continuously about objects by themselves. The learning paradigm of state-of-the-art robots is sensorimotor toil, i.e. the process of acquiring knowledge by generalization over observed stimuli. This is in line with cognitive theories claiming that cognition is embodied and situated, so that all knowledge acquired by a robot is specific to its sensorimotor capabilities and to the situation in which it has been acquired. Still, humans are also capable of learning from externalized sources – like books, illustrations, etc. – containing knowledge that is necessarily unembodied and unsituated. To overcome this gap, RoboExNovo proposes a paradigm shift. I will develop a new generation of robots able to acquire perceptual and semantic knowledge about objects from externalized, unembodied resources, to be used in situated settings. I will consider the Web, the largest existing body of externalized knowledge, as the source from which to learn. To achieve this, I propose to build a translation framework between the representations used by robots in their situated experience and those used on the Web, based on relational structures establishing links between related percepts and between percepts and the semantics they support. My leading expertise in machine learning applied to multimodal data and robot vision puts me in a strong position to realize this project.
Structures
Partners
- Fondazione IIT - Istituto Italiano di Tecnologia - Coordinator
Budget
- PoliTo total cost: € 60,733.47
- PoliTo contribution: € 60,733.47