Visually guided grasping and its effects on visual representations
Related projects
PCI2019-103447: BENCHMARKS FOR UNDERSTANDING GRASPING (111K€, Closed)
PCI2019-103386: PERCEPCION-ACCION-APRENDIZAJE INTERACTIVO PARA MODELAR OBJET... (111K€, Closed)
TED2021-132003B-I00: FUSION DE APRENDIZAJE AUTOMATICO Y SIMULACION PARA DIGITALIZ... (282K€, Closed)
DPI2011-27846: MECANISMOS DE COORDINACION VISUOMOTORA Y APRENDIZAJE EN UN T... (185K€, Closed)
PSI2010-15867: LA RESOLUCION TEMPORAL DEL CONTROL SENSORIO-MOTOR: COMO OPTI... (121K€, Closed)
FLEXBOT: Flexible object manipulation based on statistical learning a... (1M€, Closed)
VisualGrasping project information
Project duration: 28 months
Start date: 2018-03-01
End date: 2020-07-02
Participation deadline: none.
Project description
I ask how vision guides grasping and, conversely, how learning to grasp objects constrains visual processing. Grasping an object feels effortless, yet the computations underlying grasp planning are nontrivial, and an extensive literature documents the many factors that shape visually guided grasping. I aim to integrate this fragmented body of knowledge into a unified framework for understanding how humans visually select grasps. To do so, I will use motion-tracking hardware (already in place at the University of Giessen) to measure and model human grasping patterns for 3D objects. I will draw on Dr. Fleming's unique expertise in physical simulation to simulate human grasps of objects varying in shape and material. Combining behavioral measurements with computer simulations will provide a powerful data- and theory-driven approach to fully map out the space of human grasping behavior.

The complementary goal of this proposal is to understand how grasping constrains visual processing of object shape and material. I will tackle this goal by building a computational model of visual processing for grasp planning. Both Dr. Fleming and I have previous experience with computational modelling of visual function. I will exploit powerful machine learning techniques to infer what kinds of visual representations are necessary for grasp planning. Specifically, I will train deep neural networks (for which the hardware and software are already in place and in use by the Fleming lab) on extensive physics simulations. Dissecting the learned network architecture and comparing the network's performance to human behavior will tell us what information about shape, material, and objects the human visual system encodes to plan motor actions. In short, with this research I aim to determine how processing within the human visual system both guides and is shaped by hand motor action.
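To make the proposed modelling approach concrete, the sketch below illustrates one way a grasp-quality network of this kind could be trained on simulated grasp outcomes and then compared against human grasp choices. It assumes PyTorch; the network architecture, the 4-D grasp parameterisation, and the random tensors standing in for physics-simulator output and human measurements are all hypothetical illustrations, not the project's actual pipeline.

```python
# Minimal sketch (assumptions: PyTorch; depth images of objects as input;
# a physics simulator provides per-grasp stability labels). All names and
# shapes here are hypothetical illustrations of the proposal's approach.
import torch
import torch.nn as nn


class GraspQualityNet(nn.Module):
    """Predict a grasp-stability score from a depth image and a candidate grasp."""

    def __init__(self):
        super().__init__()
        # Visual encoder: its learned features are the "visual representations
        # for grasp planning" that would later be dissected and probed.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Head fuses image features with a 4-D grasp parameterisation
        # (e.g. contact point, approach direction, wrist angle).
        self.head = nn.Sequential(
            nn.Linear(32 + 4, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, depth_img, grasp):
        feats = self.encoder(depth_img)                 # (batch, 32)
        return self.head(torch.cat([feats, grasp], dim=1)).squeeze(1)


# Placeholder batch standing in for physics-simulation output: 64 depth
# images (1x64x64), 4-D grasp parameters, and binary stability labels.
depth = torch.rand(64, 1, 64, 64)
grasps = torch.rand(64, 4)
stable = torch.randint(0, 2, (64,)).float()

model = GraspQualityNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(100):                                 # toy training loop
    opt.zero_grad()
    loss = loss_fn(model(depth, grasps), stable)
    loss.backward()
    opt.step()

# Model-vs-human comparison: score candidate grasps with the trained network
# and compute the Pearson correlation with how often humans chose each grasp
# (placeholder data here; the proposal would use motion-tracking measurements).
with torch.no_grad():
    scores = model(depth, grasps)
human_choice_freq = torch.rand(64)
corr = torch.corrcoef(torch.stack([scores, human_choice_freq]))[0, 1]
print(f"model-human correlation: {corr.item():.2f}")
```

In a setup like this, the encoder's intermediate activations play the role of the visual representations the proposal aims to characterise, for example by probing which shape and material properties can be decoded from them and comparing the network's grasp preferences to measured human behavior.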