An in-memory dataflow accelerator for deep learning
MEMFLUX project information
Project duration: 22 months
Start date: 2021-02-02
End date: 2022-12-31
Project leader
IBM RESEARCH GMBH
No description or corporate purpose has been specified for this company.
TRL
4-5
Project budget
150K€
Participation deadline
No participation deadline.
Project description
Deep neural networks (DNNs), loosely inspired by biological neural networks, consist of parallel processing units called neurons interconnected by plastic synapses. By tuning the weights of these interconnections, such networks can perform certain cognitive tasks remarkably well. DNNs are being deployed all the way from cloud data centers to edge servers and even end devices, a market projected to be worth tens of billions of euros for semiconductor companies alone within the next few years. There is a significant effort towards the design of custom ASICs based on reduced-precision arithmetic and highly optimized dataflows. However, one of the primary sources of inefficiency, namely the need to shuttle millions of synaptic weight values between the memory and the processing units, remains unaddressed. In-memory computing is an emerging paradigm that addresses this processor-memory dichotomy. For example, a computational memory unit with resistive memory (memristive) devices organized in a crossbar configuration can perform matrix-vector multiply operations in place by exploiting Kirchhoff's circuit laws, reducing the computational time complexity of that operation to O(1). The goal of this project is to prototype such an in-memory computing accelerator for ultra-low-latency, ultra-low-power DNN inference.
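The in-place matrix-vector multiply described above can be illustrated with a short numerical sketch. The code below is purely illustrative and not part of the project: the function name crossbar_mvm, the ideal linear weight-to-conductance mapping, and the optional Gaussian read-noise parameter are all assumptions. Stored weights play the role of device conductances, input activations play the role of voltages applied to the crossbar, and the per-line current summation dictated by Kirchhoff's current law corresponds to the dot product computed on each output line.

# Minimal sketch (assumed, not the project's hardware model) of the analog
# matrix-vector multiply performed by a memristive crossbar. Each synaptic
# weight is mapped to a device conductance G[i][j]; input activations are
# applied as voltages V[j]; by Ohm's law each device contributes a current
# G[i][j] * V[j], and Kirchhoff's current law sums these contributions along
# each output line, yielding I = G @ V in a single analog step.

import numpy as np

def crossbar_mvm(weights, activations, noise_std=0.0):
    """Idealized (optionally noisy) crossbar matrix-vector multiply.

    weights     : (rows, cols) array of synaptic weights mapped to conductances
    activations : (cols,) array of input activations applied as voltages
    noise_std   : optional Gaussian read noise mimicking device non-idealities
    """
    conductances = weights  # assumes an ideal linear weight-to-conductance mapping
    currents = conductances @ activations  # current summation per output line
    if noise_std > 0.0:
        currents = currents + np.random.normal(0.0, noise_std, size=currents.shape)
    return currents

# Example: a 4x3 weight matrix applied to a 3-element activation vector.
W = np.array([[0.2, -0.5, 0.1],
              [0.7,  0.3, -0.2],
              [-0.4, 0.6,  0.5],
              [0.1, -0.1,  0.9]])
x = np.array([1.0, 0.5, -1.0])
print(crossbar_mvm(W, x))                   # ideal result, identical to W @ x
print(crossbar_mvm(W, x, noise_std=0.01))   # with simulated analog read noise

Because all rows are read out in parallel in the analog domain, the latency of this operation is independent of the matrix dimensions, which is the O(1) time complexity referred to above.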