Hardware Acceleration with Tunable SRAM/IMC Voltages
ACROBAT project information
Project duration: 49 months
Start date: 2022-07-26
End date: 2026-08-31
Participation deadline
No participation deadline.
Project description
Deep Neural Networks (DNNs) are the fundamental component of most artificial-intelligence applications. As the number of AI-based applications grows, the performance and energy efficiency of the architectures running these algorithms have become crucial, especially on battery-powered platforms. In this work, I propose an energy-optimizing memory design framework built around a specialized SRAM/in-memory-computing (IMC) structure with tunable voltages. The framework also applies datapath optimization techniques such as quantization and pruning with fine-grained assignment. Compared with other hardware-accelerator studies for DNN processing, I will show that this specialized memory design, combined with the architectural datapath optimization techniques, is far more effective at locating Pareto-optimal points in the energy-accuracy trade-off, improving the cost-effectiveness of the final design.