
BiTFormer

Funded
Biologically Plausible Transformers - Integrating Top-Down and Bottom-Up Signals in the Primary Vision System for Computationally Efficient Deep Learning

Deep learning (DL) has recently achieved remarkable success due to the continuous growth in model sizes. However, this growth has led to increased energy consumption. Hardware implementation of digital DL can help reduce energy usage, but the Von Neumann architecture of current DL has hindered its practical realization. In contrast, the brain exhibits energy-efficient multiscale spatiotemporal processing. Biologically plausible (BiP) frameworks have emerged as alternatives to mainstream DL. These methods use bottom-up and top-down signals, incorporating feedforward and feedback mechanisms, and local objectives instead of a global error. Recently, I demonstrated that BiP opto-analog hardware can achieve performance competitive with digital DL for feedforward networks. However, transformers, the backbone of current DL, are challenging to implement because of the input-dependent quadratic complexity of the transformer's attention.

This project leverages the multiscale dynamics of the primary vision system to explore BiP architectures for transformers. The project is hosted at the University of Tübingen under Matthias Bethge and Thomas Euler, who have a long-standing effort in the system identification of the mouse retina via DL. The project has three objectives. First, I will extract top-down information from neural recordings of ganglion cells in the mouse retina, focusing on unique spatiotemporal features that maximally activate specific cell types. Next, I will combine top-down signals with bottom-up models of the retina using recurrent architectures with linear complexity and compare their performance on classification tasks against a vision transformer for the retina. Lastly, I propose a BiP transformer with local weight updates. I will examine the robustness of the models under data distribution shifts and noise injection. A positive outcome of the project will address the energy and cost issues of AI and help me progress my academic career in this interdisciplinary field.
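For readers unfamiliar with the complexity argument in the abstract, the sketch below contrasts standard scaled dot-product self-attention, whose cost grows quadratically with sequence length because of the full token-to-token score matrix, with a minimal linear recurrence that touches only a fixed-size state per step. This is a toy NumPy illustration under assumptions of my own; all function names, shapes, and weight matrices are hypothetical and do not describe the project's actual architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax for the attention weights.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def quadratic_attention(x, Wq, Wk, Wv):
    # Standard scaled dot-product self-attention.
    # The (n, n) score matrix is what makes compute and memory
    # grow quadratically with sequence length n.
    q, k, v = x @ Wq, x @ Wk, x @ Wv            # each (n, d)
    scores = q @ k.T / np.sqrt(k.shape[-1])     # (n, n)  <- quadratic in n
    return softmax(scores, axis=-1) @ v         # (n, d)

def linear_recurrence(x, A, B):
    # A minimal linear-complexity recurrent mixer: one fixed-cost state
    # update per token, so the cost is O(n) in sequence length.
    h = np.zeros(A.shape[0])
    outputs = []
    for x_t in x:                               # n sequential steps
        h = A @ h + B @ x_t                     # fixed-size hidden state
        outputs.append(h)
    return np.stack(outputs)                    # (n, d)

# Toy example: 16 tokens of dimension 8 (hypothetical shapes).
rng = np.random.default_rng(0)
n, d = 16, 8
x = rng.standard_normal((n, d))
Wq, Wk, Wv = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))
A = rng.standard_normal((d, d)) * 0.1
B = rng.standard_normal((d, d)) * 0.1

print(quadratic_attention(x, Wq, Wk, Wv).shape)  # (16, 8)
print(linear_recurrence(x, A, B).shape)          # (16, 8)
```

Both mixers map a length-n sequence to a length-n sequence, but only the attention variant materializes an n-by-n interaction matrix; the recurrent variant is the kind of linear-complexity alternative the abstract refers to when comparing against a vision transformer.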
End date: 30/09/2026 · Host: UT · Budget: €174K
Estimated technological profile
Project duration: 29 months. Start date: 2024-04-29
End date: 2026-09-30

Funding line: awarded

The funding body HORIZON EUROPE notified the award of the project on 2024-04-29.
Target funding line. The project was financed through the following grant:
Budget. The total project budget amounts to €174K.
Project leader
EBERHARD KARLS UNIVERSITAET TUEBINGEN. No description or corporate purpose has been specified for this organization.
Technological profile: TRL 4-5