Innovating Works

VeriDeL

Funded
Verifiably Safe and Correct Deep Neural Networks
Deep machine learning is revolutionizing computer science. Instead of manually creating complex software, engineers now use automatically generated deep neural networks (DNNs) in critical financial, medical and transportation systems, obtaining previously unimaginable results. Despite their remarkable achievements, DNNs remain opaque. We do not understand their decision making and cannot prove their correctness, thus risking potentially devastating outcomes. For example, it has been shown that DNNs that navigate autonomous aircraft with the goal of avoiding collisions could produce incorrect turning advisories. Thus, the lack of formal guarantees regarding DNN behavior is preventing their safe deployment in critical systems, and could jeopardize human lives. Consequently, there is a crucial need to ensure that DNNs operate correctly.

Recent and exciting developments in formal verification allow us to automatically reason about DNNs. However, this is a nascent technology, which currently only scales to medium-sized DNNs, whereas real-world systems are much larger. Additionally, it is unclear how to apply this technology in practice. I propose to bridge this crucial gap through the development of novel, scalable and groundbreaking techniques for verifying the correctness of large DNNs, and by applying them to real systems of interest. I will do this by (1) developing search-space pruning techniques, which will enable us to verify larger DNNs; (2) creating novel abstraction-refinement techniques, which will allow us to scale to even larger DNNs; and (3) identifying new kinds of relevant specifications and key domains where DNNs are used, demonstrating the verification of real-world DNNs.

This project will result in a sound and expressive framework for automatically reasoning about DNNs orders of magnitude larger than is possible today. This framework will ensure the safety and correctness of DNNs deployed in critical systems, greatly benefiting users and society.
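The abstract frames DNN correctness as a formal verification problem: given a network and a specification over its inputs and outputs, prove that no admissible input can violate the specification. As a rough illustration of what such a query can look like, the sketch below encodes a tiny, hypothetical ReLU network and an output-bound property as an SMT problem using the z3-solver Python package. The network, its weights and the bound are invented for illustration only and are not taken from the VeriDeL project.

# Minimal, self-contained sketch of a DNN verification query (illustrative only).
# Requires the z3-solver package: pip install z3-solver
from z3 import Real, Solver, If, And, sat

def relu(x):
    # ReLU encoded as an SMT if-then-else term (piecewise linear).
    return If(x > 0, x, 0)

# Two inputs, constrained below to the box [0, 1] x [0, 1].
x1, x2 = Real("x1"), Real("x2")

# A hypothetical 2-2-1 ReLU network with fixed, made-up weights.
h1 = relu(1.0 * x1 - 2.0 * x2 + 0.5)
h2 = relu(0.5 * x1 + 1.0 * x2 - 1.0)
y = 1.0 * h1 - 1.0 * h2

s = Solver()
s.add(And(x1 >= 0, x1 <= 1, x2 >= 0, x2 <= 1))  # input region (precondition)
s.add(y > 2.0)                                   # negation of the property "y <= 2"

if s.check() == sat:
    print("Counterexample found:", s.model())    # an input that violates the bound
else:
    print("Verified: y <= 2 for every input in the box.")

An unsatisfiable result means the property holds over the entire input region, while a satisfiable one yields a concrete counterexample input. Scaling queries of this kind to large networks is precisely where techniques such as the search-space pruning and abstraction-refinement mentioned in the abstract come into play.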
Project duration: 60 months. Start date: 2023-10-25
End date: 2028-10-31

Funding line: granted

The funding body HORIZON EUROPE notified the award of the project on 2023-10-25.
Target funding line: The project was funded through the following grant:
ERC-2023-STG: ERC STARTING GRANTS
Budget: The total project budget amounts to €2M.
Project leader
THE HEBREW UNIVERSITY OF JERUSALEM
Technology profile: TRL 4-5