Probabilistic Formal Verification for Provably Trustworthy AI
Related projects
AutoFair: Human-Compatible Artificial Intelligence with Guarantees (3M€, closed)
TAILOR: Foundations of Trustworthy AI Integrating Reasoning Learn... (12M€, closed)
PID2021-122916NB-I00: INTELIGENCIA ARTIFICIAL EXPLICABLE PARA TOMA DE DECISIONES C... (260K€, closed)
VeriDeL: Verifiably Safe and Correct Deep Neural Networks (2M€, closed)
PID2019-106758GB-C33: APRENDIZAJE AUTOMATICO EXPLICABLE: UNA APROXIMACION PROBABIL... (109K€, closed)
SPATIAL: Security and Privacy Accountable Technology Innovations Alg... (5M€, closed)
Project information: PFV-4-PTAI
Project duration: 23 months
Start date: 2024-03-01
End date: 2026-02-28
Participation deadline: none.
Project description
This project is concerned with the formal verification of modern Artificial Intelligence (AI) systems with Machine Learning (ML) components. Certifying that an AI system satisfies certain requirements, such as fairness or safety standards, is pivotal for the regulation and use of this technology in many domains, especially those with high socio-economic stakes. Techniques that can provide formal guarantees on modern AI will have broad impacts on multiple areas described in the Horizon Europe strategic plan 2021-2024.

The proposed framework goes beyond the state of the art by adopting a probabilistic approach that satisfies three desiderata. First, it supports arbitrarily complex distributions, handling uncertainty over both non-deterministic systems and complex, multidimensional environments. Second, it unifies under the same formalism the verification of a multitude of ML models and properties of interest. Third, it enables the verification of ML models as part of a larger system and promises easier integration into existing probabilistic formal verification (PFV) tools. The approach is based on Weighted Model Integration (WMI), a recent formalism that enables probabilistic inference over arbitrary combinations of logical theories and algebraic constraints.

Paolo Morettin is one of the most prolific authors in the novel but vibrant field of WMI. With both industrial experience in formal verification and an ML-oriented scientific background, he is the ideal candidate to push the boundaries of WMI-based probabilistic formal verification. As an MSCA postdoctoral fellow, he will advance the state of the art with both theoretical and technological contributions, with the ultimate goal of enabling and facilitating the integration of the proposed framework into existing PFV tools. At the same time, he will develop a highly valuable multidisciplinary ML/FV background, enhancing his career prospects.
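To give a flavor of the core idea, Weighted Model Integration computes the integral of a weight function over the region where a logical formula (combining Boolean structure and algebraic constraints) is satisfied. The following is a minimal numerical sketch, not the project's actual tooling: the toy formula `chi`, the weight `w(x) = 2x`, and the midpoint-rule integrator are all illustrative choices made here, whereas real WMI solvers decompose the formula symbolically with SMT techniques.

```python
# Toy Weighted Model Integration (WMI) sketch over one continuous variable.
# WMI(chi, w) = integral of w(x) over { x : chi(x) holds }.

def chi(x: float) -> bool:
    # chi := (0 <= x <= 1) AND (x <= 0.5 OR x >= 0.8)
    # A formula mixing algebraic constraints with Boolean disjunction.
    return 0.0 <= x <= 1.0 and (x <= 0.5 or x >= 0.8)

def weight(x: float) -> float:
    # Unnormalized density w(x) = 2x over the unit interval.
    return 2.0 * x

def wmi_midpoint(lo: float, hi: float, n: int = 100_000) -> float:
    # Midpoint-rule integration of the weight, restricted to models of chi.
    step = (hi - lo) / n
    total = 0.0
    for i in range(n):
        x = lo + (i + 0.5) * step
        if chi(x):
            total += weight(x) * step
    return total

# Exact value: int_0^0.5 2x dx + int_0.8^1 2x dx = 0.25 + 0.36 = 0.61
print(round(wmi_midpoint(0.0, 1.0), 3))
```

In a verification setting, `chi` would encode the system (e.g., an ML model plus a property of interest) and `weight` the distribution over its environment, so the integral directly yields the probability that the property holds.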