Federated and distributed inference leveraging sensing and communication in the computing continuum
FIND-OUT project information
Project duration: 61 months
Start date: 2023-05-04
End date: 2028-06-30
Participation deadline: none.
Project description
The integration of sensing and communication is attracting fervent research activity and will result in a myriad of contextual data that, if properly processed, may enable a better understanding of local and global phenomena while increasing the quality, security, and efficiency of our ecosystems. The computing continuum offers a timely and unique solution for processing such a massive volume of sensed data, as it provides virtually unlimited and widely distributed computing resources. Nevertheless, deploying data analysis at the edge or in the cloud has many implications for latency, privacy, security, and data integrity. As we learn to sense ubiquitously and build tools capable of handling the sensed data, the greatest challenge is to understand how and where to process them.

The purpose of this project is to develop a pioneering framework to guide the design of federated and distributed inference systems, leveraging sensing and communication and harnessing the computing continuum. The framework will build on:

(i) the definition of statistical and mathematical models for the sensed data, which capture the complex and interrelated phenomena underpinning sensing and communication systems, with different levels of integration;
(ii) the development of cloud-native inference algorithms, mainly distributed and parallelized, with scalable complexity that can be adapted to dynamic performance requirements;
(iii) the design of orchestration strategies to guide the flexible deployment of the inference process at the edge and in the cloud, with dynamic allocation of computing resources.

The aim is to overcome the paradigmatic accuracy-complexity trade-off that has driven distributed inference for decades, leading to a paradigm shift that encompasses multi-level performance indicators beyond accuracy, including latency, integrity, privacy, and security, and how these impact confidence in the inferred phenomena.
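To make the federated-inference idea concrete, the following is a minimal illustrative sketch (not the project's actual algorithms): edge nodes estimate a shared quantity from their own sensed data and transmit only local summaries, which an aggregator fuses into a global estimate. All names and the toy scenario are assumptions introduced for illustration.

```python
import numpy as np

# Hypothetical scenario: three edge nodes sense a common phenomenon
# (here, a scalar with true mean 5.0) with different sample sizes.
rng = np.random.default_rng(0)
true_mean = 5.0
local_data = [true_mean + rng.normal(0.0, 1.0, size=n) for n in (50, 200, 1000)]

# Each node shares only (sample count, local mean) with the aggregator,
# so raw measurements never leave the node -- a basic privacy property
# of federated inference.
summaries = [(len(x), float(x.mean())) for x in local_data]

# The aggregator fuses the summaries, weighting by sample size; this is
# the optimal linear fusion for a common mean with equal noise variance.
total = sum(n for n, _ in summaries)
global_estimate = sum(n * m for n, m in summaries) / total

print(f"global estimate from {total} distributed samples: {global_estimate:.3f}")
```

In a real deployment the summaries would be richer statistics (or model updates) and the fusion rule would account for heterogeneous noise, latency, and integrity constraints, but the communication pattern is the same.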