The Power of Randomization in Uncertain Environments
Interesting projects
TIN2012-37954: ALGORITMOS DE APRENDIZAJE COMPUTACIONAL EN ENTORNOS DISTRIBU... (135K€, Closed)
BES-2013-063659: ALGORITMOS DE APRENDIZAJE COMPUTACIONAL EN ENTORNOS DISTRIBU... (84K€, Closed)
PROXNET: Modelling Complex Networks Through Graph Editing Problems (208K€, Closed)
TEC2011-23113: DESARROLLO E IMPLEMENTACION DE SISTEMAS DE COMPUTACION DE MU... (67K€, Closed)
NACS: New Approaches to Counting and Sampling (1M€, Closed)
LTSPD: Learning and Testing Structured Probability Distributions (100K€, Closed)
Project information: UncertainENV
Project duration: 85 months
Start date: 2018-08-10
End date: 2025-09-30
Project leader: TEL AVIV UNIVERSITY
TRL: 4-5
Project budget: 2M€
Participation deadline: none
Project description
Much of the research on the foundations of graph algorithms is carried out under the assumption that the algorithm has full knowledge of the input data.
In spite of the theoretical appeal and simplicity of this setting, the assumption that the algorithm has full knowledge does not always hold.
Indeed, uncertainty and partial knowledge arise in many settings.
One example is when the data is very large, in which case even reading the entire input once is infeasible and sampling is required (a toy illustration follows below).
Another example is where data changes occur over time (e.g., social networks where information is fluid).
A third example is where processing of the data is distributed over computation nodes, and each node has only local information.
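As a minimal illustration of the sampling regime in the first example above, the following Python sketch estimates the average degree of a graph by querying the degrees of a random sample of vertices rather than scanning every adjacency list. The function name, its interface, and the toy graph are hypothetical choices for this sketch, not part of the proposal.

```python
import random

def estimate_average_degree(graph, num_samples=1000, rng=None):
    """Estimate the average degree of `graph` (a dict mapping each vertex
    to its list of neighbours) by querying the degrees of uniformly
    sampled vertices, rather than scanning every adjacency list."""
    rng = rng or random.Random()
    vertices = list(graph)                    # only the vertex set is read in full
    sample = rng.choices(vertices, k=num_samples)
    return sum(len(graph[v]) for v in sample) / num_samples

# Usage: on this toy graph the true average degree is 1.5, and the
# estimate concentrates around that value as num_samples grows.
g = {0: [1, 2], 1: [0], 2: [0, 3], 3: [2]}
print(estimate_average_degree(g, num_samples=500))
```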
Randomization is a powerful tool in the classic setting of graph algorithms with full knowledge, and it is often used to simplify algorithms and to speed up their running time.
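One classic textbook illustration of this phenomenon (offered here only as background, not as the proposal's own method) is Karger's randomized contraction algorithm for the global minimum cut: a few lines of random edge contractions replace a more involved deterministic flow-based computation. The sketch below is a minimal Python implementation; the function name, the union-find shortcut, and the number of trials are assumptions of this sketch.

```python
import random

def karger_min_cut(edges, n, trials=100, rng=None):
    """Karger's contraction algorithm: contract edges in a uniformly random
    order (tracked with union-find) until two super-vertices remain; the
    surviving edges between them form a cut.  Repeating `trials` times makes
    it likely that the minimum cut is found.  `edges` is a list of (u, v)
    pairs over vertices 0..n-1."""
    rng = rng or random.Random()
    best = len(edges)
    for _ in range(trials):
        parent = list(range(n))

        def find(x):
            # Union-find root lookup with path halving.
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x

        components = n
        order = edges[:]
        rng.shuffle(order)
        for u, v in order:
            if components == 2:
                break
            ru, rv = find(u), find(v)
            if ru != rv:                      # contract the edge (u, v)
                parent[ru] = rv
                components -= 1
        cut = sum(1 for u, v in edges if find(u) != find(v))
        best = min(best, cut)
    return best

# Usage: the 4-cycle has minimum cut 2.
print(karger_min_cut([(0, 1), (1, 2), (2, 3), (3, 0)], n=4))
```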
However, physical computers are deterministic machines, and obtaining true randomness can be difficult.
Therefore, a central line of research focuses on the derandomization of algorithms that rely on randomness.
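A standard toy instance of derandomization (again, an illustration of the general technique rather than anything specific to this proposal) is the method of conditional expectations applied to MAX-CUT: assigning each vertex to a uniformly random side cuts at least half of the edges in expectation, and fixing the vertices greedily, one at a time, preserves that guarantee deterministically. The sketch below assumes an adjacency-set representation; the names are hypothetical.

```python
def derandomized_max_cut(adj):
    """Method of conditional expectations: a uniformly random two-sided
    assignment cuts at least |E|/2 edges in expectation; fixing each vertex
    to the side that keeps the conditional expectation from dropping yields
    the same guarantee deterministically.  `adj` maps each vertex to the set
    of its neighbours."""
    side = {}
    for v in adj:
        # Place v opposite to the majority of its already-placed neighbours,
        # so at least half of the decided incident edges are cut.
        left = sum(1 for u in adj[v] if side.get(u) is True)
        right = sum(1 for u in adj[v] if side.get(u) is False)
        side[v] = left <= right
    cut = sum(1 for v in adj for u in adj[v] if side[v] != side[u]) // 2
    return side, cut

# Usage: on a triangle the greedy placement cuts 2 of the 3 edges.
triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
print(derandomized_max_cut(triangle))
```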
The challenge of derandomization also arises in settings where the algorithm has some degree of uncertainty.
In fact, in many uncertain settings both the challenge of derandomization and the motivation for it are even greater.
Randomization by itself adds another layer of uncertainty, because different results may be obtained in different runs of the algorithm.
In addition, under uncertainty randomization often comes with additional assumptions on the model itself, which weakens the guarantees of the algorithm.
In this proposal I will investigate the power of randomization in uncertain environments.
I will focus on two fundamental areas of graph algorithms with uncertainty.
The first area relates to dynamic algorithms and the second area concerns distributed graph algorithms.