Project description
Modern data analysis and optimization rely on our ability to process rapidly evolving dynamic datasets, often involving matrix operations in very high dimensions. Dynamic data structures enable fast information retrieval on these huge databases by maintaining implicit information on the underlying data. As such, understanding the power and limitations of dynamic (matrix) data structures is a fundamental question in theory and practice. Despite decades of research, there are still very basic dynamic problems whose complexity is (exponentially) far from understood; bridging this gap is one of the centerpieces of this proposal.
The second theme of this proposal is advancing the nascent role of dynamic data structures in continuous optimization. For over a century, the traditional focus of optimization research was on improving the convergence rate of local-search methods. The last ~3 years have witnessed the dramatic potential of dynamic data structures in reducing the cost per iteration of (Newton-type) optimization algorithms, revealing that the bottleneck to accelerating literally thousands of algorithms is the efficient maintenance of dynamic matrix functions. This new framework is only in its early stages, but it has already led to breakthroughs on decades-old problems in computer science. This proposal will substantially develop this interdisciplinary theory and identify the mathematical machinery that would lead to ultra-fast first- and second-order convex optimization.
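To give a flavor of the kind of dynamic matrix maintenance that reduces cost per iteration, the following is a minimal sketch (not the proposal's algorithm) of the classical Sherman-Morrison identity: when a matrix undergoes a rank-one update, its inverse can be maintained in O(n^2) time rather than recomputed from scratch in O(n^3).

```python
import numpy as np

def update_inverse(A_inv, u, v):
    """Sherman-Morrison: given A^{-1}, return (A + u v^T)^{-1} in O(n^2).

    (A + u v^T)^{-1} = A^{-1} - (A^{-1} u)(v^T A^{-1}) / (1 + v^T A^{-1} u)
    """
    Au = A_inv @ u
    vA = v @ A_inv
    denom = 1.0 + v @ Au          # must be nonzero for the update to exist
    return A_inv - np.outer(Au, vA) / denom

rng = np.random.default_rng(0)
n = 50
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned test matrix
A_inv = np.linalg.inv(A)

# Apply a rank-one update and maintain the inverse dynamically.
u, v = rng.standard_normal(n), rng.standard_normal(n)
A_inv_new = update_inverse(A_inv, u, v)

# Compare against recomputing the inverse directly.
err = np.abs(A_inv_new - np.linalg.inv(A + np.outer(u, v))).max()
```

In interior-point and other Newton-type methods, the Hessian often changes by low-rank or sparse corrections between iterations, which is exactly the regime where such dynamic updates beat recomputation.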
In the non-convex setting, this proposal demonstrates the game-changing potential of dynamic data structures and algebraic sketching techniques in achieving scalable training and inference of deep neural networks, a major challenge of modern AI. Our program is based on a novel connection between kernel methods and compressed-sensing techniques for approximate matrix multiplication.
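The compressed-sensing view of approximate matrix multiplication can be illustrated with a simple Gaussian sketch of the shared inner dimension (a minimal illustrative example, not the proposal's method; the function name and dimensions are hypothetical):

```python
import numpy as np

def sketched_matmul(A, B, k, rng):
    """Approximate A @ B by sketching the shared inner dimension d down to k.

    S has iid N(0, 1/k) entries, so E[S.T @ S] = I_d, making
    (A @ S.T) @ (S @ B) an unbiased estimate of A @ B that costs
    O((m + p) d k) instead of O(m d p) for A of shape (m, d), B of (d, p).
    """
    d = A.shape[1]
    S = rng.standard_normal((k, d)) / np.sqrt(k)
    return (A @ S.T) @ (S @ B)

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 2000))
B = A.T                      # Gram-matrix example: exact product is A @ A.T
exact = A @ B
approx = sketched_matmul(A, B, 500, rng)   # sketch inner dim 2000 down to 500
rel_err = np.linalg.norm(approx - exact) / np.linalg.norm(exact)
```

The approximation error decays like 1/sqrt(k), so the sketch size trades accuracy against the 4x reduction in inner dimension used here.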