Project description
Deploying algorithmic solutions in real-world applications raises two challenges. First, we need easy-to-use and universal algorithms.
Second, we need to guarantee that algorithmic solutions can be understood by the people using them. The first of these challenges is
addressed by the TUgbOAT project, which aims to deliver unified algorithmic tools. Here, we propose to develop tools that address
the second challenge.
In many usage scenarios, an algorithm proposes a solution to a human operator. The main challenge in such cases is to convince the
operator to use the returned solution. Traditionally, we think of algorithms in a black-box manner, i.e., as tools for finding a good
solution. We do not expect algorithms to give a human-understandable explanation of why this is the best solution, what alternatives
exist, or where the bottlenecks are. Nevertheless, we humans still tend to ask these questions, even when we understand the algorithms
that are used.
Currently, we lack good tools that could explain the results of optimization algorithms, e.g., for the assignment problem.
For practitioners like ourselves, who work together with companies to deploy algorithmic solutions in real-world cases, the need to
provide explainable algorithms has become pressing. Here we will test and implement results developed in TUgbOAT that can be used
to complement algorithms with human-understandable explanations. In particular, we plan to:
- enrich algorithms so that they offer meaningful alternative solutions,
- apply Shapley value methods to determine the key elements of a solution,
- work with perturbed inputs to obtain robust and more concise solutions,
- generate concise decision trees that explain the steps taken by an algorithm.
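To illustrate the Shapley value item above, exact Shapley values can attribute how much each element contributes to the quality of a joint solution. The sketch below is a toy example, not code from the project: it assumes a hypothetical cooperative game in which workers `a`, `b`, `c` have skill sets and the value of a coalition is the number of distinct tasks it can cover.

```python
from itertools import combinations
from math import factorial

def shapley_values(players, v):
    """Exact Shapley values: phi[p] averages p's marginal contribution
    v(S ∪ {p}) - v(S) over all coalitions S not containing p."""
    n = len(players)
    phi = {p: 0.0 for p in players}
    for p in players:
        others = [q for q in players if q != p]
        for k in range(n):
            for S in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[p] += weight * (v(frozenset(S) | {p}) - v(frozenset(S)))
    return phi

# Hypothetical toy game: coalition value = number of distinct tasks covered.
skills = {"a": {"t1"}, "b": {"t1", "t2"}, "c": {"t3"}}

def coverage(coalition):
    return len(set().union(*(skills[p] for p in coalition))) if coalition else 0

phi = shapley_values(list(skills), coverage)
print(phi)  # worker b, whose skills are broadest, gets the largest share
```

Exact computation enumerates all coalitions, so for larger instances one would switch to sampling-based approximations; the point here is only the attribution principle.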
This project aims to deliver the core components of a software library providing explainable algorithms. We plan to concentrate on the
task assignment problem (i.e., matchings), where we already cooperate with companies.
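As a minimal sketch of the kind of explanation we have in mind for task assignment (hypothetical code, using brute force for clarity only, not the project's library), one can solve a small assignment instance, forbid one worker-task pair, and re-solve: the operator then sees the optimal matching, the best alternative avoiding that pair, and the price of deviating.

```python
from itertools import permutations

def solve_assignment(cost, forbidden=None):
    """Brute-force min-cost assignment on an n x n cost matrix.
    `forbidden` is a set of (worker, task) pairs that must not be used."""
    forbidden = forbidden or set()
    n = len(cost)
    best, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        if any((w, t) in forbidden for w, t in enumerate(perm)):
            continue
        total = sum(cost[w][perm[w]] for w in range(n))
        if total < best_cost:
            best, best_cost = perm, total
    return best, best_cost

def explain_alternative(cost, pair):
    """Report the optimum, a best alternative avoiding `pair`,
    and the extra cost of that deviation."""
    opt, opt_cost = solve_assignment(cost)
    alt, alt_cost = solve_assignment(cost, forbidden={pair})
    return opt, opt_cost, alt, alt_cost - opt_cost

cost = [[4, 1, 3],
        [2, 0, 5],
        [3, 2, 2]]
opt, opt_cost, alt, extra = explain_alternative(cost, (0, 1))
print(opt, opt_cost)  # (1, 0, 2) 5
print(alt, extra)     # (0, 1, 2) 1
```

For realistic instance sizes one would use a polynomial-time solver (e.g., the Hungarian algorithm) instead of enumeration; the sketch only shows the shape of an "alternative solution" explanation.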