Project description
We trust computers with our lives: they steer our aircraft, control cardiac pacemakers, insulin pumps, ventilators, and other medical devices, and will soon drive our cars.
All these software systems are safety critical: an error in the software may lead to a catastrophe such as the loss of life or the explosion of a rocket.
Unfortunately, safety-critical systems have been plagued by fatal software failures from their early days until today.
In computer science, verification is the discipline of constructing software and systems that are correct by design.
It proves, with mathematical rigor, that a computer system works correctly under all given circumstances.
As real software is far too complex to be analyzed by hand, verification today uses computers itself to get the job done, i.e. it is understood as computer-aided verification.
With this, the following question immediately arises:
If we use a piece of software, say, the verifier, to guarantee that another computer system works correctly, how can we trust the result of the verification process if the verifier itself could have a bug?
In my opinion, this is an extremely important question to answer if we are serious about applying computer-aided verification to real-life safety-critical systems.
However, surprisingly little has been done to answer it.
The goal of Certywhere is to significantly advance the state of the art on this question by marrying two popular verification methods, automated model checking and interactive theorem proving, by means of certification.
In this approach, model checkers produce certificates, which can then be efficiently checked against the model and formula by an independent, formally verified certifier.
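To make the division of labor concrete, the following is a minimal sketch of the certification idea for plain safety properties of a finite transition system. All names are illustrative assumptions, not Certywhere's actual interfaces: the (untrusted) model checker would emit an inductive invariant as its certificate, and the small trusted certifier only re-checks three simple conditions against the model and the property.

```python
def check_certificate(init, transitions, safe, invariant):
    """Return True iff `invariant` certifies that every reachable
    state satisfies `safe`.

    init        -- set of initial states
    transitions -- dict mapping each state to its set of successors
    safe        -- set of states satisfying the safety property
    invariant   -- the certificate: a claimed inductive invariant
    """
    # 1. Every initial state lies in the invariant.
    if not init <= invariant:
        return False
    # 2. The invariant is closed under transitions (inductiveness).
    for s in invariant:
        if not set(transitions.get(s, ())) <= invariant:
            return False
    # 3. The invariant implies the property.
    return invariant <= safe

# Toy model: states 0..3; state 3 is unreachable and unsafe.
init = {0}
transitions = {0: {1}, 1: {2}, 2: {0}, 3: {3}}
safe = {0, 1, 2}
certificate = {0, 1, 2}  # supplied by the untrusted model checker

print(check_certificate(init, transitions, safe, certificate))  # True
```

Note the asymmetry that makes the approach attractive: finding the invariant may require a sophisticated model checker, but checking it is a few set inclusions, small enough to be formally verified once and for all.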
I want to apply the certification approach to a large range of model checking methods from the important areas of symbolic model checking and partial-order reduction.
In particular, I want to target timed automata, which are a popular formalism for verifying real-time systems.