Automated reasoning methods for hardware and software analysis: design, integration and application
According to a study by the U.S. National Institute of Standards and Technology (NIST Planning Report 02-3, The Economic Impacts of Inadequate Infrastructure for Software Testing, June 2002, http://www.nist.gov/director/prog-ofc/report02-3.pdf), software errors cost the U.S. economy an estimated $59.9 billion annually, of which about 60% is borne by software users and the remainder by developers and vendors. The study also found that, under reasonable assumptions, more than one third of these costs could be eliminated by techniques enabling earlier and more effective identification and removal of software defects. The comparable situation in the E.U. suggests that costs and potential savings there are of the same order of magnitude. To mention two of the most dramatic instances, software errors affect computer security and computer availability: hackers break into computers, or crash them, by attacking vulnerabilities in the services offered by almost any networked machine, and Internet worms and viruses exploit buffer overflows in web and mail servers. Since so many endeavours rely on computer software, any improvement to the quality/cost curve for software would have a broad social and economic impact.
Contemporary software artifacts pose new challenges, because they involve infinite-state systems, distributed applications and non-functional properties. Approaches to quality assurance are either process-based or evidence-based, and the mainstay of evidence-based techniques is testing. Testing, however, is expensive and cannot be exhaustive. Complementary to testing are analytic techniques, with static analysis as the forerunner. Most analytic techniques are only semi-automated and require trained users. While all these approaches are complementary, the importance of the problem demands the investigation of automated techniques that may lead to push-button tools.
The success of model checking for hardware verification suggested studying software model checking. However, software model checking is considerably harder, because software is in most cases inherently infinite-state. Furthermore, hardware analysis itself is far from being a solved problem: the increasing complexity of hardware designs and the transfer of functionality from software to hardware blur the boundaries between software and hardware analyses and their respective requirements.
One reason for the success of hardware model checking has been the possibility of reducing the problem to instances of SAT and running very efficient SAT solvers on them. From this perspective, a major source of difficulty with software model checking, and with more advanced hardware model checking, is that reasoning in propositional logic is not sufficient: the problems that arise require automated reasoning in arithmetic, in equational logic, and in fragments of first-order logic.
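As an illustration of the reduction mentioned above (the example is ours, not the proposal's), bounded model checking of a hypothetical 2-bit counter can be encoded as a propositional satisfiability question over Boolean variables b1_t, b0_t, one pair per time step; here a brute-force enumeration of assignments stands in for an efficient SAT solver:

```python
from itertools import product

def bmc_counter_sat(k, target):
    """Bounded model checking of a 2-bit counter, reduced to
    propositional satisfiability over variables b1_t, b0_t for
    t = 0..k.  Brute-force enumeration plays the role of a SAT
    solver.  Returns (True, t) if the counter value `target` is
    reachable within k steps, (False, None) otherwise."""
    nvars = 2 * (k + 1)  # two Boolean variables per time step
    for bits in product([False, True], repeat=nvars):
        # decode the assignment: state at step t is (b1_t, b0_t)
        state = [(bits[2 * t], bits[2 * t + 1]) for t in range(k + 1)]
        # initial-state constraint: the counter starts at 0
        if state[0] != (False, False):
            continue
        # transition constraint: each step increments the counter mod 4
        # (b0' = not b0, b1' = b1 xor b0)
        ok = True
        for t in range(k):
            b1, b0 = state[t]
            if state[t + 1] != (b1 != b0, not b0):
                ok = False
                break
        if not ok:
            continue
        # property: the target value is reached at some step <= k
        for t in range(k + 1):
            b1, b0 = state[t]
            if 2 * b1 + b0 == target:
                return True, t
    return False, None
```

For instance, value 3 is reachable in 3 steps but not in 2: `bmc_counter_sat(3, 3)` yields `(True, 3)` while `bmc_counter_sat(2, 3)` yields `(False, None)`.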
The thrust of this project is to enhance automated reasoning for software and hardware analysis by defining, designing and implementing decision procedures for theories, such as equality, linear arithmetic and data structures, as well as their combinations. Such procedures ought to be:
* easy to build
* easy to combine
* proof-producing
* efficient
* scalable
* incremental
and integrated into verification tools.
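To make the notion of decision procedure concrete, here is a minimal sketch (our illustration, not a design from the proposal) of one for the simplest theory listed above: conjunctions of equalities and disequalities between constants, decided with union-find. A full congruence-closure procedure would additionally handle function symbols:

```python
def decide_equality(equalities, disequalities):
    """Decide satisfiability of a conjunction of equalities and
    disequalities between constants (a fragment of the theory of
    equality), using union-find.  Returns True iff satisfiable."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    def union(x, y):
        parent[find(x)] = find(y)

    # merge the equivalence classes forced by the equalities
    for a, b in equalities:
        union(a, b)
    # unsatisfiable iff some disequality relates two constants
    # that the equalities have placed in the same class
    return all(find(a) != find(b) for a, b in disequalities)
```

For example, `x = y, y = z, x != z` is unsatisfiable (`decide_equality([("x", "y"), ("y", "z")], [("x", "z")])` returns `False`), while `x = y, x != z` is satisfiable.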
Since no single technique can satisfy all these desiderata, the project leverages state-of-the-art technologies in SAT solving, theorem proving, bounded model checking and software model checking to build decision procedures and embed them in new model checkers.
The project's team has the right blend of expertise to meet these challenges:
* Theorem proving, equational decision procedures (UNIVR)
* SAT-solving, SAT-based and equational decision procedures, software model checking (UNIGE)
* Software model checking (UNINA)
* SAT-based decision procedures, model checking (UNITN)
This project will further a new generation of model-checking tools that combine the speed of SAT solving with the expressivity required by advanced software and hardware analyses.