Submitted: January 2015

Abstract

As software becomes an increasingly important part of many safety-critical devices (such as planes, medical devices, and autonomous cars), the prevention of software bugs in such systems is a crucial part of their development.

One way to detect bugs and verify the behavior of a system is statistical model checking (SMC). In this approach, an abstract representation of the real-world system is created in the formalism of a mathematically defined modeling language. Based on this model, the system is simulated many times, and the properties under test are observed in each simulation run. The collected observations are then evaluated by stochastic algorithms, so that probabilistic guarantees can be made about the observed properties. Precise results require many observations, which can take a long time to compute. Since the simulation runs are independent of each other, however, the workload can be distributed across multiple machines that generate observations in parallel.
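To make the procedure concrete, the basic SMC loop can be sketched as follows. This is an illustrative sketch, not code from the thesis: the toy model, the function names, and the use of the Chernoff-Hoeffding bound to fix the number of runs are assumptions for the example.

```python
import math
import random

def simulate_run(rng):
    # Hypothetical toy model: a component fails each step with
    # probability 0.01; the property under test is whether it
    # survives 50 steps.
    return all(rng.random() >= 0.01 for _ in range(50))

def smc_estimate(epsilon, delta, seed=42):
    # Chernoff-Hoeffding bound: this many independent runs suffice
    # so that |estimate - true probability| <= epsilon holds with
    # confidence at least 1 - delta.
    n = math.ceil(math.log(2 / delta) / (2 * epsilon ** 2))
    rng = random.Random(seed)
    successes = sum(simulate_run(rng) for _ in range(n))
    return successes / n, n

p_hat, runs = smc_estimate(epsilon=0.01, delta=0.05)
print(f"estimated survival probability: {p_hat:.3f} over {runs} runs")
```

Because each run only draws from the model's random variables, the loop body is exactly the part that can be farmed out to independent machines.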

In this thesis, a distributed form of statistical model checking is implemented as an extension of the statistical model checker modes, which is part of the Modest Toolset. Since differing lengths of the simulation runs can introduce a bias into the results, we implement several methods to prevent this bias.
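The source of the bias can be illustrated with a small sketch: when workers report runs as they finish, short runs complete first and are over-represented in an early sample. One simple countermeasure, shown below, is to consume results in submission order rather than completion order. This is an illustrative example only; the toy model and the specific countermeasure are assumptions here, not the methods implemented in modes.

```python
import random
from concurrent.futures import ThreadPoolExecutor

def simulate_run(seed):
    # Hypothetical toy model with geometric run length: shorter
    # runs finish earlier, which is what causes the bias.
    rng = random.Random(seed)
    steps = 0
    while rng.random() >= 0.1:
        steps += 1
    return steps < 5  # property: the run terminates within 5 steps

def distributed_estimate(total_runs, workers=4, base_seed=0):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(simulate_run, base_seed + i)
                   for i in range(total_runs)]
        # Collecting results in submission order (not completion
        # order) avoids biasing the sample toward runs that happen
        # to finish first, i.e. the shorter ones.
        results = [f.result() for f in futures]
    return sum(results) / total_runs

print(distributed_estimate(10000))
```

Since each run is seeded independently, the estimate is reproducible regardless of how the runs are scheduled across workers.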

Additionally, we conduct an evaluation that examines the performance of the implementation under the different bias prevention methods and other parameters.