Model checking for the working man
Report written by Steven van Deursen. Thanks!
Soon after the start, I was disappointed to discover that the workshop didn't in any way cover fashion supermodels. Instead, it covered tools for developing mathematical proofs for small parts of our software. After recovering from this setback (and after finishing the Wiener schnitzel and French fries provided to us by Bol.com), we opened our laptops for the workshop.
The biggest thing I took away from the workshop was that modeling parts of your software and proving them correct is not a purely academic exercise. The tools are freely available, and ordinary developers can learn them in a fairly short amount of time. In the workshop we used the tool TLA+ to prove a small piece of code thread-safe.
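To give an impression of what such a specification looks like, here is a minimal, illustrative TLA+ sketch (my own toy example, not the workshop's actual exercise) of two threads performing an unprotected read-modify-write on a shared counter. Running the TLC model checker with `Correct` as an invariant finds the classic lost-update interleaving:

```tla
---- MODULE LostUpdate ----
EXTENDS Naturals, FiniteSets

Threads == {1, 2}    \* two model threads

VARIABLES counter, temp, pc

Init == /\ counter = 0
        /\ temp = [t \in Threads |-> 0]
        /\ pc = [t \in Threads |-> "read"]

\* Thread t reads the shared counter into its local copy.
Read(t) == /\ pc[t] = "read"
           /\ temp' = [temp EXCEPT ![t] = counter]
           /\ pc' = [pc EXCEPT ![t] = "write"]
           /\ UNCHANGED counter

\* Thread t writes back its incremented local copy.
Write(t) == /\ pc[t] = "write"
            /\ counter' = temp[t] + 1
            /\ pc' = [pc EXCEPT ![t] = "done"]
            /\ UNCHANGED temp

Next == \E t \in Threads : Read(t) \/ Write(t)

Spec == Init /\ [][Next]_<<counter, temp, pc>>

\* Invariant TLC reports as violated: both threads can read 0
\* before either writes, leaving counter = 1 instead of 2.
Correct == (\A t \in Threads : pc[t] = "done")
              => counter = Cardinality(Threads)
====
```

Adding a lock action around the read-and-write pair makes the invariant hold, which is the kind of before/after comparison a checker like TLC makes cheap to explore.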
A big plus for me was that Rico is a working man, just like the rest of us. When faced with a multi-threading issue at work, he decided to dive into the world of modeling to see whether a tool could help verify the correctness of his code. Without any prior training or experience he managed to solve that particular issue within a day, which is quite impressive.
Even more impressive is that he did this with the TLA+ tool, because the tool was clearly not built for anything outside academia. The interface is quirky, and finding out why the checker reported our model to be invalid gave me a headache!
The workshop ended with an intriguing discussion between Rico and the participants on whether model checking is actually helpful. The main objection was that it is quite tricky to specify a model in such a way that it correctly resembles the real running code. Translating our code into a model is a delicate matter and can easily go wrong, and after proving the model correct, translating it back into running code can go wrong as well. Besides this, our systems run on top of a particular memory model, which might be important to consider while defining the model. Very few developers are really aware of the exact details of such a memory model, and the reordering of instructions at the compiler and processor level, mixed with the latency of processor caches, can make it really hard to define a model that even remotely resembles reality. An attendee noted that model checking could therefore even lead to a false sense of security: what does it mean to prove a model correct while having no way to prove that the model correctly represents the code?
I enjoyed the workshop, and Rico has a very pleasant way of presenting. He gave me food for thought, but in the end he wasn't able to convince me to try model checking, due to a combination of the quirky TLA+ tooling and the identified risk of inaccurately specifying the model.
Let’s hope that in the near future we see improved tooling, and integration of these tools into our normal development environments, so that real code can be proven correct. When that happens, we can truly say that model checking has become feasible for the working man.
Did you miss the workshop? All material is available online! Model checking for the Working Man (m/f)