When, How, and Why Developers (Do Not) Test
How much should we test? And when should we stop testing? Since the dawn of software testing, these questions have tormented developers. But before we can answer how much we should test, we must first know how much we are actually testing.
In this talk, I am going to report the surprising findings of a large-scale case study on the state of developer testing, conducted with the purpose-built Eclipse plugin WatchDog. The open-ended case study launched after last year’s EclipseCon and has involved more than 1,500 software developers to date, yielding over 15 years of recorded and analyzed work time in the IDE.
Our findings question several commonly held assumptions about testing and may help explain the bug proneness of software observed in practice: the majority of developers in our study do not test; developers rarely run their tests in the IDE; Test-Driven Development (TDD) is not widely practiced; and, last but not least, software developers spend only a quarter of their work time engineering tests, whereas they believe they spend half of it testing.