Defects That Matter: Lessons from the Trenches
Bill Pugh, University of Maryland & Google
Except in rare, limited circumstances, software is never perfect. Large-scale production software contains numerous defects and mistakes, and bug databases hold hundreds or thousands of open bug reports. Despite this, software mostly satisfies the needs of its users, many defects never cause the application to significantly misbehave, and developers find that the pressure to develop and ship new software and features is as strong as the pressure to reduce the number of defects in the software.
This basic point isn't appreciated by much of the research in the field of software defect detection. The key question is not whether a tool can find a potential mistake or defect in software. Rather, it is whether having developers spend time using a new tool or technique, rather than spending that time on other software quality efforts, will result in a net improvement in the quality and timeliness of the software.
The key to using tools to improve software quality is understanding the potential cost of different kinds of defects, the cost of using a tool to find those defects, and the cost and ability of other software quality techniques (e.g., testing) to find those defects instead. All of these questions vary by project and situation.
As part of the talk, I'll summarize the May 2009 FindBugs fixit, in which 700 engineers at Google looked at 4,000 FindBugs warnings on Google's Java codebase. 300 of the engineers supplied a total of more than 9,000 classifications of issues, and more than 80% of the classifications were "should fix" or "must fix". More than 1,500 of the issues were removed from Google's codebase over several days. I'll talk about how we designed and conducted the fixit, and what we expected and learned about cost-effective removal of defects at Google.
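To give a flavor of the kind of warning FindBugs produces, here is a hypothetical example (not drawn from the Google fixit itself) of a classic Java defect that FindBugs flags as ES_COMPARING_STRINGS_WITH_EQ: comparing string contents with `==`, which tests object identity rather than equality. The class and method names are illustrative only.

```java
// Hypothetical example of a FindBugs-detectable defect (not from the talk).
public class StringCheck {
    // Buggy: '==' compares object references, not string contents.
    // FindBugs flags this as ES_COMPARING_STRINGS_WITH_EQ.
    static boolean isYesBuggy(String answer) {
        return answer == "yes";
    }

    // Fixed: value comparison; calling equals() on the literal
    // also avoids a NullPointerException when answer is null.
    static boolean isYes(String answer) {
        return "yes".equals(answer);
    }

    public static void main(String[] args) {
        String input = new String("yes"); // distinct object, same contents
        System.out.println(isYesBuggy(input)); // prints false: references differ
        System.out.println(isYes(input));      // prints true: contents match
    }
}
```

Defects like this illustrate the talk's theme: the bug is trivially cheap for a tool to find, may lurk untriggered for a long time, and yet can misbehave badly on the rare path where the two strings are distinct objects.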
To find out more about the creator of FindBugs and five-time JavaOne rock star, visit Bill Pugh's homepage.