Exploding Software-Engineering Myths

A great article about some hefty & useful analysis! Here are the major take-aways as I saw them:

For Nachi Nagappan, a senior researcher at Microsoft Research Redmond with the Empirical Software Engineering and Measurement Research Group (ESM):

I found then that many of the beliefs I had in university about software engineering were actually not that true in real life.

  • Code coverage measures how comprehensively a piece of code has been tested; if a program contains 100 lines of code and the quality-assurance process tests 95 of them, the effective code coverage is 95 percent (a short sketch of this arithmetic appears after this list).
    • What Nagappan and his colleagues saw was that, contrary to what is taught in academia, higher code coverage was not the best measure of post-release failures in the field.
  • In TDD, programmers first write the test code, then the actual source code, which should pass the test. This is the opposite of conventional development, in which writing the source code comes first, followed by writing unit tests. TDD adherents claim the practice produces better design and higher quality code.
    • What the research team found was that the TDD teams produced code that was 60 to 90 percent better in terms of defect density than non-TDD teams. They also discovered that TDD teams took 15 to 35 percent longer to complete their projects (a minimal test-first sketch follows this list).
      “Over a development cycle of 12 months, 35 percent is another four months, which is huge,” Nagappan says. “However, the tradeoff is that you reduce post-release maintenance costs significantly, since code quality is so much better.”
  • In software development, assertions are contracts or ingredients in code, often written as annotations in the source-code text, describing what the system should do rather than how to do it.
    • The team observed a definite negative correlation: more assertions and code verifications mean fewer bugs (a contract-style sketch follows this list). Looking behind the straight statistical evidence, they also found a contextual variable: experience. Software engineers who were able to make productive use of assertions in their code base tended to be well-trained and experienced, a factor that contributed to the end results.
  • Conway’s Law can be paraphrased as, “If there are N product groups, the result will be a software system that to a large degree contains N versions or N components.” In other words, the system will resemble the organization building it. To test this, the team took the entire tree structure of the Windows group as an example, taking into account not only reporting structure but also degrees of separation between engineers working on the same project, the level to which ownership of a code base rolled up, the number of groups contributing to the project, and other metrics developed for this study.
  • One of the most cherished beliefs in software project management is that a distributed-development model has a negative impact on software quality because of problems with communication, coordination, culture, and other factors. The study looked for statistical evidence that components developed by distributed teams resulted in software with more errors than components developed by collocated teams.
    • The researchers found that the differences were statistically negligible. One note: Most people preferred to talk to someone from their own organization 4,000 miles away rather than someone only five doors down the hall but from a different organization. Organizational cohesiveness played a bigger role than geographical distance.
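
For the coverage bullet, here is a minimal sketch of the 95-of-100-lines arithmetic. The `line_coverage` helper is hypothetical and not from the study; real tools such as coverage.py count executed statements for you.

```python
# Minimal sketch of the coverage arithmetic above (hypothetical helper, not from the study).
def line_coverage(total_lines: int, executed_lines: int) -> float:
    """Percentage of executable lines exercised by the test suite."""
    if total_lines == 0:
        return 0.0
    return 100.0 * executed_lines / total_lines

# The article's example: QA exercises 95 of 100 lines -> 95% coverage.
print(line_coverage(total_lines=100, executed_lines=95))  # 95.0
```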
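The test-first sketch for the TDD bullet, using a hypothetical `parse_version` function: the test is written first and fails until the implementation exists, reversing the conventional code-then-test order.

```python
import unittest

# Step 2 in TDD: the implementation, written only after the test below was in place.
def parse_version(text: str) -> tuple:
    """Turn a dotted version string like '1.2.3' into a tuple of ints."""
    return tuple(int(part) for part in text.split("."))

# Step 1 in TDD: the test comes first and drives the design of parse_version.
class TestParseVersion(unittest.TestCase):
    def test_parses_dotted_version_string(self):
        self.assertEqual(parse_version("1.2.3"), (1, 2, 3))

if __name__ == "__main__":
    unittest.main()
```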
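And the contract-style sketch for the assertions bullet: a hypothetical `allocate` function whose preconditions and postconditions are written directly into the code, the kind of in-code checking the study correlated with fewer bugs.

```python
# Hypothetical example of assertions as contracts: preconditions state what callers
# must guarantee, postconditions state what the function promises in return.
def allocate(budget: float, weights: list) -> list:
    # Preconditions.
    assert budget >= 0, "budget must be non-negative"
    assert weights and abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"

    shares = [budget * w for w in weights]

    # Postcondition: the shares add back up to the original budget.
    assert abs(sum(shares) - budget) < 1e-6, "shares must total the budget"
    return shares

print(allocate(100.0, [0.5, 0.3, 0.2]))
```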

SOURCE: Exploding Software-Engineering Myths