Software Reliability

Agile On-Time, But Is It Reliable?

With agile projects, we hear a lot about the planning benefits of a fixed team size and a fixed number of sprints. That is all great for finishing on time and within budget, but we also need to focus on the quality of the software. We often hear stories about functionality being put on hold because reliability goals were not met.

There are agile estimation models that can help with this, providing reliability forecasts at the release level before the project starts or during the early sprints. They do so by leveraging historical data along with time-tested forecasting models built to support agile projects.

In the first view, you can see the estimate for the number of defects remaining, a big-picture view of the overall release. Product managers and anyone concerned with client satisfaction can use these models to predict when the software will be reliable enough to deliver to the customer.
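As a rough illustration of how such a forecast can be produced (a simplified sketch with made-up sprint data, not the actual SLIM model), one common approach is to fit a Rayleigh-style cumulative defect-discovery curve to the counts from early sprints and project the defects remaining:

    # Hedged sketch: fit a Rayleigh-style cumulative defect-discovery curve
    # to early-sprint defect counts and project the defects remaining.
    # The sprint data below is illustrative, not from a real project.
    import numpy as np
    from scipy.optimize import curve_fit

    def cumulative_defects(t, total_defects, peak_time):
        # Cumulative Rayleigh curve: D(t) = K * (1 - exp(-(t / td)^2))
        return total_defects * (1.0 - np.exp(-(t / peak_time) ** 2))

    # Cumulative defects found by the end of each sprint (assumed data).
    sprints = np.array([1, 2, 3, 4, 5, 6], dtype=float)
    found = np.array([8, 22, 41, 58, 70, 78], dtype=float)

    # Fit the curve; p0 is a rough initial guess (total defects, peak sprint).
    (total_est, peak_est), _ = curve_fit(cumulative_defects, sprints, found,
                                         p0=[100.0, 4.0])

    remaining = total_est - found[-1]
    print(f"Estimated total defects for the release: {total_est:.0f}")
    print(f"Estimated peak of defect discovery: sprint {peak_est:.1f}")
    print(f"Estimated defects remaining after sprint 6: {remaining:.0f}")

The curve shape and the numbers here are assumptions; the point is simply that a handful of early actuals, combined with a historically calibrated curve, is enough to produce a release-level forecast of defects remaining.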

MTTD over Time

In the second view, you can see the total MTTD (Mean Time to Defect) and the MTTD by severity level. The MTTD is the amount of time that elapses between discovered defects. Each chart shows the months progressing on the horizontal axis and the MTTD (in days) improving over time on the vertical axis.

Mean Time to Defect
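If you're curious how the underlying metric is computed, here is a minimal sketch (with illustrative dates and severities, not real project data) that derives the overall MTTD and the MTTD by severity from defect discovery dates:

    # Hedged sketch: compute MTTD (Mean Time to Defect) as the average number
    # of days between consecutive defect discoveries. The data is made up.
    from datetime import date

    # (discovery date, severity) for defects found during a test period.
    defects = [
        (date(2013, 3, 1), "High"),
        (date(2013, 3, 4), "Medium"),
        (date(2013, 3, 10), "High"),
        (date(2013, 3, 19), "Low"),
        (date(2013, 4, 2), "Medium"),
    ]

    def mttd_days(discovery_dates):
        # Average gap, in days, between consecutive discoveries.
        ordered = sorted(discovery_dates)
        gaps = [(b - a).days for a, b in zip(ordered, ordered[1:])]
        return sum(gaps) / len(gaps) if gaps else None

    overall = mttd_days([d for d, _ in defects])
    print(f"Overall MTTD: {overall:.1f} days")

    # MTTD by severity level, mirroring the per-severity charts above.
    for level in ("High", "Medium", "Low"):
        value = mttd_days([d for d, s in defects if s == level])
        print(f"{level}: " + (f"{value:.1f} days" if value is not None
                              else "not enough data"))

A rising MTTD, overall or within a severity level, means the time between defect discoveries is stretching out and the software is becoming more reliable.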

Blog Post Categories: Agile, Quality, Estimation, Software Reliability

Managing Project Risk through Early Defect Detection

With the most recent spurt of inclement weather, there is really no denying that winter is here. After waking up to about 4 inches of snow accumulation, I begrudgingly bundled myself up in my warmest winter gear and proceeded to dig out my car. Perhaps the brisk air woke me up faster than usual, because as I dug a path to the car I began to think about software testing, specifically how effective early testing can reduce the chances of schedule slippages and cost overruns. Allow me to explain.

Being an eternal optimist, I was grateful that the snow I was shoveling and later brushing off my car was light and powdery. Despite the frigid temperature and large quantity of snow, I realized it was good that I had decided to complete this task first thing in the morning. At that hour the snow was relatively easy to clear; had I waited until the afternoon, the sun would have melted enough of it to make the job significantly more difficult and time-consuming.

They Just Don't Make Software Like They Used to… Or do they?

With the release of SLIM-Suite 8.1 quickly approaching, I thought I’d take a moment to share a preview of the updated QSM Default Trend Lines and how they affect your estimates. In this post I want to focus on the differences in quality and reliability between 2010 and 2013 for the projects in our database. Since our last database update, we’ve added over 200 new projects to our trend groups.

Here are the breakouts of the percent increases in the number of projects by Application Type:

  • Business Systems: 14%
  • Engineering Systems: 63%
  • Real Time Systems: 144%

Below you will find an infographic outlining some of the differences in quality between 2010 and 2013.

Changes in Software Project Quality between 2010 and 2013

From the set of charts above, we can see trends emerging that indicate how quality changed between 2010 and 2013. Looking at the data, it’s apparent that two distinct stories are being told:

1. The Quality of Engineering Systems has Increased

Blog Post Categories: Software Reliability, Quality

Finding Defects Efficiently

Several weeks ago I read an interesting study on finding bugs in giant software programs:

The efficiency of software development projects is largely determined by the way coders spot and correct errors.

But identifying bugs efficiently can be a tricky business, when the various components of a program can contain millions of lines of code. Now Michele Marchesi from the University of Cagliari and a few pals have come up with a deceptively simple way of efficiently allocating resources to error correction.

...Marchesi and pals have analysed a database of Java programs called Eclipse and found that the size of these programs follows a log-normal distribution. In other words, the database, and by extension any large project, is made up of lots of small programs but only a few big ones.

So how are errors distributed among these programs? It would be easy to assume that the errors are evenly distributed per 1000 lines of code, regardless of the size of the program.

Not so, say Marchesi and co. Their study of the Eclipse database indicates that errors are much more likely in big programs. In fact, in their study, the top 20 per cent of the largest programs contained over 60 per cent of the bugs.

That points to a clear strategy for identifying the most errors as quickly as possible in a software project: just focus on the biggest programs.
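To make that intuition concrete, here is a toy simulation (the parameters are illustrative guesses, not values from the Marchesi study): module sizes are drawn from a log-normal distribution, defect counts grow slightly faster than linearly with size, and we check what share of the defects the largest 20 per cent of modules end up holding:

    # Hedged sketch: toy simulation of defect concentration in large modules.
    # The size distribution and defect-vs-size exponent are assumptions.
    import numpy as np

    rng = np.random.default_rng(42)

    # 1,000 modules with log-normally distributed sizes (lines of code).
    sizes = rng.lognormal(mean=6.0, sigma=1.2, size=1000)

    # Assume expected defects scale as size**1.2 (mildly superlinear density).
    defects = rng.poisson(0.001 * sizes ** 1.2)

    # Share of all defects sitting in the largest 20 per cent of modules.
    largest = np.argsort(sizes)[::-1][: len(sizes) // 5]
    share = defects[largest].sum() / defects.sum()
    print(f"Largest 20% of modules hold {share:.0%} of the defects")

Under these assumed parameters, the largest fifth of the modules typically end up with well over half of the defects, which is why concentrating review and test effort on the biggest components can pay off.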

Nicole Tedesco adds her thoughts:

Blog Post Categories: Defects, Testing, Software Reliability