Quality

Webinar Replay: How to Estimate Reliability for On-Time Software Development

If you were unable to attend our recent webinar, "How to Estimate Reliability for On-Time Software Development," a replay is now available.

Software development is a major investment area for thousands of organizations worldwide. Negotiation and early planning meetings often revolve around major cost and schedule decisions, but one of the most important factors, reliability, often gets left out of these early discussions. This is unfortunate, since early reliability estimates can help ensure that a quality product is delivered and predict whether the project will finish on time and within budget. In this webinar, Keith Ciocco shows how to leverage the QSM model-based tools to estimate and track the important reliability numbers along with cost, scope, and schedule.

The presentation covers a range of related topics and includes a lively Q&A session with the audience.

How Does Agile Quality Compare?

During a recent consulting engagement, a customer asked if the QSM defect discovery model applied to Agile projects. Of course, the best (and only) way to determine this was empirically. From our database we extracted a sample of business IT projects, completed since 2013, that recorded pre-implementation defects: 81 of them were Agile and 354 did not specify Agile as their development methodology. We created average trend lines for both datasets, and both displayed very similar patterns that conformed to the QSM defect discovery model. This allowed us to answer our customer's question affirmatively.
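
For readers who want to run a similar check on their own data, here is a minimal sketch of the idea in Python. It assumes a Rayleigh-shaped discovery curve, the general form commonly associated with Putnam-style defect models, rather than QSM's calibrated model, and the monthly defect counts below are made up for illustration.

```python
# Minimal sketch (not the QSM model itself): fit a Rayleigh-shaped
# defect-discovery curve to average monthly defect counts for two
# project samples and compare the resulting trend lines.
import numpy as np
from scipy.optimize import curve_fit

def rayleigh_discovery(t, total_defects, t_peak):
    """Defects found per month under a Rayleigh discovery pattern.

    total_defects -- expected defects over the whole project
    t_peak        -- month at which the discovery rate peaks
    """
    return (total_defects * t / t_peak**2) * np.exp(-t**2 / (2 * t_peak**2))

def fit_trend(months, defects_per_month):
    """Return fitted (total_defects, t_peak) for one project sample."""
    p0 = [sum(defects_per_month), np.argmax(defects_per_month) + 1]
    params, _ = curve_fit(rayleigh_discovery, months, defects_per_month, p0=p0)
    return params

# Hypothetical averaged monthly defect counts for the two samples.
months = np.arange(1, 11)
agile_avg     = np.array([3, 7, 11, 13, 12, 9, 6, 4, 2, 1])
non_agile_avg = np.array([2, 5,  9, 12, 13, 11, 8, 5, 3, 2])

for label, series in [("Agile", agile_avg), ("Non-Agile", non_agile_avg)]:
    total, peak = fit_trend(months, series)
    print(f"{label}: ~{total:.0f} total defects, discovery peaks near month {peak:.1f}")
```

Overlaying the two fitted curves (for example with matplotlib) makes it easy to see whether both samples follow the same basic discovery pattern.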

Having a large project sample at hand and being curious, we decided to compare these metrics:

  • Mean time to defect (which measures the average time a system runs defect-free in the first month after implementation)
  • Average development time in months
  • Staffing
  • Cost/effort

In a nutshell, the Agile and non-Agile projects used very similar staff sizes.  The Agile projects completed sooner and expended slightly less effort.  Quality was where the two project sets differed significantly.  Pre-implementation, Agile projects recorded fewer defects than non-Agile ones.  However, post-implementation the non-Agile projects operated longer between discovering defects in production than did Agile projects.

How Can I Tell When My Software will be Reliable Enough to Deliver?

Usually when I am online making a payment or using social media, I am not thinking about software quality. But lately I feel like I have been encountering more bugs than usual: clicking a link that should take me to a payment form, running a search and getting an error message, or being redirected to a page that has nothing to do with what I set out to accomplish. These bugs are frustrating, and I started to wonder what could have been done to prevent them from being released into production.

Since I spend a lot of time speaking with people who manage software projects, I have noticed that quality is often one of the most overlooked aspects of a software system. People I've spoken with have mentioned that quality is frequently not even discussed during the early planning stages of a development project, yet it is usually the deciding factor in whether the software is ready to be released. It should be considered from the beginning of the project.

Using a tool like SLIM early in the planning stages of a project can help with these issues. Not only can it provide reliable cost and schedule estimates, but it can also estimate how many defects one can expect to find between system test and actual delivery, as well as the Mean Time to Defect (MTTD), the average time that elapses between discovered defects.

Software Defect Tracking
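
For readers unfamiliar with the metric, here is a minimal sketch of how MTTD can be computed from a defect log. The timestamps are hypothetical, and this illustrates only the metric itself, not SLIM's estimate of it.

```python
# Minimal sketch: compute Mean Time to Defect (MTTD) from a hypothetical
# log of defect discovery timestamps -- the average gap between
# consecutive defect discoveries.
from datetime import datetime

discovery_times = [                      # hypothetical data
    datetime(2024, 3, 1, 9, 30),
    datetime(2024, 3, 2, 14, 0),
    datetime(2024, 3, 5, 11, 15),
    datetime(2024, 3, 9, 16, 45),
]

gaps = [
    (later - earlier).total_seconds() / 86400.0   # gap in days
    for earlier, later in zip(discovery_times, discovery_times[1:])
]
mttd_days = sum(gaps) / len(gaps)
print(f"MTTD: {mttd_days:.1f} days")   # larger is better
```

A rising MTTD as testing progresses is one sign that the software is stabilizing toward its reliability goal.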

Agile On-Time, But Is It Reliable?

With agile projects, we hear a lot about the planning benefits of having a fixed number of people with a fixed number of sprints.  All great stuff when it comes to finishing on time and within budget. But one of the things we also need to focus on is the quality of the software.  We often hear stories about functionality getting put on hold because of reliability goals not being met.

There are agile estimation models available to help with this, and they can provide reliability forecasts at the release level, before the project starts or during those early sprints. They do so by leveraging historical data along with time-tested forecasting models that are built to support agile projects.

In the first view, you can see the estimate for the number of defects remaining. This is a big picture view of the overall release. Product managers and anyone concerned with client satisfaction can use these models to predict when the software will be reliable enough for delivery to the customer.

MTTD over Time

In the second view, you can see the total MTTD (Mean Time to Defect) and the MTTD by severity level. The MTTD is the amount of time that elapses between discovered defects. Each chart shows the months progressing on the horizontal axis and the MTTD (in days), which improves over time, on the vertical axis.

Mean Time to Defect
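
As a rough illustration of the numbers behind that second view, the sketch below groups a hypothetical defect log by calendar month and severity and computes an MTTD for each bucket; the field names and data are made up.

```python
# Minimal sketch: compute MTTD (in days) per calendar month and severity
# from a hypothetical defect log -- the kind of numbers an "MTTD over time"
# chart would plot.
from collections import defaultdict
from datetime import datetime

defect_log = [  # (discovery time, severity) -- hypothetical data
    (datetime(2024, 1, 3), "High"), (datetime(2024, 1, 5), "High"),
    (datetime(2024, 1, 9), "Low"),  (datetime(2024, 1, 20), "Low"),
    (datetime(2024, 2, 2), "High"), (datetime(2024, 2, 12), "High"),
    (datetime(2024, 2, 4), "Low"),  (datetime(2024, 2, 24), "Low"),
]

buckets = defaultdict(list)
for when, severity in sorted(defect_log):
    buckets[(when.strftime("%Y-%m"), severity)].append(when)

for (month, severity), times in sorted(buckets.items()):
    gaps = [(b - a).days for a, b in zip(times, times[1:])]
    if gaps:
        print(f"{month} {severity:>4}: MTTD = {sum(gaps) / len(gaps):.1f} days")
```

Plotting these values month by month, one series per severity, produces exactly the kind of charts described above.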

AI and Automation Make Software Reliability More Important Than Ever

This post was originally published on Linkedin. Join the QSM Linkedin Group and Company Page to stay up-to-date with more content like this.

If you were thinking about purchasing a driverless car, and the salesperson told you that there’s a “slight” chance that the car will fail during transit, would you still feel comfortable laying down your money? Or, if you faced an emergency, would you trust an automated robot to perform open-heart surgery, rather than the hands of a skilled physician?

While these questions might seem like the stuff of a science fiction novel, they’re quickly becoming a part of our normal, everyday world. We’re hearing a great deal about artificial intelligence and how it is replacing tasks that were once done by humans. AI is powered by software, and that software is becoming increasingly vital to our lives. This makes ensuring its reliability more important than ever.

But here’s a sobering thought: right now, IT operations teams are building software that is, on average, 95% reliable out the door. That’s right; today, a 5% unreliability gap is considered “good enough.”

New Article on InfoQ - Understanding Quality and Reliability

QSM's C. Taylor Putnam-Majarian and Doug Putnam recently published an article, Understanding Quality and Reliability, on InfoQ.

One of the most overlooked but important areas of software estimation, measurement, and assessment is quality. It often is not considered or even discussed during the early planning stages of development projects, but it is almost always the ultimate criterion for when a product is ready to ship or deploy. Therefore, it needs to be part of the expectation-setting conversation from the outset of the project. So how can we talk about product quality? It can be measured a number of ways, but two in particular give excellent insights into the stability of the product.

Read the full article on InfoQ!

New Article - Forecasting from Defect Signals

On large software development and acquisition programs, testing phases typically extend over many months. It is important to forecast the quality of the software at the future point when the schedule calls for testing to be complete. In this article, originally published in CrossTalk, Paul Below shows how Walter Shewhart's control charts can be applied to this problem: detecting a signal that indicates a significant change in the state of the software. That signal detection is then used to improve the mapping of project progress to forecast curves, and thereby to improve estimates of project schedule.
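
The article describes a specific application, but the underlying Shewhart idea can be sketched quickly: establish control limits from a baseline period of defect counts and flag later observations that fall outside them. The numbers below are hypothetical, and a simple c-chart is used here in place of the article's method.

```python
# Minimal sketch of a Shewhart c-chart on weekly defect counts:
# control limits come from a baseline period, and later weeks falling
# outside the limits are flagged as possible signals of a real change.
import math

baseline = [14, 11, 16, 13, 12, 15, 14, 13]     # hypothetical in-control weeks
recent   = [15, 12, 26, 10, 4]                  # hypothetical later weeks

c_bar = sum(baseline) / len(baseline)
ucl = c_bar + 3 * math.sqrt(c_bar)              # upper control limit
lcl = max(0.0, c_bar - 3 * math.sqrt(c_bar))    # lower control limit (floored at 0)

print(f"center {c_bar:.1f}, limits [{lcl:.1f}, {ucl:.1f}]")
for week, count in enumerate(recent, start=1):
    if count > ucl or count < lcl:
        print(f"week {week}: {count} defects -> signal (outside control limits)")
    else:
        print(f"week {week}: {count} defects -> common-cause variation")
```

A point outside the limits suggests a real change in the state of the software rather than ordinary week-to-week variation.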

Managing Project Risk through Early Defect Detection

Managing Software Project Risk

With the most recent spurt of inclement weather, there is really no denying that winter is here.  After awaking to about 4 inches of snow accumulation, I begrudgingly bundled myself up in my warmest winter gear and proceeded to dig out my car.  Perhaps the brisk air woke me up faster than usual because as I dug a path to the car, I began to think about software testing, specifically how effective early testing can reduce the chances of schedule slippages and cost overruns.  Allow me to explain.

Being an eternal optimist, I was grateful that the snow I was shoveling and later brushing off my car was light and powdery.  Despite the frigid temperature and the large quantity of snow, I realized it was good that I had decided to complete this task first thing in the morning.  At that hour the snow was relatively easy to clear; had I waited until the afternoon, the sun would have melted enough of it to make the job significantly more difficult and time-consuming.

New Article: Data-Driven Estimation, Management Lead to High Quality

Software projects devote enormous amounts of time and money to quality assurance. It's a difficult task, considering that most QA work is remedial in nature: it can correct problems that arise long before the requirements are complete or the first line of code has been written, but it has little chance of preventing those defects from being created in the first place. By the time the first bugs are discovered, many projects are already locked into a fixed scope, staffing, and schedule that do not account for the complex and nonlinear relationships between size, effort, and defects.

At this point, these projects are doomed to fail, but disasters like these can be avoided. When armed with the right information, managers can graphically demonstrate the tradeoffs between time to market, cost, and quality, and negotiate achievable deadlines and budgets that reflect their management goals. 
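
To see why those relationships are so nonlinear, consider the Putnam-style software equation that underlies tools like SLIM, in which, for a fixed size and productivity, effort grows roughly with the inverse fourth power of the schedule. The sketch below simply evaluates that simplified relationship with made-up numbers; it is not QSM's calibrated model.

```python
# Rough sketch of the Putnam-style software equation behind schedule/effort
# tradeoffs: Size = C * Effort^(1/3) * Time^(4/3), so for a fixed size and
# productivity parameter C, Effort = Size^3 / (C^3 * Time^4).  All numbers
# are hypothetical.
def effort_person_months(size_sloc, productivity, schedule_months):
    schedule_years = schedule_months / 12.0
    effort_person_years = size_sloc**3 / (productivity**3 * schedule_years**4)
    return effort_person_years * 12.0

SIZE = 75_000          # hypothetical new + modified SLOC
PRODUCTIVITY = 25_000  # hypothetical productivity parameter

for months in (12, 14, 16):
    print(f"{months} months -> ~{effort_person_months(SIZE, PRODUCTIVITY, months):.0f} person-months")
```

Even in this toy form, compressing the schedule from 16 months to 12 roughly triples the effort, which is why negotiating the deadline up front matters so much.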

Leveraging historical data from the QSM Database, QSM Research Director Kate Armel equips professionals with a replicable, data-driven framework for future project decision-making in an article recently published in Software Quality Professional.

Read the full article here.

They Just Don't Make Software Like They Used to… Or do they?

With the release of SLIM-Suite 8.1 quickly approaching, I thought I'd take a moment to share a preview of the updated QSM Default Trend Lines and how they affect your estimates.  In this post I wanted to focus on the differences in quality and reliability between 2010 and 2013 for the projects in our database.  Since our last database update, we've included over 200 new projects in our trend groups.

Here are the breakouts of the percent increases in the number of projects by Application Type:

  • Business Systems: 14%
  • Engineering Systems: 63%
  • Real Time Systems: 144%

Below you will find an infographic outlining some of the differences in quality between 2010 and 2013.

Changes in Software Project Quality between 2010 and 2013

From the set of charts above, we can see some emerging trends that indicate how quality changed between 2010 and 2013.  Looking at the data, it's apparent that two distinct stories are being told:

1. The Quality of Engineering Systems has Increased
