Quantitative Software Management (QSM) consultant James Heires recently discussed the benefits of estimating and forecasting software reliability at RAMS (Reliability & Maintainability Symposium) 2023. The conference theme: "Artificial Intelligence and Machine Learning (AI/ML) application to our R&M tools, techniques, and processes (and products) promises speed and scale.... When program management instantiates advanced techniques into R&M engineering activities, such as digital design and machine learning and other advanced analytics, it enables products to evolve at a much more proactive, effective, and cost-efficient approach. Ultimately it facilitates increased speed to market, adoption of new technology, and especially for repairable systems, products that are more reliable, maintainable, and supportable."
If you were unable to attend our recent webinar, "How to Estimate Reliability for On-Time Software Development," a replay is now available.
Software development is a major investment area for thousands of organizations worldwide. The negotiation and early planning meetings often revolve around major cost and schedule decisions. But one of the most important factors, reliability, often gets left behind in these early discussions. This is unfortunate, since early reliability estimates can help ensure that a quality product is delivered and predict whether it will finish on time and within budget. In this webinar, Keith Ciocco shows how to leverage the QSM model-based tools to estimate and track the important reliability numbers along with cost, scope, and schedule.
This presentation includes a lively Q&A session with the audience and covers such topics as:
On Wednesday, Dec. 8 at 1:00 PM EST, Keith Ciocco will present "How to Estimate Reliability for On-Time Software Development."
Software development is a major investment area for thousands of organizations worldwide. The negotiation and early planning meetings often revolve around major cost and schedule decisions. But one of the most important factors, reliability, often gets left behind in these early discussions. This is unfortunate, since early reliability estimates can help ensure that a quality product is delivered and predict whether it will finish on time and within budget. In this webinar, Keith Ciocco will show how to leverage the QSM model-based tools to estimate and track the important reliability numbers along with cost, scope, and schedule.
Keith Ciocco has more than 30 years of experience working in sales and customer service, with 25 of those years spent with QSM. As Vice President, his primary responsibilities include supporting QSM clients with their estimation and measurement goals, managing business development, and maintaining existing client relations. He has developed and directed the implementation of the sales and customer retention process within QSM and has played a leading role in communicating the value of the QSM tools and services to professionals in the software development, engineering, and IT industries.
Although the software industry is known for growth and change, one thing has remained constant: the struggle to reduce cost, improve time to market, increase quality and maintainability, and allocate resources most efficiently. So how can we combat future challenges in a world where everything is software, from the systems in your car to the thermostat in your home to the small computer in your pocket? By using practical measurement and metrics, we can get a bird's-eye view of where we've been and where we could go, while staying grounded in data. Leveraging QSM's industry database of more than 13,000 completed projects, Katie Costantini takes a high-level look at changes to software schedules, effort/cost, productivity, size, and reliability metrics from 1980 to 2019. The current study compares insights to similar studies QSM has completed at regular intervals over the past four decades and answers questions like, 'what is the "typical" project over time?' and 'why are projects "shrinking?"' The results may surprise you!
With agile projects, we hear a lot about the planning benefits of having a fixed number of people with a fixed number of sprints. All great stuff when it comes to finishing on time and within budget. But one of the things we also need to focus on is the quality of the software. We often hear stories about functionality getting put on hold because of reliability goals not being met.
There are agile estimation models available to help with this, and they can estimate reliability at the release level, before the project starts or during those early sprints. They do this by leveraging historical data along with time-tested forecasting models built to support agile projects.
In the first view, you can see the estimate for the number of defects remaining. This is a big picture view of the overall release. Product managers and anyone concerned with client satisfaction can use these models to predict when the software will be reliable enough for delivery to the customer.
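Forecasting models of this kind are often built on Rayleigh-family defect-discovery curves, as in the Putnam work underlying QSM's tools. The sketch below is my own illustration with made-up numbers, not QSM's implementation: cumulative defects found follow a Rayleigh curve, and whatever has not yet been found is the estimate of defects remaining.

```python
import math

def defects_remaining(total_defects: float, peak_month: float, month: float) -> float:
    """Rayleigh-style defect discovery: cumulative defects found by `month`
    follow K * (1 - exp(-t^2 / (2 * t_peak^2))). The remainder is the
    estimated number of latent defects still in the product."""
    found = total_defects * (1.0 - math.exp(-month**2 / (2.0 * peak_month**2)))
    return total_defects - found

# Hypothetical release: 500 total defects expected, discovery peaking at month 4.
for m in (2, 4, 8, 12):
    print(f"month {m:2d}: ~{defects_remaining(500, 4, m):.0f} defects remaining")
```

A product manager could compare the remaining-defect estimate against a delivery threshold (say, "ship when fewer than 20 severe defects are projected to remain") to decide when the software is reliable enough to release.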
In the second view, you can see the total MTTD (Mean Time to Defect) and the MTTD by severity level. The MTTD is the amount of time that elapses between discovered defects. Each chart shows the months progressing on the horizontal axis and the MTTD (in days), improving over time, on the vertical axis.
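MTTD itself is straightforward to compute from a defect log: average the gaps between consecutive discovery dates. The snippet below uses a hypothetical defect log, not data from any QSM tool.

```python
from datetime import date

def mean_time_to_defect(discovery_dates):
    """MTTD = average gap (in days) between consecutively discovered defects.
    A rising MTTD means defects are surfacing less often, i.e. the
    software is becoming more reliable."""
    dates = sorted(discovery_dates)
    gaps = [(b - a).days for a, b in zip(dates, dates[1:])]
    return sum(gaps) / len(gaps)

# Hypothetical log: five defects found with gaps of 1, 2, 4, and 9 days.
found = [date(2023, 3, 1), date(2023, 3, 2), date(2023, 3, 4),
         date(2023, 3, 8), date(2023, 3, 17)]
print(mean_time_to_defect(found))  # prints 4.0
```

Computing the same figure per severity level (one log per severity) reproduces the by-severity charts described above.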
With the most recent spurt of inclement weather, there is really no denying that winter is here. After awaking to about 4 inches of snow accumulation, I begrudgingly bundled myself up in my warmest winter gear and proceeded to dig out my car. Perhaps the brisk air woke me up faster than usual because as I dug a path to the car, I began to think about software testing, specifically how effective early testing can reduce the chances of schedule slippages and cost overruns. Allow me to explain.
Being an eternal optimist, I was grateful that the snow I was shoveling and later brushing off my car was light and powdery. Despite the frigid temperature and large quantity of snow, I realized that it was good that I had decided to complete this task first thing in the morning. At the time the snow was relatively easy to clear, and had I waited until the afternoon, the sun would have melted enough of the snow to make this task significantly more difficult and time consuming.
With the release of SLIM-Suite 8.1 quickly approaching, I thought I’d take a moment to share a preview of the updated QSM Default Trend Lines and how they affect your estimates. In this post, I want to focus on the differences in quality and reliability between 2010 and 2013 for the projects in our database. Since our last database update, we’ve added over 200 new projects to our trend groups.
Here are the breakouts of the percent increases in the number of projects by Application Type:
- Business Systems: 14%
- Engineering Systems: 63%
- Real Time Systems: 144%
Below you will find an infographic outlining some of the differences in quality between 2010 and 2013.
From the set of charts above, we can see some emerging trends that could indicate changes in quality between 2010 and 2013. Looking at the data, it’s apparent that two distinct stories are being told:
1. The Quality of Engineering Systems has Increased
Several weeks ago I read an interesting study on finding bugs in giant software programs:
The efficiency of software development projects is largely determined by the way coders spot and correct errors.
But identifying bugs efficiently can be a tricky business, when the various components of a program can contain millions of lines of code. Now Michele Marchesi from the University of Cagliari and a few pals have come up with a deceptively simple way of efficiently allocating resources to error correction.
...Marchesi and pals have analysed a database of Java programs called Eclipse and found that the size of these programs follows a lognormal distribution. In other words, the database, and by extension any large project, is made up of lots of small programs but only a few big ones.
So how are errors distributed among these programs? It would be easy to assume that the errors are evenly distributed per 1000 lines of code, regardless of the size of the program.
Not so, say Marchesi and co. Their study of the Eclipse database indicates that errors are much more likely in big programs. In fact, in their study, the top 20 per cent of the largest programs contained over 60 per cent of the bugs.
That points to a clear strategy for identifying the most errors as quickly as possible in a software project: just focus on the biggest programs.
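To see why a lognormal size distribution concentrates bugs this way, here is a toy simulation of my own, not Marchesi's code. It assumes, as a deliberate simplification, that expected defects grow in proportion to module size; even under that simple assumption, the skew of the lognormal puts well over half the defects in the largest 20 per cent of modules.

```python
import random

def top_share(sizes, defect_counts, top_frac=0.2):
    """Fraction of all defects that live in the largest `top_frac` of modules."""
    ranked = sorted(zip(sizes, defect_counts), reverse=True)
    k = max(1, int(len(ranked) * top_frac))
    top_defects = sum(d for _, d in ranked[:k])
    return top_defects / sum(defect_counts)

random.seed(42)
# 10,000 modules with log-normally distributed sizes (in lines of code);
# expected defects are taken to be proportional to size -- a simplification,
# not the empirical defect model from the Eclipse study.
sizes = [random.lognormvariate(5, 1.2) for _ in range(10_000)]
defects = [size / 100 for size in sizes]

print(f"top 20% of modules hold {top_share(sizes, defects):.0%} of defects")
```

The exact percentage depends on the spread of the size distribution, but the qualitative conclusion matches the quoted study: ranking modules by size and testing the biggest first covers a disproportionate share of the bugs.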
Nicole Tedesco adds her thoughts: