Practical Software Measurement
Software projects devote enormous amounts of time and money to quality assurance. It's a difficult task, considering most QA work is remedial in nature: it can correct problems after they arise, but has little chance of preventing defects from being created in the first place. Many defects have their roots in decisions made long before the requirements are complete or the first line of code has been written. By the time the first bugs are discovered, many projects are already locked into a fixed scope, staffing, and schedule that do not account for the complex and nonlinear relationships between size, effort, and defects.
At this point, these projects are all but doomed to fail, yet disasters like these can be avoided. Armed with the right information, managers can graphically demonstrate the tradeoffs between time to market, cost, and quality, and negotiate achievable deadlines and budgets that reflect their management goals.
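To make that nonlinear tradeoff concrete, here is a minimal sketch in Python. It assumes a Putnam-style power-law relationship of the general form size = productivity × effort^(1/3) × schedule^(4/3); the productivity constant, the units, and the project size are purely illustrative values, not calibrated QSM figures.

```python
# Illustrative only: a Putnam-style software equation,
#   size = productivity * effort**(1/3) * schedule**(4/3)
# rearranged to show how compressing the schedule inflates effort.
# The productivity constant, units, and size are made-up values.

def effort_person_months(size_fp, schedule_months, productivity=12.0):
    """Effort implied by a power-law size/effort/schedule relationship."""
    return (size_fp / (productivity * schedule_months ** (4 / 3))) ** 3

size = 1000  # function points (hypothetical project)
for months in (10, 12, 14, 16):
    pm = effort_person_months(size, months)
    print(f"{months:>2} months -> {pm:7.1f} person-months")
```

The exact numbers are arbitrary; the point is that effort responds nonlinearly to schedule compression, which is why a fixed scope, staffing, and schedule rarely all hold at once.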
Recently I conducted a study of projects sized in function points, covering projects put into production from 1990 to the present, with a focus on those completed since 2000. For an analyst like me, one of the fun things about a study like this is that you can identify trends and then consider possible explanations for why they are occurring. A notable trend from this study of over 2,000 projects is that productivity, whether measured in function points per person month (FP/PM) or hours per function point, is about half of what it was in the 1990 to 1994 time frame.
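For readers who want to see how the two productivity measures relate, here is a minimal sketch; the project figures and the hours-per-person-month conversion are hypothetical, chosen only to illustrate the arithmetic.

```python
# Hypothetical project figures, just to show how the two metrics relate.
size_fp = 400          # delivered size in function points
effort_pm = 50         # total effort in person-months
hours_per_pm = 152     # assumed working hours in one person-month

fp_per_pm = size_fp / effort_pm                       # 8.0 FP per person-month
hours_per_fp = (effort_pm * hours_per_pm) / size_fp   # 19.0 hours per function point

print(f"{fp_per_pm:.1f} FP/PM, {hours_per_fp:.1f} hours per FP")
```

The two measures move inversely: if FP/PM falls by half, hours per function point doubles, so both views of the data tell the same productivity story.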
With the release of SLIM-Suite 8.1 quickly approaching, I thought I'd take a moment to share a preview of the updated QSM Default Trend Lines and how they affect your estimates. In this post I want to focus on the differences in quality and reliability between 2010 and 2013 for the projects in our database. Since our last database update, we've added over 200 new projects to our trend groups.
Here are the breakouts of the percent increases in the number of projects by Application Type:
- Business Systems: 14%
- Engineering Systems: 63%
- Real Time Systems: 144%
Below you will find an infographic outlining some of the differences in quality between 2010 and 2013.
Version 5.0 of QSM's Function Point Gearing Factor table is live!
Having worked in sales and customer service at QSM for over 17 years, I speak to hundreds of professionals each year who are directly or indirectly involved with software development projects using many different development processes. One of the things that I hear from time to time is that estimating is not as important when working with more iterative development methodologies. Some of the reasons I hear most often are that “team sizes are smaller,” “work can be deferred until the next iteration,” “we are different,” and “we are agile.”
Large companies often seem to have a few people in key positions with extra time on their hands. Occasionally, this time is used to invent acronyms that are supposed to embody corporate ideals. Mercifully, these usually fade away in time. A former employer of mine had two beauties: LOCOPRO (Low Cost Provider) and BEGOR (Best Guaranteer of Results). Unfortunately, besides being grating on the ear, LOCOPRO and BEGOR don’t always march in tandem. LOCOPRO deals with cost and the effort required to deliver something. BEGOR is a bit more amorphous, dealing with quality and an organization’s efficiency and consistency in meeting requirements.
What are the normal requirements for a software project? Here’s my short list.
Monday morning I received an email that read:
You can set your clocks to it: the birds flying north for spring, daylight savings time, and this email being sent on the Sunday before the tournament begins. That's right, March Madness is upon us my friends, and we’ve officially made it through winter.
The message continued with details about how to participate, but as you can see, it’s time for QSM’s annual March Madness tournament. So how do I justify spending company time filling out brackets? By blogging about how this is actually related to project management. As I went through the exercise of predicting the course of this tournament, I realized that many of the thoughts I had also go through the minds of project managers.
It’s easy to get confused or overly concerned about measuring velocity. Actually, the concept is almost embarrassingly simple: velocity in Agile is just the number of units of work completed in a given interval. As in many fields, Agile proponents appropriated existing terminology.
Here is one typical definition, from agilesoftwaredevelopment.com:
In Scrum, Velocity is how much product backlog effort a team can handle in one Sprint. Velocity is usually measured in story points or ideal days per Sprint… This way, if the team delivered software for 30 story points in the last Sprint their Velocity is 30.
Velocity, as a capacity planning tool in Agile software development, is calculated from the results of several completed sprints. That velocity is then used to plan future sprints.
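As a concrete sketch of that calculation, here is a small Python example. The sprint totals and remaining backlog are invented, and the averaging rule (a plain mean of recent sprints) is just one common convention, not the only way teams do it.

```python
import math

# Hypothetical story-point totals from the last several completed sprints.
completed_sprints = [28, 32, 30, 27, 33]

# Velocity as a planning number: average points delivered per sprint.
velocity = sum(completed_sprints) / len(completed_sprints)  # 30.0

# Using it to plan: how many sprints does the remaining backlog imply?
remaining_backlog = 240  # story points left (made-up figure)
sprints_needed = math.ceil(remaining_backlog / velocity)

print(f"velocity ~{velocity:.1f} points/sprint, "
      f"~{sprints_needed} sprints for {remaining_backlog} points")
```

Nothing about the idea is more complicated than that; the judgment comes in deciding how stable the historical sprints are and how far ahead the average can reasonably be projected.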
In his book, The Functional Art, Alberto Cairo sets out to explain what data visualizations are, why it is significant to pair data and design, and how to assess whether a data visualization is "good" or not. In the first chapter, Cairo presents an example from Matt Ridley's book, The Rational Optimist: How Prosperity Evolves. Ridley asserted that the rate of global population growth was decreasing over time, using only one line chart.