Stripped down to the bare bones, value in software estimation measures the functionality that a software product provides to its users (both human and non-human), while production cost measures the work required to deliver that functionality. Software estimates need to account for both. Examples of non-functional cost items include configuration, throw-away code, cloud architecture, and quality requirements. Size measures such as IFPUG and NESMA function points quantify value (delivered functionality) and are recognized as functional size measures. Both intentionally ignore technical requirements. They can be very useful for asset management, measuring scope creep on a project, or assessing software quality (defect density per delivered unit). For estimating, they are an important input, but one that needs to be supplemented to reflect the non-functional cost factors: what must be done behind the scenes to create that functionality.
Donald Beckett's blog
During a recent consulting engagement, a customer asked if the QSM defect discovery model applied to Agile projects. Of course, the best (and only) way to determine this was empirically. From our database we extracted a sample of business IT projects that had completed since 2013 that recorded pre-implementation defects. 81 of these projects were Agile and 354 did not specify Agile as their development methodology. We created average trend lines for both datasets and they displayed very similar patterns that conformed to the QSM defect discovery model. This allowed us to answer our customer’s question affirmatively.
Having a large project sample at hand and being curious, we decided to compare these metrics:
- Mean time to defect (which measures the average time a system runs defect-free in the first month after implementation)
- Average development time in months
In a nutshell, the Agile and non-Agile projects used very similar staff sizes. The Agile projects completed sooner and expended slightly less effort. Quality was where the two project sets differed significantly. Pre-implementation, Agile projects recorded fewer defects than non-Agile ones. However, post-implementation the non-Agile projects operated longer between discovering defects in production than did Agile projects.
Sound financial practices are a core value of any successful enterprise, and rightly so. It may come as a surprise, then, that monitoring money spent against planned expenditures is not the best way to evaluate the progress of software projects. The reason is simple: by the time financial measures indicate that a project is off track, it is often too late to take effective corrective action or identify alternative courses of action.
Here is an example that illustrates this. Let’s take a hypothetical project plan with these characteristics:
- Planned project duration of 1 year
- Full time staff of 6 for the length of the project
- Billing rate of $100/hour
- 335 business requirements to complete
- Project begins at the start of June and is scheduled to complete May 31 of the following year
According to this plan, the project should have a labor cost of $1.245 million. Now, using a software project monitoring tool, SLIM-Control, let’s see what the project looks like at the end of September.
If we only look at money spent, the project is on track since planned and actual expenditures are exactly the same. However, when we look at the progress of the actual work completed, a different story emerges. The project got off to a slow start and the gap between what was planned and what has been delivered has increased every month. Unless this is rectified, the project will last longer and cost more than originally planned. Here is a forecast of what will happen if the current trend continues. The project will complete over two months late and cost an additional $215,000.
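The arithmetic behind this plan, and a naive straight-line forecast of the kind described above, can be sketched in a few lines. The 173 billable hours per month and the September progress figures below are illustrative assumptions of mine, not values taken from the article:

```python
HOURS_PER_MONTH = 173   # assumed billing convention (~2,076 hours/year)

STAFF = 6
RATE = 100              # dollars per hour
PLAN_MONTHS = 12
PLAN_REQS = 335         # business requirements to complete

# Planned labor cost: 6 staff * 173 h/month * 12 months * $100/h ≈ $1.245M
planned_cost = STAFF * HOURS_PER_MONTH * PLAN_MONTHS * RATE

def linear_forecast(reqs_done, months_elapsed):
    """If the completion rate seen so far continues unchanged,
    project the final duration and total labor cost."""
    rate_per_month = reqs_done / months_elapsed
    forecast_months = PLAN_REQS / rate_per_month
    forecast_cost = STAFF * HOURS_PER_MONTH * forecast_months * RATE
    return forecast_months, forecast_cost

# Hypothetical status at the end of September (4 months in, behind plan):
months, cost = linear_forecast(reqs_done=95, months_elapsed=4)
overrun = cost - planned_cost   # extra spend implied by the slow burn-up
```

With these illustrative inputs the forecast lands a little over two months past the 12-month plan, with an overrun in the low $200,000s: the same shape of result the article reports, driven entirely by work completed rather than money spent.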
In all production environments, there exists a tension between competing outcomes. Four variables come to mind:
- Schedule (time to deliver)
- Cost/effort
- Quality (defects)
- Scope (the amount of product delivered)
These do not exist independently of one another; emphasizing any one impacts the others. For example, to compress a project’s schedule, additional staff is typically added, which increases the cost. A larger team also increases communication complexity within a project, which leads to more defects (lower quality). The development of software presents a unique issue that may not be present, or is at least more muted, in manufacturing: non-linearity. Key examples are the relationship between cost/effort and schedule, and the relationship between schedule and quality.
Let’s look at some examples. In the charts below, regression trend lines for schedule and effort vs. size were developed from the QSM software project database. The darker center lines represent average schedule and effort outcomes as delivered product size grows. The lighter lines are plus and minus 1 standard deviation. Roughly 2/3 of the projects in the database fall between the standard deviation lines. Note the scale on the axes, which is log-log. This is because the relationship between the amount of software developed and schedule duration or effort is non-linear.
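A quick way to see why log-log scales produce straight trend lines is to fit a power law, effort = a × size^b, by least squares on the logarithms. The data points below are illustrative, not QSM's, but they yield the kind of exponent (b > 1) that makes effort grow faster than size:

```python
import math

# Illustrative (size in implementation units, effort in person-months) pairs.
data = [(1_000, 3), (10_000, 40), (100_000, 600)]

# Fit effort = a * size**b  <=>  log(effort) = log(a) + b*log(size)
xs = [math.log(size) for size, _ in data]
ys = [math.log(effort) for _, effort in data]
n = len(data)
x_mean, y_mean = sum(xs) / n, sum(ys) / n
b = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys))
     / sum((x - x_mean) ** 2 for x in xs))
a = math.exp(y_mean - b * x_mean)

# b > 1 signals diseconomies of scale: doubling the delivered size
# more than doubles the effort required.
```

On a log-log chart this fit is a straight line with slope b, which is why the database trend lines and their ±1 standard deviation bands appear as parallel lines.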
[Charts: a 6.5 month solution compared with a 5.85 month solution]
We hear a lot about software projects that are too large or attempt to do too much in too short a time. They are very visible, and when they fail they damage both budgets and careers. Small projects, by contrast, may fly under the radar. Ignoring them is a mistake. Most IT projects aren’t large undertakings like Healthcare.gov; rather, they are enhancements and customizations to existing software systems, and they account for the majority of most enterprises’ software budgets. Planning these projects to be optimally productive is an area in which most companies can realize the greatest returns.
How do you know what is the optimal amount of software to develop in a project? In a newly published software benchmark study, QSM analyzed the productivity, cost/effort, and time to market of a large sample (over 600) of recently completed business IT projects. The projects were divided into quartiles based on the amount of software they developed or customized, and the quartiles were then compared to each other. Fully ¼ of the projects were smaller than 3,200 implementation units (IU) in size, or 68 function points for projects that used that size measure. Projects in this quartile had a median productivity of 200 IU per staff month (or 5 function points per staff month), and their median duration was slightly more than 3 months. The second quartile contained projects from 3,200 IU up to 8,000 (or 69 to 149 function points). These projects had a median productivity of 377 IU per staff month (or 7.62 function points per staff month) and lasted a little more than 5 months. That is a productivity improvement of 89%: the smaller projects were markedly less productive. Simply by bundling software work into larger packages, there are significant efficiencies to be gained.
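The 89% figure follows directly from the two quartile medians quoted above; a one-line sanity check using the IU numbers from the study:

```python
q1_productivity = 200   # IU per staff month, median of the smallest quartile
q2_productivity = 377   # IU per staff month, median of the second quartile

# Relative improvement from the first quartile to the second
improvement = (q2_productivity - q1_productivity) / q1_productivity * 100
# 88.5%, which rounds to the 89% quoted in the study
```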
Scaled Agile (SAFe) is a methodology that applies Agile concepts to large complex environments. QSM recently worked with an organization that had implemented SAFe to develop an estimation methodology specifically tailored to it. This article discusses how it was implemented.
Software estimation typically addresses three concerns: staffing, cost/effort, and schedule. In the SAFe environment, however, development is done in program increments (PI), which in this case were three months long with two-week sprints throughout. Staffing was set at a predetermined level and varied very little during the PI. Thus, the three variables that are normally estimated (staff, cost/effort, and schedule) had already been determined in advance. So our job was done, right? Wrong! What remained to be determined was capacity: the amount to be accomplished in a single PI. And that was a major pain point for the organization.
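One way to frame the capacity question is to take a Putnam-style software equation, size = PP × effort^(1/3) × duration^(4/3) (effort in person-years, duration in years), and solve it forward for size, since staff and schedule are fixed by the PI cadence. This is a sketch of the idea, not the methodology QSM actually delivered; the productivity parameter value is purely hypothetical and would in practice be calibrated from the organization's completed PIs:

```python
def pi_capacity(productivity_parameter, staff, pi_months):
    """Putnam-style capacity estimate: with staffing and PI length fixed,
    the only free variable left is the amount of software delivered."""
    years = pi_months / 12
    effort_person_years = staff * years
    return productivity_parameter * effort_person_years ** (1 / 3) * years ** (4 / 3)

# Hypothetical: a 10-person team, a 3-month PI, and a calibrated parameter of 2000
capacity = pi_capacity(productivity_parameter=2000, staff=10, pi_months=3)
```

Because the exponents are fractional, capacity does not scale linearly with either staff or PI length, which is exactly why teams that size PIs by gut feel tend to over-commit.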
I have been a software project estimator for 20 years. Like many people who have worked a long time in their profession, I find myself applying my work experience to other events in my life. So, when a family member tells me that he or she will be back from a trip into town at 3:30, I look at their past performance (project history) and what they propose to do (project plan) and add an hour. Usually, I am closer to the mark than they are.
There is an old adage that if your only tool is a hammer, everything looks like a nail. We use the lessons learned and experience we have gained to address current issues. But if the problem (or software project) we face today is fundamentally different from those we’ve dealt with previously, past experience isn’t the proper framework. In effect, we will be using a hammer when a saw or a chisel might be the tools we need.
The solution, of course, is to first gain an understanding of the problem at hand. What are its defining features? How does it behave? Only then can a proper solution be designed and the appropriate tools selected.
To a large degree, our understanding of how products are developed comes from knowledge gained from manufacturing since the beginning of the Industrial Revolution. Our first instinct is to apply those lessons to software development. But there is a huge problem with this approach: the creation of software is not a manufacturing process, but rather a knowledge acquisition and learning process that follows different rules. Here is a simple example. If I have an assembly line and want to double my output, I have several choices: I might add a second shift of workers, or I could install an additional assembly line. Because manufacturing is a repetitive process in which design problems are solved before product construction begins, the relationship between labor required and output remains fairly constant. In a nutshell, we already know exactly what we need to do (and how to do it).
Some years ago, the large systems integrator I worked for brought in a new CEO in an attempt to jump start the company. We had lost our position as number one in the industry and leadership had become stagnant and ingrown. The new CEO, who did not have a software background, liked to promise that we could deliver our projects “Faster, Better, and Cheaper." That sounds wonderful, but is rapid process improvement in three dimensions really possible? The short answer is “No” – at least not in the short term. Here’s why.
To deliver a software project faster one of two things has to occur:
- Productivity must increase or
- More effort (cost) must be applied to the project.
Increasing productivity is a long term strategy that entails improving how the organization works. It has nothing to do with mandating unpaid overtime or telling developers to “work smarter.” In fact, those strategies are usually counterproductive.
I am a professional software project estimator. While not blessed with genius, I have put in sufficient time that by Malcolm Gladwell’s 10,000 hour rule, I have paid my dues to be competent. In the past 19 years I have estimated over 2,000 different software projects. For many of these, the critical inputs were provided and all I had to do was the modeling. Other times, I did all of the leg work, too: estimating software size, determining critical constraints, and gathering organizational history to benchmark against. One observation I have taken from years of experience is that there is overwhelming pressure to be more precise with our estimates than can be supported by the data.
In 2010 I attended a software conference in Brazil. As an exercise, the participants were asked to estimate the numerical ranges into which 10 items would fall. The items were such disparate things as the length of coastline in the United States, the gross domestic product of Germany, and the area in square kilometers of the country of Mali: not things a trivia expert would be likely to know offhand. Of 150 participants, only one person made all of the ranges wide enough. One other person (me) got 9 out of 10 correct.