Benchmarking

How's Your Metrics Program Doing?

"Everything should be made as simple as possible, but not simpler."

- Albert Einstein

How's your software measurement program doing? Is it well funded and supported by management, or do you worry about your job the next time the organization decides it needs to be "leaner and meaner"? Many measurement programs are cancelled or fade into meaningless obscurity. Why? Some things are out of your control, but a few practices will improve your odds of success.

Blog Post Categories: Metrics, Benchmarking

Simpson's Paradox

Last week we looked at IT software productivity trends for 1000 completed IT systems and noted that average productivity has declined over the last 15 years.

The post sparked some interesting responses. Two readers wanted to know whether productivity actually increases over time for projects in the same size range. If so, this would be an illustration of Simpson's Paradox, a counterintuitive phenomenon we've seen from time to time in our own research: sometimes the direction of a trend reverses when the sample is broken into categories.
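To see how the paradox can arise, here is a minimal sketch with invented numbers (not QSM data): within each size bin productivity improves between two periods, yet the overall average declines because the project mix shifts toward larger, lower-productivity projects.

```python
# Toy illustration of Simpson's Paradox -- all numbers are invented, not QSM data.
# Within each size bin, SLOC/PM improves from period 1 to period 2, but the
# overall average declines because period 2 contains mostly large projects.
samples = {
    # (size bin, period): (total effective SLOC, total person-months)
    ("small", 1): (100_000, 100),   # 1000 SLOC/PM
    ("small", 2): (22_000, 20),     # 1100 SLOC/PM (improved)
    ("large", 1): (40_000, 100),    #  400 SLOC/PM
    ("large", 2): (180_000, 400),   #  450 SLOC/PM (improved)
}

for period in (1, 2):
    sloc = sum(s for (_, p), (s, _) in samples.items() if p == period)
    pm = sum(e for (_, p), (_, e) in samples.items() if p == period)
    print(f"Period {period}: overall {sloc / pm:.0f} SLOC/PM")

# Period 1: overall 700 SLOC/PM
# Period 2: overall 481 SLOC/PM  <- aggregate trend reverses the per-bin trends
```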

To answer their question, I used our SLIM-Metrics tool to stratify the sample into four size bins:

  • Under 5,000 Effective (new + modified) SLOC
  • 5,000 to <10,000 Effective (new + modified) SLOC
  • 10,000 to <20,000 Effective (new + modified) SLOC
  • 20,000 to <30,000 Effective (new + modified) SLOC
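SLIM-Metrics performs this stratification directly; for readers who want to reproduce the idea on their own data, here is a rough pandas equivalent. The file and column names (it_projects.csv, eff_sloc, person_months, year) are hypothetical, not QSM's schema.

```python
import pandas as pd

# Hypothetical project history: one row per completed project.
projects = pd.read_csv("it_projects.csv")  # columns: eff_sloc, person_months, year

# Stratify into the four size bins used above: [0,5K), [5K,10K), [10K,20K), [20K,30K).
bins = [0, 5_000, 10_000, 20_000, 30_000]
labels = ["<5K", "5K-<10K", "10K-<20K", "20K-<30K"]
projects["size_bin"] = pd.cut(projects["eff_sloc"], bins=bins, labels=labels, right=False)

# Productivity per project, then the average per bin per year. The Simpson's
# Paradox question is whether the trend over years reverses within any bin.
projects["sloc_per_pm"] = projects["eff_sloc"] / projects["person_months"]
trend = (projects.dropna(subset=["size_bin"])
                 .groupby(["size_bin", "year"], observed=True)["sloc_per_pm"]
                 .mean()
                 .unstack("year"))
print(trend)
```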

These four size bins span a little over two-thirds of the data. As a sanity check, I applied the same queries to both the original sample of 1000 IT projects and a larger sample of nearly 2200 IT projects. As the following chart shows, stratifying the data into size bins doesn't affect the overall direction of the trend:

Productivity over Time

For conventional productivity (FP per Person Month), the decline was even more pronounced:

FP per PM over time

Blog Post Categories: Metrics, Benchmarking, Productivity

Software Mythbusters: The Single Version of the Truth

Recently I attended a seminar on a commercial reporting and data sharing product. In the sales material and discussion, the phrase “Single Version of the Truth” was used several times. But what does it mean?

“In computerized business management, svot, or Single Version of the Truth, is a technical concept describing the data warehousing ideal of having either a single centralised database, or at least a distributed synchronised database, which stores all of an organisation's data in a consistent and non-redundant form.” - Wikipedia

The concept is attractive to decision makers who collect and analyze information from multiple departments or teams. Here's why:

“Since the dawn of MIS (Management Information Systems), the most important objective has been to create a single version of the truth. That is, a single set of reports and definitions for all business terms, to make sure every manager has the same understanding.”

Sounds simple, doesn't it? Sales pitches for SVOT imply that if distributed data sources were linked into a single master repository, the problem of unambiguous, consistent reporting and analysis would be solved. Yet in practice, reports are often based on different data using different definitions, different collection processes, and different reporting criteria.
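Here is a contrived sketch of why centralizing data alone doesn't produce a single version of the truth: two teams query the same repository but define "defect rate" differently. All names and numbers below are invented for illustration.

```python
# Two reports against the *same* central data, with different definitions of
# "defect rate" -- everything here is invented for illustration.
defects = [
    {"severity": "high", "found_in": "test"},
    {"severity": "low",  "found_in": "test"},
    {"severity": "high", "found_in": "production"},
]
ksloc = 10.0  # thousands of delivered effective SLOC

# Team A counts every defect, wherever it was found.
rate_a = len(defects) / ksloc

# Team B counts only high-severity defects that escaped to production.
rate_b = sum(1 for d in defects
             if d["severity"] == "high" and d["found_in"] == "production") / ksloc

print(f"Team A: {rate_a:.1f} defects/KSLOC")  # 0.3
print(f"Team B: {rate_b:.1f} defects/KSLOC")  # 0.1 -- same database, different "truth"
```

Both reports are "correct" against the central repository; the versions of the truth diverge in the definitions, not in the storage.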

Blog Post Categories: Benchmarking, Software Mythbusters

Code Counters and Size Measurement

Regardless of which size measures (Effective SLOC, function points, objects, modules, etc.) your organization uses to measure software size, code counters provide a fast and easy way to measure developed functionality. If your organization uses Effective (new and modified) SLOC, the output from an automated code counter can generally be used "as is". If you use more abstract size measures (function points or requirements, for example), code counts can be used to calculate gearing factors such as average SLOC/FP or SLOC/requirement.
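As a sketch of the arithmetic, a gearing factor can be derived from completed projects that have both a code count and a function point count. The project names and numbers below are invented.

```python
# Derive a SLOC-per-function-point gearing factor from completed projects that
# have both an automated code count and a function point count.
# All project data below is invented for illustration.
history = [
    {"name": "billing", "eff_sloc": 52_000, "function_points": 480},
    {"name": "claims",  "eff_sloc": 31_000, "function_points": 310},
    {"name": "portal",  "eff_sloc": 18_500, "function_points": 150},
]

# Ratio of sums weights larger projects more heavily than averaging the
# per-project ratios would.
gearing = sum(p["eff_sloc"] for p in history) / sum(p["function_points"] for p in history)
print(f"Gearing factor: {gearing:.0f} SLOC/FP")  # ~108 SLOC/FP

# Apply the factor to size a new estimate expressed in function points.
estimated_fp = 400
print(f"Estimated size: {estimated_fp * gearing:,.0f} effective SLOC")
```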

The QSM Code Counters page has been updated and extended to include both updated version information and additional code counters. Though QSM neither endorses nor recommends the use of any particular code counting tool, we hope the code counter page will be a useful resource that supports both size estimation and the collection of historical data.

Blog Post Categories: Benchmarking, Software Sizing, Estimation

Replay Now Available for QSM's High Performance Benchmark Consortium Webinar

Our recent webinar, "Introduction to the High Performance Benchmark Consortium," was a great success, and we are already looking forward to planning our next presentation. Joe Madden received many insightful questions about our new consulting program. We know your time is valuable and scheduling can be a challenge, so we have recorded a replay, including the Q&A, for anyone who was unable to attend the scheduled webinar.

To view the replay, click here.

Blog Post Categories: Webinars, Benchmarking, Consulting

High Performance Benchmark Consortium Webinar Announced

I am pleased to announce that on Thursday, February 25 at 1:00 PM EST, QSM will host a webinar introducing our new High Performance Benchmark Consortium.

QSM has introduced a program specifically designed to help software development or acquisition organizations quantify and demonstrate performance improvement over time. The High Performance Benchmark Consortium is for clients who want to be best-in-class software producers and are willing to be active participants in the program. In today's economic environment, it is more important than ever for both suppliers and acquirers to compete effectively and provide value to their customers. Members of the Consortium gain access to proprietary research that leverages the QSM historical benchmark database of over 8,000 validated software projects.

Presented by benchmarking expert and head of QSM Consulting, Joe Madden, this webinar will discuss:

  • the major components of the program
  • the different levels of membership participation
  • the benefits of being a member
  • sample deliverables that a typical member would receive

To register for this event, simply follow this link and click "Register."

Blog Post Categories: Webinars, Benchmarking

Performance Benchmarking Tables

QSM consultant Paul Below has posted some quick performance benchmarking tables for IT, engineering class, and real time software.

The tables contain average values for the following metrics at various size increments:

  • Schedule (months)
  • Effort (Person Months)
  • Average Staff (FTE)
  • Mean Time to Defect (Days)
  • SLOC/PM

Two insights that jump out right away:

1. Application complexity is a big productivity driver. IT (Business) software solves relatively straightforward, well-understood problems. As algorithmic complexity increases, average duration, effort, and team size grow rapidly compared to IT systems of the same size.

2. Small teams and small projects produce fewer defects. Projects over 100,000 effective (new and modified) source lines of code all averaged a Mean Time to Defect of under one day. We see this over and over again in the QSM database: small projects with small teams consistently produce higher reliability at delivery.
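For readers new to the metric, Mean Time to Defect is simply the average elapsed time between consecutive defect discoveries after delivery; the longer the gap, the more reliable the system. A minimal sketch with invented dates:

```python
from datetime import date

# Mean Time to Defect (MTTD): average gap between consecutive defect
# discoveries. Dates below are invented for illustration.
discoveries = sorted([
    date(2010, 3, 1), date(2010, 3, 4), date(2010, 3, 10), date(2010, 3, 19),
])

gaps = [(later - earlier).days
        for earlier, later in zip(discoveries, discoveries[1:])]
mttd = sum(gaps) / len(gaps)
print(f"MTTD: {mttd:.1f} days")  # 6.0 -- higher is better (defects arrive less often)
```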

Blog Post Categories: Benchmarking