If you were unable to attend our recent webinar, "Using Benchmarking to Quantify the Benefits of Process Improvement," a replay is now available.
On Thursday, Feb. 7, at 1:00 PM EST, Larry Putnam, Jr. will present "Using Benchmarking to Quantify the Benefits of Process Improvement."
Effort seems like a straightforward metric, but there is a lot of complexity behind it, particularly if you are performing benchmark analysis. Recently, I was tapped to help out with a benchmark assessment. One of the metrics the customer wanted to analyze was effort per function point. "Effort" on its own is vague, and while the customer might know which phases or activities his organization includes, I couldn't be sure his definition would match mine. To benchmark effectively, we need an apples-to-apples comparison based on what is really behind the effort number, so it was necessary to send the client our phase and activity definitions.
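To see why the phase definitions matter, here is a minimal sketch in Python; the phase names and figures are invented for illustration, not drawn from an actual assessment. It shows how the same project yields noticeably different effort-per-function-point numbers depending on which phases are counted:

```python
# Hypothetical phase-level effort (person-hours) for a single project.
# Phase names and values are invented for illustration.
effort_by_phase = {
    "requirements": 800,
    "design": 1200,
    "code_and_unit_test": 3000,
    "integration_test": 1000,
    "deployment": 400,
}
function_points = 250

def effort_per_fp(phases_included):
    """Effort per function point, counting only the listed phases."""
    total = sum(effort_by_phase[p] for p in phases_included)
    return total / function_points

# One shop counts only design through integration test:
print(effort_per_fp(["design", "code_and_unit_test", "integration_test"]))  # 20.8
# Another counts the full lifecycle:
print(effort_per_fp(list(effort_by_phase)))  # 25.6
```

The two figures differ by more than 20 percent, so a benchmark that mixes the two definitions is not comparing like with like.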
Here are some definitions to help you understand which activities are included in each phase:
"Everything should be made as simple as possible, but not simpler."
- Albert Einstein
How’s your software measurement program doing? Is it well funded and supported by management, or do you worry about your job the next time the organization decides it needs to be “leaner and meaner”? Many measurement programs are cancelled or fade into meaningless obscurity. Why? Some things are out of your control, but here are a few things you can do to improve your odds of success:
Last week we looked at IT software productivity trends for 1000 completed IT systems and noted that average productivity has declined over the last 15 years.
The post sparked some interesting responses. Two readers wanted to know whether productivity actually increases over time for projects in the same size range. If so, this would be an illustration of Simpson's Paradox: a counterintuitive phenomenon we've seen from time to time in our own research. Simply put, sometimes the direction of a trend reverses when the sample is broken into categories.
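To make the paradox concrete, here is a toy sketch with entirely invented numbers (not our dataset): productivity improves within each size bin between two years, yet the overall average declines because the project mix shifts toward larger, lower-productivity work.

```python
# Toy illustration of Simpson's Paradox using invented numbers (not QSM data).
# Productivity rises over time *within* each size bin, yet the overall average
# falls because the project mix shifts toward larger, lower-productivity work.

# (year, size_bin, mean_productivity, project_count) -- all values invented
samples = [
    (2000, "small", 10.0, 80), (2000, "large", 4.0, 20),
    (2015, "small", 12.0, 20), (2015, "large", 5.0, 80),
]

def overall_mean(year):
    rows = [r for r in samples if r[0] == year]
    weighted = sum(prod * n for _, _, prod, n in rows)
    return weighted / sum(n for _, _, _, n in rows)

print(overall_mean(2000))  # 8.8 -- overall average productivity in 2000
print(overall_mean(2015))  # 6.4 -- lower overall in 2015...
# ...even though each bin improved: small went 10 -> 12, large went 4 -> 5.
```

The overall mean falls from 8.8 to 6.4 even though both bins improved, purely because large projects dominate the later sample.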
To answer their question, I used our SLIM-Metrics tool to stratify the sample into four size bins:
Recently I attended a seminar on a commercial reporting and data sharing product. In the sales material and discussion, the phrase “Single Version of the Truth” was used several times. But what does it mean?
“In computerized business management, SVOT, or Single Version of the Truth, is a technical concept describing the data warehousing ideal of having either a single centralised database, or at least a distributed synchronised database, which stores all of an organisation's data in a consistent and non-redundant form.” - Wikipedia
The concept is attractive to decision makers who collect and analyze information from multiple departments or teams. Here's why:
Regardless of which measures (Effective SLOC, function points, objects, modules, etc.) your organization uses to quantify software size, code counters provide a fast and easy way to measure developed functionality. If your organization uses Effective (new and modified) SLOC, the output from an automated code counter can generally be used "as is". If you use more abstract size measures (function points or requirements, for example), code counts can be used to calculate gearing factors such as average SLOC/FP or SLOC/requirement.
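As a rough sketch of that calculation (the project names and counts below are hypothetical), a gearing factor can be derived by dividing total counted SLOC by total function points across a set of completed projects:

```python
# Sketch: deriving a gearing factor (average SLOC per function point) from
# code-counter output. Project names and figures are hypothetical.
projects = [
    {"name": "billing",   "effective_sloc": 52_000, "function_points": 480},
    {"name": "inventory", "effective_sloc": 31_000, "function_points": 305},
    {"name": "reporting", "effective_sloc": 18_500, "function_points": 170},
]

# A weighted average (total SLOC over total FP) is generally more stable than
# averaging the per-project ratios, since large projects carry more signal.
total_sloc = sum(p["effective_sloc"] for p in projects)
total_fp = sum(p["function_points"] for p in projects)
print(f"Gearing factor: {total_sloc / total_fp:.1f} SLOC/FP")  # ~106.3
```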
Our recent webinar, "Introduction to the High Performance Benchmark Consortium," was a great success, and we are already planning our next presentation. Joe Madden fielded a number of insightful questions about our new consulting program. We know your time is valuable and scheduling can be a challenge, so we have recorded a replay, including the Q&A, for anyone who was unable to attend.
I am pleased to announce that on Thursday, February 25 at 1:00 PM EST, QSM will host a webinar introducing our new High Performance Benchmark Consortium.
QSM has introduced a program specifically designed to help software development or acquisition organizations quantify and demonstrate performance improvement over time. The High Performance Benchmark Consortium is for clients who want to be best-in-class software producers and are willing to be active participants in the program. In today’s economic environment, it is more important than ever for both suppliers and acquirers to compete effectively and provide value to their customers. Members of the Consortium gain access to proprietary research that leverages the QSM historical benchmark database of over 8,000 validated software projects.
Presented by benchmarking expert and head of QSM Consulting, Joe Madden, this webinar will discuss:
QSM consultant Paul Below has posted some quick performance benchmarking tables for IT, engineering class, and real time software.
The tables contain average values for the following metrics at various size increments (a sketch showing how such binned averages can be computed appears after the list):
Effort (Person Months)
Average Staff (FTE)
Mean Time to Defect (Days)
SLOC / PM
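For anyone who wants to build a similar table from their own completed projects, here is a rough sketch. The record layout, values, and size-bin edges are all assumptions for illustration, not QSM's actual increments:

```python
# Sketch: binning completed projects by size and averaging benchmark metrics.
# The record layout, values, and bin edges below are assumptions.
from statistics import mean

projects = [
    # (effective_sloc, effort_person_months, avg_staff_fte, mttd_days)
    (8_000, 14.0, 3.1, 2.5),
    (12_000, 20.0, 4.0, 2.0),
    (45_000, 95.0, 9.4, 1.1),
    (120_000, 310.0, 22.0, 0.6),
]

def size_bin(sloc):
    if sloc < 10_000:
        return "under 10K"
    if sloc < 50_000:
        return "10K to 50K"
    return "50K and up"

bins = {}
for sloc, pm, fte, mttd in projects:
    bins.setdefault(size_bin(sloc), []).append((sloc, pm, fte, mttd))

for label, rows in bins.items():
    print(label,
          f"effort={mean(r[1] for r in rows):.1f} PM",
          f"staff={mean(r[2] for r in rows):.1f} FTE",
          f"MTTD={mean(r[3] for r in rows):.1f} days",
          f"SLOC/PM={mean(r[0] / r[1] for r in rows):.0f}")
```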
Two insights that jump out right away:
1. Application complexity is a big productivity driver. IT (Business) software solves relatively straightforward and well-understood problems. As algorithmic complexity increases, average duration, effort, and team size increase rapidly compared to IT systems of the same size.