QSM would like to thank our fantastic presenters and all who attended our virtual conference. More than just one-dimensional presentations, each session included an interactive Q&A with active participation from the audience. It was great to hear such a wide variety of presentations covering project management success in many different areas, including the return on investment from estimating agile projects, the application of the SLIM tools to government outsourcing programs, and the benefits of leveraging flexible sizing techniques and QSM estimation methods. If you were unable to attend last week, you can find replays of all of the presentations below. We encourage you to reach out with any additional questions or feedback.
QSM turns 40 this September! To celebrate this milestone, we will be hosting a free, all-day virtual conference on September 19th for current clients and those looking to learn more about software estimation best practices. The conference will feature presentations from IBM, Microsoft, Progressive Insurance, Rockwell Automation, and KPMG, as well as new developments in QSM research and tools. Below you will find a list of speakers and presentations.
"Our Evolution Around Estimation" - Lenny Fenster, Microsoft Services
"Strengthening Estimation Governance in a Large-Scale Organization with SLIM-Collaborate" - Christophe Guillou and Angelo Moore, IBM Global Services
"Using Project History to Produce Estimates" - Daniel Horvath, Progressive Insurance
"Big Rock Estimation with SLIM Estimate" - Aaron Jeutter, Rockwell Automation
"Five Core Metrics to Reduce Outsourced Software Project Failure" - Joseph Madden, KPMG
"Understanding the Physics of Software Development" - Larry Putnam, Jr., QSM
"The Evolution of SLIM-Suite Tools" - Kate Armel and Laura Zuber, QSM
In many agile environments, the budget, team size, and schedule are fixed based on an organization’s predetermined targets for sprints or iterations. This leads many project managers to question whether software estimation is even necessary. The problem is that without a reliable size estimate, the functionality promised within those time and cost constraints may be unachievable, leaving the delivered product short on features, late, or over budget.
This is where scope-level estimation tools come into play. They can help evaluate whether targets are reasonable and, even when the schedule and budget are set in stone, determine how much work can realistically be delivered. This type of analysis helps set customer expectations and provides data-driven leverage for negotiations.
The best estimation tools leverage empirically-based models, industry analytics, and historical data. They can even be used before iteration level planning takes place. They ensure that the overall goals are reasonable before detailed plans are developed.
In the three views below, we see an estimate generated from a “Time Boxed” method. This is where the product manager was able to input the predetermined time, a productivity measure (PI), and a team size, to see how many story points could be completed within the set constraints. The analysis also includes a “sanity check” of the estimate, comparing it to an agile industry trend from the QSM Industry Database and their own agile historical data.
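The "Time Boxed" method can be sketched in code. The function below is a simplified, illustrative form of the Putnam software equation (Size = Productivity × Effort^(1/3) × Time^(4/3)), with the skills factor folded into a single calibrated productivity parameter; it is not the exact SLIM implementation, and the parameter values are assumptions for demonstration.

```python
def time_boxed_size(schedule_months, team_size, productivity):
    """Estimate deliverable size (e.g., story points) when the schedule
    and team size are both fixed in advance.

    Simplified form of the Putnam software equation:
        Size = Productivity * Effort^(1/3) * Time^(4/3)
    `productivity` is a calibrated parameter (stand-in for a PI-derived
    value), not the PI index itself.
    """
    effort = schedule_months * team_size      # person-months at fixed staffing
    return productivity * effort ** (1 / 3) * schedule_months ** (4 / 3)

# With time and staff locked, scope is the only output that can vary:
points_5mo = time_boxed_size(5, 10, productivity=2.0)
points_6mo = time_boxed_size(6, 10, productivity=2.0)
```

Extending the schedule (or raising calibrated productivity) increases deliverable scope; shrinking either reduces it, which is exactly the trade-off the product manager explores against the constraints.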
The application of a contingency buffer, more commonly known as “padding” or “management reserve,” is the final step in any project estimation process. The most common practice is for the estimator to apply an intuitive multiplier to the base estimate. Unfortunately, every estimator has a different multiplier, shaped by personal bias about risk and kept in their head. This creates a fundamental problem with transparency and consistency within most organizations.
Fortunately, there's a better way. One solution is to define and configure agreed-upon standards matched to specific business-risk situations. These should be agreed to collaboratively by all stakeholders in the organization, then codified into a configuration that can be selected when contingency is applied to an estimate. This solves the consistency issue.
To attack the transparency issue, you can use an overlay technique to visualize the contingency alongside the base estimate, so the buffer is always visible rather than silently folded in.
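A codified contingency standard can be as simple as a shared lookup table. The tiers and multipliers below are hypothetical placeholders, not QSM-published values; the point is that the buffer is selected by named risk situation and reported separately from the base estimate, addressing both consistency and transparency.

```python
# Hypothetical agreed-upon contingency standards, one multiplier per
# named business-risk situation (values are illustrative only).
CONTINGENCY_TIERS = {
    "low_risk":    0.05,   # e.g., well-understood maintenance release
    "medium_risk": 0.15,   # e.g., new features for an existing product
    "high_risk":   0.30,   # e.g., fixed-price work in a new domain
}

def apply_contingency(base_effort, tier):
    """Return (base, buffer, total) so the contingency stays visible
    as an overlay instead of being hidden inside the estimate."""
    buffer = base_effort * CONTINGENCY_TIERS[tier]
    return base_effort, buffer, base_effort + buffer

base, buffer, total = apply_contingency(1000, "medium_risk")
```

Because every estimator selects from the same table, two estimates for the same risk situation carry the same reserve, and reviewers can always see how much of the total is buffer.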
No one starts a software project thinking that it is doomed to fail, but many projects end up falling far short of expectations. A recent PMI report shows that a significant number of companies are still underperforming expectations, failing to deliver software that functions as intended and drives positive business results. PMI’s report breaks project development teams into two distinct camps: “overachievers” and “underachievers,” where the former deliver projects on time and on budget, while the latter do not. In this article for Project Manager Today, Larry Putnam, Jr. identifies five traps that the "overachieving" organizations are successfully avoiding, and better strategies that can be used in their place.
During QSM’s 40 years in business we have often been asked to help with software projects that are out of control and riddled with unrealistic goals and soaring costs. Project managers often ask, "where is the light at the end of the tunnel?" In honor of Larry Putnam, Sr., who started QSM back in 1978, here are 10 tips for better project tracking and oversight.
For over 20 years I’ve been an advocate of using metrics for improving IT processes. Shortly into my career as a COBOL developer, I was introduced to Function Point Analysis; and ever since it’s been the most powerful tool in my toolkit. After all: size matters! Once I learned to quantify the amount of functionality delivered by a project or an application, I could zoom in on cost, effort, duration, productivity, and quality because I now had a normalization factor to perform comparisons (Cost per Function Point, Hours per Function Point, Defects per Function Point, etc.).
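The normalization the author describes is straightforward arithmetic: dividing cost, effort, and quality measures by functional size puts projects of different sizes on a comparable footing. A minimal sketch, with made-up project figures purely for illustration:

```python
def normalized_metrics(function_points, cost, hours, defects):
    """Size-normalized comparison metrics: using function points as the
    denominator lets projects of different sizes be compared directly."""
    return {
        "cost_per_fp":    cost / function_points,
        "hours_per_fp":   hours / function_points,
        "defects_per_fp": defects / function_points,
    }

# Two hypothetical projects of different sizes become comparable:
small = normalized_metrics(function_points=100, cost=50_000, hours=800, defects=5)
large = normalized_metrics(function_points=400, cost=260_000, hours=3_600, defects=28)
```

Here the larger project's higher absolute cost hides the fact that, per function point, it is also less efficient, which is exactly what the normalization reveals.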
Shortly after getting my Certified Function Point Specialist certification, I became obsessed with the different measures and metrics pertaining to software and IT. Soon I became a Certified Software Measurement Specialist, where I learned everything there was to know about how to measure everything there was to measure in software (or so I thought). It’s a pretty powerful feeling being able to help organizations baseline their current capabilities so they could determine if implementing the latest and greatest silver bullet was really going to give them the gains in productivity they had been striving for.
The Scaled Agile Framework (SAFe) is a methodology that applies Agile concepts to large, complex environments. QSM recently worked with an organization that had implemented SAFe to develop an estimation methodology specifically tailored to it. This article discusses how it was implemented.
Software estimation typically addresses three concerns: staffing, cost/effort, and schedule. In the SAFe environment, however, development is done in program increments (PI) that in this case were three months in duration with two-week sprints throughout. Staffing was set at a predetermined level and varied very little during the PI. Thus, the three variable elements that are normally estimated (staff, cost/effort, and schedule) had already been determined in advance. So, our job was done, right? Wrong! What remained to be determined was capacity: the amount to be accomplished in a single PI. And that was a very sore “pain point” for the organization.
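With staff, cost, and schedule fixed, the capacity question reduces to: how much can the teams deliver in one PI? The sketch below is a simple cadence-based capacity model under assumed inputs (three-month PI, two-week sprints, a historical velocity, and a focus factor); it is an illustration of the problem shape, not QSM's tailored methodology.

```python
def pi_capacity(pi_weeks=12, sprint_weeks=2,
                velocity_per_sprint=40, focus_factor=0.8):
    """Estimate story-point capacity for one program increment (PI).

    pi_weeks / sprint_weeks  -> number of sprints in the PI
    velocity_per_sprint      -> historical team velocity (assumed)
    focus_factor             -> discount for ceremonies, support work, etc.
    """
    sprints = pi_weeks // sprint_weeks
    return sprints * velocity_per_sprint * focus_factor

capacity = pi_capacity()   # 6 sprints x 40 points x 0.8
```

An estimate like this gives the organization a defensible ceiling for PI planning: commitments above the computed capacity are the "pain point" made visible in advance rather than discovered mid-increment.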
Why do projects fail? There are a multitude of reasons from lack of up-front planning to failing to make necessary adjustments as requirements change to overstaffing when the project is running late. Whatever the reason, there are steps you can take to avoid these common traps. In this article for Software Executive Magazine, Larry Putnam, Jr. explains how focusing on scope-based estimates, agile forecasting, and smaller teams will help your development team deliver products on time and according to budget.
I was recently reading an article by Moira Alexander titled “Why Planning Is the Most Critical Step in Project Management,” and I was struck by her observation that one of the primary reasons projects fail is that they commit to unrealistic expectations. In my 35 years of experience, I believe this is the number one reason projects fail. Yet setting realistic expectations is a competence that few organizations or product owners ever master.
Today there are good simulation tools that make it simple to establish realistic project boundaries. The results can be used effectively to communicate and negotiate expectations with clients.
For example, imagine that you are a product owner planning out your next release. Your team of 10 people has been working on a 5-month release cadence. A backlog refinement has shown that there are approximately 100 story points to be completed in this release. The project plan is shown in the figure below.
However, there are some uncertainties and we need to deal with them in a realistic way. Since the schedule and the team size are fixed, the only area that can give is the functionality. Simulations are a great way to quantify uncertainty. In our case, we are confident in our team’s productivity and labor cost, but we are somewhat more uncertain about the new capabilities in this release. It is easy to adjust the uncertainty settings and run a business simulation. The uncertainty slider bars are in the image below.
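The kind of business simulation described above can be sketched as a small Monte Carlo run. The distribution, its bounds, and the percentiles reported are assumptions chosen for illustration (a triangular scope multiplier between 70% and 110% of the 100-point backlog, mode 90%); they stand in for the tool's uncertainty sliders, not for QSM's actual model.

```python
import random

def simulate_scope(n_trials=10_000, base_points=100,
                   low=0.7, high=1.1, mode=0.9, seed=42):
    """Monte Carlo sketch: with schedule and team size fixed, only scope
    can give, so we sample a scope multiplier per trial and report the
    10th/50th/90th percentile of deliverable story points."""
    rng = random.Random(seed)
    outcomes = sorted(base_points * rng.triangular(low, high, mode)
                      for _ in range(n_trials))
    def pct(p):
        return outcomes[int(p * (n_trials - 1))]
    return pct(0.10), pct(0.50), pct(0.90)

p10, p50, p90 = simulate_scope()
```

Reading the result as "we are 90% confident of delivering at least p10 points" turns a vague risk discussion into a concrete negotiating position with the client: the backlog items beyond that floor are the ones to flag as stretch scope.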