- Capture some historical data on your projects and keep it simple. The more data, the better, but you can get a good start to your estimation program with just a few projects and a small amount of data from each of those projects. Focus on the core metrics: size, duration, reliability, productivity, and effort.
- Estimate at the release level before detailed planning takes place. This will enable you to tailor your detailed plan to goals that are reasonable. Many analysts spend hours laying out detailed plans for projects that end up over budget and late because they don’t figure out the big picture first.
- Use an empirically-based model that enables you to manage uncertainty. When making big decisions, it’s important to compare the 90%-confidence estimate with the 50%.
- Sanity-check your estimates with industry analytics. It’s always good to see typical cost and duration trends from projects that are similar to yours.
Keith Ciocco's blog
Today more than ever we have access to large amounts of information. You've probably heard the term "big data," which in essence is having access to large amounts of data and examining the trends in that data. But many executives want to know how they can leverage this information to solve business problems, like lowering IT costs. One way is to use the data to do a better job of estimating IT projects.
Better estimating helps avoid signing up to schedules and budgets that are unrealistic; it helps avoid overstaffing a project or a portfolio of projects; and it helps calculate how much work can be completed within project constraints. In addition, it improves communication internally across the enterprise and externally between the vendor and the client. You can apply estimation to in-house projects and you can use it to generate better proposals or to do a better job of evaluating proposals. It can also help you negotiate more effectively.
To do a better job of estimating, you need to make good decisions regarding which metrics to leverage. You might have thousands of data points, but it's important to streamline the focus to the core release level metrics: cost, duration, effort, reliability, and productivity. Next, you need to find a centralized place to organize and store the data so you can analyze it. There are tools out there that can help you. In the view below, you can see a portfolio of projects stored in a centralized place with the ability to manage the access and security.
It’s a story we hear a lot in the software business these days, especially with agile development. New functionality is needed within a certain amount of time and within a certain budget.
Some might say, "no problem! We can figure it out as we go along." They might feel comfortable because each sprint has already been set in stone. But there are business-related questions that need to be answered before sprint-level planning takes place and before we commit to goals that might not be achievable at the release and portfolio levels. Should we agree to do this project? Can we really get all of the work done within our constraints? Will the software be reliable at delivery? How does this project impact our annual and multi-year forecasts?
This is where having reliable big picture numbers can be helpful. Wouldn’t it be great if senior management and the technical team were on the same page early? There are empirically-based estimation tools that can help. The naysayers might say that the technical requirements aren’t firm enough to come up with early estimates before the sprint planning takes place. But the fact is that some of these models (the good ones) allow for managing uncertainty and they do it based on historical data. The slide below shows a summary example of a release-level estimate for cost, duration, and reliability.
The QSM team had a productive trip to the Gartner Symposium in Orlando. It's always helpful for us to discuss current IT trends and challenges with the people in our industry. Many of these themes came to light as we provided SLIM-Suite product demonstrations along with question and answer sessions at the QSM exhibit.
One of the big areas of interest at the conference was IT cost optimization, which is also one of QSM’s main areas of expertise. I hosted a presentation called “Cost Optimization Best Practices for IT Portfolio Budgeting.” The main focus of the presentation was to show how we can leverage empirically-based models and predictive analytics to balance enterprise demand with capacity and at the same time save big money in the IT budgeting process. The presentation was well-attended and a meet and greet session followed where our QSM team, consisting of Ethan Avery, Richard Pelaez, Greta Moen, and I, provided solution demonstrations and answered questions.
Another big focus of the conference was related to cloud solutions and how they will affect the internet of things and artificial intelligence. Our team featured our cloud solution, SLIM-Collaborate, which provides portfolio analytics and the ability to estimate the cost and risk of creating new software technologies. We provided examples of how we support all types of software & systems projects and explained the benefits of having a secure process for leveraging this information across the enterprise.
It’s that time of year again for many C-level executives: time to figure out the IT budget for next year. This process brings the business side of the organization to the table with the technical side to forecast how much IT is going to spend. It can be a complicated process, but there are ways to make it easier and more accurate, and there are ways to save a lot of time and money. The challenges often relate to short planning time frames, minimal information available to generate accurate forecasts, political agendas within the organization, and, unfortunately, only a small number of estimation methods in place. But there are tools and processes available to help face these challenges. Here are the basic steps that we recommend for cost optimization in the budgeting process.
Start by analyzing the historical data that is available. The process can be streamlined by focusing on the core metrics within the organization. This data can include release-level size, effort, staff, and duration information. Historical data showing typical spending by role, month by month, is also valuable to leverage. Ideally, this type of data should be captured on 8-15 projects.
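The historical record described above can stay very simple. As a minimal sketch (the field names and sample numbers here are illustrative, not a QSM schema):

```python
from dataclasses import dataclass

@dataclass
class ProjectRecord:
    """One completed release (illustrative fields, not a formal schema)."""
    name: str
    size: int             # e.g. user stories or function points delivered
    effort_pm: float      # total effort in person-months
    peak_staff: float     # peak full-time-equivalent staff
    duration_months: float

# A small historical sample -- ideally 8-15 completed projects.
history = [
    ProjectRecord("Billing 2.1", size=220, effort_pm=48.0, peak_staff=6.0,
                  duration_months=9.0),
    ProjectRecord("Portal 1.0", size=350, effort_pm=90.0, peak_staff=10.0,
                  duration_months=12.0),
]

# Simple derived metric: average output per person-month across the sample.
avg_output = sum(p.size / p.effort_pm for p in history) / len(history)
```

Even a handful of records like these is enough to start sanity-checking new estimates against your own past performance.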
The next step is to pull together scope level sizing data on projects that are being considered for the new year. This information can include epics, themes, user stories, business requirements, or use cases, to name a few. The goal here is to get as close as possible to determining how much work needs to be done on each release in the pipeline. Once there is a large enough sample of data, then release level estimates can be created for the coming year. There are tools available to help streamline this process and the best ones allow for risk mitigation and sanity checking with historical data.
With agile projects, we hear a lot about the planning benefits of having a fixed number of people with a fixed number of sprints. All great stuff when it comes to finishing on time and within budget. But one of the things we also need to focus on is the quality of the software. We often hear stories about functionality getting put on hold because of reliability goals not being met.
There are some agile estimation models available to help with this, and they can provide this information at the release level, before the project starts or during those early sprints. They provide this information by leveraging historical data along with time-tested forecasting models that are built to support agile projects.
In the first view, you can see the estimate for the number of defects remaining. This is a big picture view of the overall release. Product managers and anyone concerned with client satisfaction can use these models to predict when the software will be reliable enough for delivery to the customer.
In the second view, you can see the total MTTD (Mean Time to Defect) and the MTTD by severity level. The MTTD is the average amount of time that elapses between discovered defects. Each chart shows the months progressing on the horizontal axis and the MTTD (in days) improving over time on the vertical axis.
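The MTTD calculation itself is straightforward. A minimal sketch, using made-up defect discovery dates:

```python
from datetime import date

# Discovery dates of consecutive defects (illustrative data, one severity level).
found = [date(2024, 3, 1), date(2024, 3, 4), date(2024, 3, 10), date(2024, 3, 22)]

# MTTD = mean elapsed time between consecutive discoveries, in days.
gaps = [(later - earlier).days for earlier, later in zip(found, found[1:])]
mttd_days = sum(gaps) / len(gaps)
```

A rising MTTD over successive months (gaps of 3, then 6, then 12 days here) is the signal that reliability is improving as the release stabilizes.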
Organizations often come to us in the early stages of shopping for a software estimation tool, and oftentimes we find that they could be asking some additional questions. They often focus on the tool’s operating system, database structure, and architecture, when they could also be focusing on the quality of the data behind the tool. They also ask a lot of questions about inputting detailed information when really it would be in their best interest to focus on solid project-level information since detail-level inputs are often not available early in the planning lifecycle. Instead of focusing on the number of hours allotted to each individual person, it would be more beneficial to focus on how much work the overall team needs to finish.
In our 30+ years of experience in this industry, we've found that, no matter what tool an organization ultimately chooses, they need to be asking the right questions. Here are the criteria they should consider.
As with any tool, it is important to match the tool with the job at hand. Using a screwdriver to perform the task of a chisel will yield poor results. The same is true with trying to use a detailed planning tool in place of a software estimation tool. Make sure that you consider at what point in time formal estimates are required and how the resulting information is used to support negotiation and business decision making. Here are the main issues that should be taken into consideration when assessing an estimation tool.
Often within technology organizations there is a general belief that increasing staff increases the amount of production. But what if there were better options? Wouldn’t it be great to see some additional management options using predictive analytics? This type of analysis could save organizations millions of dollars by showing how to hit their goals by just planning more effectively.
Where do you start? First, we recommend centralizing your project data so your information can be easily accessed. These projects can be completed, in-progress, or getting ready to start. The best way to do this is with a tool that lets you store the data and that also lets you generate the forecasts, all in one convenient place.
The next step would be to run built-in forecasting models to see if you can complete the required amount of work with the existing number of resources. These models also provide other options to consider, like adjusting the number of resources on a software release or extending a project schedule to save money. The best models are empirically-based and time-tested. To generate the analysis, you need to enter some basic project level goals and the models then leverage historical data to forecast a reliable duration, effort, cost, and staffing assessment for each release.
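To see the shape of this kind of forecasting, here is a deliberately simplified sketch: fitting a power-law effort-versus-size trend to historical projects, a common empirical form in software cost models (this is an illustration, not the SLIM model itself, and the numbers are made up):

```python
import math

# Historical (size, effort in person-months) pairs -- illustrative numbers.
history = [(100, 20.0), (200, 45.0), (400, 95.0), (800, 210.0)]

# Fit effort = a * size^b by least squares in log-log space.
xs = [math.log(size) for size, _ in history]
ys = [math.log(effort) for _, effort in history]
n = len(history)
b = (n * sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys)) / \
    (n * sum(x * x for x in xs) - sum(xs) ** 2)
a = math.exp((sum(ys) - b * sum(xs)) / n)

def forecast_effort(size):
    """Forecast effort (person-months) for a planned release of a given size."""
    return a * size ** b
```

Real estimation models layer uncertainty management and schedule/staffing trade-offs on top of a calibrated trend like this, but the core idea is the same: let your own history drive the forecast.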
What if you could leverage summary level cost, duration, and productivity data to support estimates for future projects, at the release and enterprise level? C-level executives, development managers, and project stakeholders are all involved at some level in project planning. They want quick access to information on a regular basis and they want web-based solutions to make it happen. So how does it all work? There are web-based analytics tools that allow you to create a centralized database for all of your projects. These tools store the data, leverage it to generate project and portfolio estimates, and then provide a communication vehicle throughout the organization to ensure that everyone involved is on the same page. It all starts with having the data in one place.
Once you have all of your project data in one place, then you can focus on analyzing the completed projects. You can compare them against industry trends and leverage a 5-star report to show how they rate on performance in the industry. The initial measures to focus on would be size, duration, effort, reliability, and productivity. A project's productivity will be calculated automatically once you have entered the size, duration and effort. We call this measure a Productivity Index. This measure can be compared to industry and used as a benchmark to measure process improvements over time. These numbers give you a quantitative picture of your current project environment.
I am a big sports fan and since I work for a software metrics company, I started thinking about the similarities in productivity measurement in both industries. From draft picks to game planning decisions, managers in sports measure their team’s productivity to help them make better decisions in the future. Software executives and product owners do the same thing; they measure productivity in order to make better planning decisions regarding upcoming projects.
To measure productivity in baseball, we look at measures like batting average, on-base percentage, slugging percentage, and earned run average. When measuring the productivity of agile software projects, we often look at the velocity, which takes into account the number of user stories completed in each sprint. This type of historical data helps us plan effectively at the detailed level.
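A minimal sketch of a velocity-based forecast, with hypothetical sprint counts:

```python
import math

# Stories finished in each past sprint (illustrative data).
completed = [8, 10, 9, 11, 10]

velocity = sum(completed) / len(completed)   # mean stories per sprint
backlog = 120                                # stories remaining in the release
sprints_needed = math.ceil(backlog / velocity)
```

This is exactly the detailed-level planning the paragraph describes; the release- and portfolio-level questions discussed elsewhere in this post need a coarser, history-calibrated measure on top of it.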
As part of our work at QSM, we are often asked to provide plans at the release and portfolio level. To provide these plans reliably, we use a macro-level, empirically-based productivity measure that encapsulates a number of project-related factors. This measure is called the Productivity Index, an integral part of the Putnam Model. Also known as the SLIM Model, the Putnam Model was invented by Larry Putnam Sr. almost 40 years ago, and it continues to shape software measurement today.
Once we know the total number of user stories (or any size measure), the release level effort, and the duration, we can calculate the Productivity Index of a project. The Productivity Index also takes into account the project environment including: the experience level of the team, the complexity of the software, and the quality of the tools and methods being used on the project.
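The shape of this calculation can be sketched from the published form of the Putnam software equation, Size = C x Effort^(1/3) x Duration^(4/3), with effort in person-years and duration in years. The sketch below solves for the productivity parameter C; the discrete Productivity Index is then derived from C via QSM's calibration table, which is not reproduced here, and the input numbers are purely illustrative:

```python
def productivity_parameter(size, effort_py, duration_years):
    """Solve the Putnam software equation for the productivity parameter C:

        size = C * effort^(1/3) * duration^(4/3)

    Effort is in person-years, duration in years. The SLIM Productivity
    Index is a discrete index mapped from C (mapping not reproduced here).
    """
    return size / (effort_py ** (1 / 3) * duration_years ** (4 / 3))

# Illustrative release: 50,000 lines of code, 12 person-years, 1.5 years.
c = productivity_parameter(50_000, 12.0, 1.5)
```

Note the behavior this form captures: delivering the same size in less time, with the same effort, implies a higher productivity parameter, which is why the index is useful for comparing teams and tracking process improvement.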