Frequently Asked Questions About Software Sizing
Posted by Michael Mah on Tue, 2011-11-15 16:01
Software is everywhere in modern life - in automobiles, airplanes, utilities, and banks, and in complex systems and global communications networks. Applications run the gamut from tiny applets comprising just a handful of instructions to giant systems running millions of lines of code that take years to build.
Software professionals are at the front lines of this information revolution. This post addresses frequently asked questions about measuring the size of software applications after they're finished and estimating the work for projects that have yet to be started. We hope it will help software professionals do a better job of describing what they build, as software continues to grow in strategic importance to our companies and our daily lives.
Question: What do we mean by the term “Software Size”?
Answer: For starters, think of T-shirts – Small, Medium, Large, or Extra-Large, or houses that can range from a small summer cottage all the way up to a 20,000 sq ft Hollywood mansion on a sprawling estate.
So it goes with software. You can have a small program with a few cool features, a huge, complex computerized trading system for the New York Stock Exchange composed of millions of lines of code, or anything in between.
Question: I have a large project and its size is 20 people. Is that what you mean?
Answer: Not quite. That’s actually the number of people on your team, or the number of staff resources on the project. It’s not the amount of functionality, or the volume of software created by a team of that size.
Question: Ok, so you’re saying that a program’s feature set - whether a small one or a long list - is what you mean by the size of the software. Do you also mean lines of code?
Answer: That’s another way to look at it. Generally speaking, completing a project that satisfies hundreds of requirements or feature requests takes a lot more working code than building a tiny applet with five simple features. We talk about things like “Units of Need,” which describe the software capabilities that people might request. These can be things like features and requirements/functionality. An intermediate vocabulary that starts to translate these into the technical world includes terms like technical requirements, Function Points, or, in the Agile realm, Stories and Story Points.
We think of “Units of Work” to describe what developers produce in the software realm - the number of programs, objects, subroutines, and ultimately, working software code - to satisfy the “Units of Need” that customers ask for. A team of system architects, designers, programmers, and testers ultimately creates working software, in a given programming language, to produce this functionality. Computers run on software code - not feature lists. This working code is what programmers - with their artful designs and technical prowess - design, code, and test.
Ideally, we want simple designs that are clean and elegant. Simple designs, where possible, are often produced faster with less effort. They also tend to be more reliable and easier to maintain. The converse is a sloppy design with lots of “spaghetti code” that’s buggy, requires more rework and longer testing. This often takes more time and costs more, in the long run.
Question: If what you’re saying is true, wouldn’t you expect a better design to take less code to produce a feature - or, as they say in Agile terminology, a User Story?
Answer: Exactly! Less code takes less time, requires fewer effort hours to build and test, and tends to be more reliable. That means you don’t have to test as much, and you can finish sooner - hence you’re more productive.
Also, understanding these relationships - how much code it takes to produce a Function Point, a feature, or a User Story - can be very valuable. It’s like a currency conversion from dollars to euros, or changing units from miles to kilometers. If you have a good handle on this conversion - what we at QSM call “Gearing Factors” - you can move from one realm to another fairly easily. Early in a project, if you think you have to build 40 to 50 features, you can come up with an assessment of the amount of software that might be required.
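The currency-conversion idea can be sketched in a few lines of code. The gearing factors below are invented for illustration only - real values vary widely by language, domain, and team, and should come from your organization's own historical data:

```python
# Hypothetical gearing factors: average lines of code per "Unit of Need".
# These numbers are illustrative assumptions, NOT published QSM figures;
# calibrate against your own completed projects before relying on them.
GEARING_FACTORS = {
    "function_point": 55,   # assumed average SLOC per Function Point
    "user_story": 300,      # assumed average SLOC per completed User Story
}

def estimate_sloc(count, unit):
    """Convert a count of Units of Need into an estimated code size (SLOC)."""
    return count * GEARING_FACTORS[unit]

# Early estimate for a backlog believed to hold 40 to 50 user stories:
low = estimate_sloc(40, "user_story")
high = estimate_sloc(50, "user_story")
print(f"Estimated size: {low:,} to {high:,} lines of code")
```

Expressing the estimate as a range, as above, reflects the uncertainty that is always present this early in a project - the gearing factor itself carries at least as much uncertainty as the feature count.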
Question: Should we count using Function Points? I heard that this is an industry standard.
Answer: It depends on whether this metric is an appropriate fit for what you do. In the late 1970s, Allan Albrecht at IBM introduced Function Points to describe the systems of that era, which mostly performed CRUD operations - Create, Read, Update, Delete - against an underlying database. That’s what most IBM mainframe batch processes did.
If that’s the fundamental architecture for what you build today, then describing Units of Need in that vocabulary can work. Ultimately, delivering a certain number of Function Points still requires a certain amount of working code. (Counting Function Points is a laborious manual process; code counts can be automated with a tool.)
However, you might build software that flies airplanes, runs under a distributed architecture with wireless capabilities and online error checking and diagnostics, or something else in the modern world that is a long way from a CRUD architecture on a mainframe. In that case, Function Points might not be an easy fit. They require manual labor, as well as a specification that’s well documented and not outdated - and many organizations don’t have that.
As we move more toward Agile methods, many teams prefer to describe features as User Stories, with each rated on a complexity scale such as Story Points. These, too, are ultimately delivered through working code. If you find yourself in this world, that might be a better fit.
Read more of QSM's FAQs.
As managing partner at QSM Associates Inc. based in Massachusetts, Michael Mah teaches, writes, and consults to technology companies on measuring, estimating and managing software projects, whether in-house, offshore, waterfall, or agile. He is a frequent conference keynote speaker and is the director of the Benchmarking Practice at the Cutter Consortium, a Boston-based IT think-tank, and served as past editor of the IT Metrics Strategies publication.
You can read more of Michael's work at his blog, Optimal Friction.