Team Size

Part II: Small Teams Deliver Lower Cost, Higher Quality

This is the second post in a three-part investigation of how team size affects project performance, cost, quality, and productivity. Part one looked at cost and schedule performance for Best in Class and Worst in Class IT projects. For that study, Best in Class projects were those that delivered more than one standard deviation faster while using more than one standard deviation less effort than the industry average for projects of the same size. A key characteristic of these top-performing projects was the use of small teams: the median team size for Best in Class projects was 4 full-time equivalent (FTE) staff versus 17 FTEs for the worst performers.
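To make that selection criterion concrete, here is a minimal sketch of how such a classification might be computed. This is an illustration only, not QSM's actual procedure: the column names (size_bin, duration_months, effort_hours), the use of a pre-computed size grouping, and the symmetric "worst in class" rule are all assumptions.

```python
import pandas as pd

def classify_projects(df: pd.DataFrame) -> pd.DataFrame:
    """Label projects that beat (or miss) the average for their size group
    by more than one standard deviation on both schedule and effort.

    Assumes hypothetical columns: 'size_bin' (a pre-computed grouping of
    projects of similar size), 'duration_months', and 'effort_hours'.
    """
    # Mean and standard deviation of schedule and effort within each size bin
    stats = df.groupby("size_bin")[["duration_months", "effort_hours"]].agg(["mean", "std"])

    def label(row):
        s = stats.loc[row["size_bin"]]
        faster = row["duration_months"] < s[("duration_months", "mean")] - s[("duration_months", "std")]
        leaner = row["effort_hours"] < s[("effort_hours", "mean")] - s[("effort_hours", "std")]
        slower = row["duration_months"] > s[("duration_months", "mean")] + s[("duration_months", "std")]
        heavier = row["effort_hours"] > s[("effort_hours", "mean")] + s[("effort_hours", "std")]
        if faster and leaner:
            return "best in class"
        if slower and heavier:
            return "worst in class"
        return "typical"

    out = df.copy()
    out["class"] = out.apply(label, axis=1)
    return out
```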

What is the relationship between team size and management metrics like cost and defects? To find out, I recently looked at 1,060 medium- and high-confidence IT projects completed between 2005 and 2011. These projects were drawn from the QSM database of over 10,000 completed software projects. The projects were divided into two staffing bins:

  • Small team projects (4 or fewer FTE staff)
  • Large team projects (5 or more FTE staff)

Figure: Average Staff vs. System Size

These size bins bracket the median team size of 4.6 FTEs for the overall sample, producing roughly equal groups of projects that cover the same size range. By comparison, our best/worst in class study found a 4 to 1 team size ratio between the best and worst performers.
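For readers who want to reproduce this kind of split on their own data, here is a minimal sketch of the two staffing bins, assuming a hypothetical project table with an avg_staff column holding average FTE staff per project:

```python
import pandas as pd

def split_by_team_size(projects: pd.DataFrame):
    """Split a project table into the two staffing bins described above.

    Assumes a hypothetical 'avg_staff' column (average FTE staff per project).
    """
    small = projects[projects["avg_staff"] <= 4]   # small team: 4 or fewer FTE staff
    large = projects[projects["avg_staff"] >= 5]   # large team: 5 or more FTE staff
    # Fractional staffing values between 4 and 5 would need an explicit rounding rule.
    return small, large
```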


Top Performing Projects Use Small Teams

Last week, Carl Erickson of Atomic Spin referenced a study performed by Doug Putnam several years ago:

A study done by consultancy QSM in 2005 seems to indicate that smaller teams are more efficient than larger teams. Not just a little more efficient, but dramatically more efficient. QSM maintains a database of 4000+ projects. For this study they looked at 564 information systems projects done since 2002. (The author of the study claims their data for real-time embedded systems projects showed similar results.) They divided the data into “small” teams (less than 5 people) and “large” teams (greater than 20 people).

To complete projects of 100,000 equivalent source lines of code (a measure of the size of the project) they found the large teams took 8.92 months, and the small teams took 9.12 months. In other words, the large teams just barely (by a week or so) beat the small teams in finishing the project!

Since then, QSM has performed several studies investigating the relationship between team size and metrics like project scope, productivity, effort/cost, and reliability. The results have been surprisingly consistent regardless of application domain, technology, or year group.  I’ll be reviewing what we found in a series of posts.


Technology Can Only Do So Much

It’s hard to believe it’s been 36 years since an IBM manager named Fred Brooks published his seminal insights about software development, the most famous of which ("adding manpower to a late software project makes it later") came to be known as Brooks’ Law. These days, most software professionals accept and appreciate Brooks’ analysis, yet we continue to make the very mistakes that prompted him to write The Mythical Man-Month!

Which leads to an interesting question: armed with such a clear and compelling argument against piling on staff at the last minute, why do we repeatedly employ a strategy that not only fails to achieve the hoped-for schedule reductions but often results in buggy, unreliable software?

The most likely answer combines schedule pressure with the human tendency toward over-optimism. Basing plans on hope rather than experience is encouraged by a constant parade of new tools and methods. Faced with the pressure to win business, please customers, and maintain market share, is it really surprising that new technologies tempt us to discount the past and hope that, if we use this tool, this team, this methodology, this project will be different?

How can software developers counter the human tendency to fall for overly optimistic estimates and unachievable schedules?

What's needed is perspective: the kind that comes from honestly examining, and reminding ourselves of, how things have worked in the past. In a paper called "Technology Can Only Do So Much," I look at the human and technological factors that trip up so many software projects. Good historical data provides a sound empirical baseline against which both conventional wisdom and future plans can be assessed.


Part IV: Duration, Team Size, and Productivity

For many projects, duration is just as important a constraint as cost. In this installment we will tackle the question: How do changes to team size affect project duration and the resulting productivity? Once again we will use our database of business applications completed since January 2000.


"Managing Productivity with Proper Staffing" Webinar Replay Available

Just before the holidays, we hosted our first in-house webinar, "Managing Productivity with Proper Staffing Strategies." Confronted with the challenges of the current economy, more and more systems development groups are trying to do more with less. The ultimate goal is to maximize productivity and minimize defects, but many teams struggle to get there. It is possible, though the most effective methods for achieving maximum efficiency are counter-intuitive: people tend to assume that more effort will produce more product, when in fact less effort is often more effective. Presented by industry expert John Bryant, this webinar explains and demonstrates how to maximize productivity while minimizing cost and defects.

John Bryant has over forty years of IT experience. He has spent the last several years using the SLIM Suite of Tools to improve the software development process by properly estimating, tracking, and analyzing development efforts. His expertise spans project management, teaching, and mentoring, from initial project evaluation and planning through construction.

In case you missed it, you can view the replay of this webinar here.


An Empirical Examination of Brooks' Law

Building on some interesting research performed by QSM's Don Beckett, I take a look at how Brooks' Law stacks up against a sample of large projects from our database:

Does adding staff to a late project only make it later? It's hard to tell. Large team projects, on the whole, did not take notably longer than average. For small projects the strategy had some benefit, keeping deliveries at or below the industry average, but this advantage disappeared at the 100,000 line of code mark. At best, aggressive staffing may keep a project's schedule within the normal range of variability.

Contrary to Brooks' Law, for large projects the more dramatic impacts of bulking up on staff showed up in quality and cost. Software systems developed using large teams had more defects than average, which would adversely affect customer satisfaction and, perhaps, repeat business. The cost was anywhere from 3 times greater than average for a 50,000 line of code system to almost 8 times greater for a 1 million line of code system. Overall, mega-staffing a project is a strategy with few tangible benefits that should be avoided unless you have a gun pointed at your head. One suspects some of these projects found themselves in that situation: between a rock and a hard place.

How do managers avoid these types of scenarios? Software development remains a tricky blend of people and technical skills, but having solid data at your fingertips and challenging the conventional wisdom wisely can help you avoid costly mistakes.

Read the full post here.
