An American Programmer Electronic Reprint

This article originally appeared in Vol. 10, no. 11 of AMERICAN PROGRAMMER. Copyright
©1998 by Cutter Information Corp. All rights reserved.


"SOFTWARE BY THE NUMBERS: AN AERIAL VIEW OF THE SOFTWARE METRICS LANDSCAPE"

by Michael C. Mah and Lawrence H. Putnam, Sr.
Foreword by Ed Yourdon

FOREWORD

With all of the new technologies and buzzwords in the computer field, one might wonder why American Programmer returns to fundamental topics -- peopleware, project management, and the like -- so often. One reason is that our industry still isn't doing a very good job at the fundamentals, and another reason is that there always seems to be something new to say about them. Such is the case with software metrics: it's been nearly two years since we last covered the topic, and there are indeed some interesting new things to say.

But Michael Mah and Larry Putnam (both from Quantitative Software Management) remind us that we tend to get distracted by the gadgets and buzzwords in the metrics field, just as we do in other aspects of computing. As they point out, "A key point to consider on metrics is this: rather than just musing on what 'new metric' might apply in the ever-changing world of software, we should also be asking ourselves the more basic question, 'What will we do with metrics?' . . . . The problem oftentimes is that organizations don't have the basics down before they pile on measure after measure." Mah and Putnam provide an excellent summary of the basics; for organizations still in a state of "metrics paralysis," their article is an excellent place to start.

-- Ed Yourdon


 

You are erratic, conflicted, disorganized . . . You lack harmony, cohesion, greatness. It will be your undoing.

-- Alien "Borg" character, addressing humans,
on the television series Star Trek: Voyager

Another playful take on this scenario might be that the "Borg" really was delivering a process assessment using the Software Engineering Institute's (SEI) Capability Maturity Model (CMM) and found the organization at Level 1, otherwise known as "chaos."

All joking aside, no debate on the chaos and overruns of the software industry is complete without discussion of "the M word," metrics. Whatever the outcome, a key point to consider on metrics is this: rather than just musing on what "new metric" might apply in the ever-changing world of software, we should also be asking ourselves the more basic question, "What will we do with metrics?" Then we can ask if the goals at hand require any newfangled measures. The problem oftentimes is that organizations don't have the basics down before they pile on measure after measure.

Based on the answer to the "What will we do?" question, a clearer purpose might arise, giving insight into where energies ought to be channeled. One thing not to do is to measure everything in sight, hoping to figure out what to do with the results later.

TURNING ON YOUR LIGHTS: GUIDANCE SYSTEMS THAT WORK

We manage things "by the numbers" in many aspects of our lives -- stock market indexes, J.D. Power quality ratings for automobiles, Apgar scores for newborns. These numbers give us insight and help steer our actions. Software metrics extend the concept of "managing by the numbers" into the realm of application development. That is their purpose.

Successful organizations have found three objectives extremely valuable. These are:

  • Knowing the capability, or productivity, of your organization. The (overused) term "benchmarking" is often applied to this.
  • Making credible commitments in terms of what will be delivered, when it will be delivered, how much work effort will be needed, and how good it will be when the system is placed into service. This involves project estimation. Sometimes the "estimation" is how much functionality a team can reasonably commit to build within a deadline that's been mandated by external events (yes, estimation for "the real world").
  • Managing development once it starts. This means making sure that the commitments are on track to being accomplished. This involves project management, but more than simple PERT and Gantt charts. It also requires effective scope or size management and defect management.

Without accomplishing at least these objectives (and there are more than just these three), a "metrics program" might lose perspective.

[Cartoon reprinted by permission of United Feature Syndicate.]

FASTEN YOUR SEATBELT, WE'RE ALREADY BEHIND SCHEDULE

Let's begin with the real world, complete with all its incredible deadline pressure. The kind of world where an IBM executive was recently quoted as saying -- facetiously, we hope -- "We either have to make our deadlines, or kill our kids." Ugh!

We begin here because many organizations struggling with process chaos claim that they don't have time for software measures, which are sometimes perceived as a bureaucratic Dilbert exercise. But getting metrics in the face of deadlines, and using them to manage that pressure and reduce the chaos, is key to successful software development. Chaos with no measures to tell the organization what's going on is why studies still show 3 in 10 projects being canceled, 5 in 10 overrunning schedule and/or budget by nearly double, and only 1.6 in 10 making their deadlines and budgets [5]. Like it or not, the "death march" projects Yourdon has described are sometimes more the rule than the exception, and sadly, the alien Borg might have a point. Our chaos can be our undoing.

We're also seeing that it doesn't have to be that way.

The hope is that interventions of some sort can break this cycle. For that to happen, facts have to be known, communicated, and leveraged. That is where the "mirrors" known as software metrics fit in. Anything that doesn't help manage the pressure but might add to it (if seen as a time waster) would be counterproductive.

WHAT METRICS TO FLY BY? START WITH "THE MINIMUM DATA SET"

Any discussion of metrics has to start with a foundation. Over the years, a consensus has arisen to describe at least a core set of four metrics. These are: size, time, effort, and defects. The SEI has issued a useful publication that discusses the background to the core measures and offers recommendations for their use [2]. There are several additional SEI documents available that go into further depth on the measures individually. And lastly, prior to the SEI, some of the first writings on the core set can be found in [8].

This "minimum data set" ties management's bottom-line concerns together in one cohesive relationship. As project teams, we spend a certain amount of time (months), expending a certain amount of work effort (person-months). At the end of our hard work, the system is ready to be deployed (we hope). It represents a certain amount of functionality (size) at a certain level of quality (defects). Anyone embarking on a measurement program should start with at least these four core measures as a foundation.
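
To make the four measures concrete, here is a minimal sketch (in Python) of what a per-project record of this minimum data set might look like. The field names, units, and sample values are our own illustration, not a published standard; an organization would adapt them to its own vocabulary.

    # A minimal, illustrative record for the "minimum data set" of core measures.
    # Field names, units, and sample values are assumptions, not a standard schema.
    from dataclasses import dataclass

    @dataclass
    class ProjectRecord:
        name: str
        size: float                  # countable size units (e.g., lines of code, function points)
        size_unit: str               # which abstraction level the size was counted in
        duration_months: float       # elapsed calendar time from start to delivery
        effort_person_months: float  # total work effort expended
        defects: int                 # defects logged from system test through delivery

    history = [
        ProjectRecord("billing-rewrite", 42000, "SLOC", 14.0, 96.0, 310),
        ProjectRecord("claims-intake", 260, "function points", 9.0, 41.0, 85),
    ]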

Why these four? Well, oftentimes, projects are managed by just two metrics -- project milestones and effort (proportional to cost). This has been described as akin to "flying the plane using only a watch and a fuel gauge." Size and defects must be in the equation. They represent what has been (or will be) built and the quality of the end result. In the end, that is what has been created.

A good manager should be keeping these types of records. For projects that have been completed, size represents what has been built, as countable entities. Knowing what's been accomplished, at what speed, at what cost, and at what level of quality can tell us how well we did. Think "benchmarking," or, "knowing your capability."

In addition, it would be beneficial to add perhaps one or two more metrics for past projects. One is the amount of rework. Another is the degree of software reuse [1].

For projects that have yet to be built, the sizing issue becomes one of estimation. The best way of approximating what needs to be built is to have records about units you've built before, in order to help you scope the new job. Size estimation is a critical discipline. It represents a team's commitment as to what it will build. As Ed Yourdon once said:

Studies by the SEI indicate that the most common failing of Level 1 (ad hoc) software organizations is an inability to make size estimates accurately. If you underestimate the size of your next project, common sense says that it doesn't matter which methodology you use, what tools you buy, or even what programmers you assign to the job.

"NEW" JOURNEYS: SIZE METRICS FOR OO, CLIENT-SERVER, INTERNET APPS, AND OTHER DOMAINS

Whenever software developers begin a project that involves a new technology (OO, client-server, etc.), there is great confusion as to how and what they should measure and what the appropriate "size" unit might be. The software development community has pondered questions like these since what seems to be the beginning of time. You name the technology, the language, the era. These questions often come down to size. Time we understand. Effort we understand. Defects we understand (yet barely anyone tracks them or keeps good records!). That last measure -- size -- is often where all these questions lead.

For object-oriented development, useful measures of size have been shown to be units such as number of methods, objects, or classes. Common ranges seem to be about 175 to 250 lines of code (C++, Smalltalk, etc.) per object. Lines of code, function points, classes, objects, methods, processes, programs, Java scripts, and frames all represent various abstractions of system size.

Leveraging these size units means taking an inventory of past projects in order to better understand the building blocks of past systems. This also establishes the vocabulary of the organization in terms of the functionality that has been, and needs to be, built. It is the basis of negotiation when trying to decide what a team agrees to take on, within a given deadline.

Have Function Points Lived Up to Their Promises?

Of all the proponents of different size metrics, OO or otherwise, the "priests" of function points have been the most insistent in promoting that measure as the most important offering for all the altars of the world. So, have function points lived up to their promises? It depends on whom you ask.

In many organizations, function points seem to have served a useful purpose. Organizations are finally getting people to think about project size. It used to be that you'd ask the question, "How big is this application?" and someone might answer, "150 man-months," answering your question on size with a number for the effort (related to cost) spent building it.

That's like someone asking you, "How big is your house?" and you answer, "$250,000." You should have said something like, "It's a four-bedroom colonial with 2 1/2 baths, living room, dining room, family room, and den (10 rooms), for a total size of 2,400 square feet." High abstraction unit: rooms; low abstraction unit: square feet.

Size is a metric describing the bigness or smallness of the system. It can be broken into chunks of various descriptions. Function points can do this in certain applications. As previously mentioned, other units include programs, objects, classes, frames, modules, processes, computer software configuration items (CSCIs in the world of DoD), subsystems, and others. All represent the building blocks of the product from different levels of abstraction, or perspectives. They all ultimately translate to the amount of code that becomes compiled and built to run on a computer or embedded processor.

They translate down to a common unit just as the volume of fluid in a vessel might be described in terms of liters, gallons, quarts, pints, and ultimately, down to a common unit of either fluid ounces or milliliters. The key point to understand, though, is that all these units relate to each other in a proportional, scaling relationship (i.e., 32 fluid ounces per quart, four quarts per gallon).
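
As a hypothetical illustration of that proportional scaling, the sketch below converts counts of higher-abstraction building blocks into rough lines of code. Only the lines-per-object figure comes from the range quoted earlier; the other gearing factors are placeholders that an organization would calibrate from its own completed projects.

    # Hypothetical "gearing factors" relating higher-abstraction size units to
    # lines of code.  Only the lines-per-object figure reflects the 175-250
    # range cited earlier; the rest are placeholders to calibrate locally.
    LOC_PER_UNIT = {
        "object (C++/Smalltalk)": 212,   # midpoint of the 175-250 range
        "function point": 100,           # assumed; calibrate from your own data
        "module": 400,                   # assumed; calibrate from your own data
    }

    def estimated_loc(count, unit):
        """Translate a count of higher-level building blocks into rough LOC."""
        return count * LOC_PER_UNIT[unit]

    print(estimated_loc(300, "object (C++/Smalltalk)"))   # roughly 63,600 LOC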

So when questions like, "What new metrics should we use for . . . ?" arise, gravitate to what would make sense in the organization's vocabulary, if it comes down to project size. Remember that the objective is to communicate information about the system and maximize the flow of information by making sure everyone speaks a well-understood and familiar language. Have language fit the organization, not the other way around.

Challenges in the Function Point World

For many DP/MIS organizations, the underlying concept of function points is a decent fit. That is, the metamodel of a system comprising two parts, a database structure and the functions that access that structure, correctly describes what they build. In this world, the latter functions comprise Create, Read, Update, Delete (CRUD).

However, as Simon Moser and Oscar Nierstrasz rightly observe, "You have a problem if your system does anything other than CRUD" [7]. That's not necessarily saying anything bad about the function point, other than that it simply is not a size metric for all systems. Engineering applications, factory automation, process control, and real-time systems are not IBM back office-like mainframe systems that smoothly fit into the five function-providing elements.

Attempts to massage function points into things like feature points, data points, task points, and other points have therefore not been widely accepted. So we'd also expect that any "object-oriented points," "client-server points," or "Internet points" that might come along in the future would only represent additional efforts to make the CRUD metamodel into something that it simply is not. Not everything is Create, Read, Update, Delete against an underlying database.

Also, organizations have found metrics programs to be very valuable in tracking and controlling projects already under way. (Think of a visual analogy of an air traffic control tower, tracking "projects in the air.") Unfortunately, function points can be difficult entities to track midstream. In ongoing projects, function point users have reported difficulty in counting "what function points have been built so far." (On the other hand, it's relatively easy for a configuration management system to report how much code has been added as new and what code has been changed.) To fill that void, many organizations use alternate size measures for tracking, such as modules built, number of objects or programs under configuration management, number of integration builds, and yes, amount of code built and tested to date.

So whether function points have met their early promise of consistency in counting is up for debate. They may not have proved immune to counting controversy after all. Problems of "fit" with the CRUD metamodel, dealing with changed function points, midstream tracking, and the fact that no two counts seem to come up the same are not uncommon.

Nevertheless, function points serve a purpose as one type of size metric. And if your organization builds applications in the CRUD metamodel, it might pay to consider function points as a size metric.

Remember that any one measure has its limitations. Therefore, utilize multiple approaches where possible and note the sizing relationships (the proportionality) between them.

DEFECT METRICS: IS YOUR AIRCRAFT IN PROPER WORKING ORDER?

No metrics discussion would be complete without addressing the subject of software defects. It is the least-measured entity, yet the one that receives the worst press when software failures occur in the real world.

That should tell us something in and of itself. What we are seeing as a "law of nature" with software defects is their direct correlation with schedule pressure and bad planning. Stan Rifkin, principal of Master Systems Inc. in McLean, Virginia, summarized this once. He asked a particular organization what had been a driving factor behind a 10x reduction in defect levels. The response: effective planning.

This squarely points to the potential for management to influence software quality, both positively and negatively. A chapter title of a new software management handbook speaks volumes: "Managers Control Schedule, and Influence Results Thereby" [9]. In the chapter, the authors address how defects rise due to schedule compression. This aspect of defect behavior is independent of, and acts in addition to, the effects of tools, methodologies, and staff experience. The causality between software project deadlines and defect levels thus places the opportunity for good-quality software squarely in our management laps.

Much has been said on the subject of which defect metrics to use. Two categories deserve mention: (1) defects found from system testing through delivery (including severity), over time, and (2) defects reported in the first month (second, third, etc.) of service. The two categories are inextricably related. The latter will enable you to determine software reliability (say, in mean time to defect).
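
For the second category, here is a small sketch of how raw in-service defect reports translate into a mean-time-to-defect figure. The dates are invented purely for illustration.

    # Illustrative calculation of mean time to defect (MTTD) from the dates
    # on which in-service defects were reported.  Data is hypothetical.
    from datetime import date

    release = date(1998, 3, 1)
    defect_dates = [date(1998, 3, 4), date(1998, 3, 11),
                    date(1998, 4, 2), date(1998, 5, 20)]

    intervals = []        # days between consecutive defects,
    previous = release    # starting from the release date
    for reported in sorted(defect_dates):
        intervals.append((reported - previous).days)
        previous = reported

    mttd_days = sum(intervals) / len(intervals)
    print("mean time to defect: %.1f days" % mttd_days)   # 20.0 days here;
                                                          # a rising MTTD means improving reliability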

There is much to interpret from this raw data. At the heart of this is causal analysis; that is, finding out the drivers behind defect rates and defect densities and then doing something about them [4, 10].

TURNING ON ALL YOUR INSTRUMENTS

Now that measures exist, with the organization ready to move forward with the best of intentions, the question remains of how best to use the information at hand. In his newest book on information design, Visual Explanations [11], Dr. Edward Tufte, professor of statistics and information design at Yale University, opens by saying that assessments of change, dynamics, and cause and effect are at the heart of thinking and explanation. He further observes that proper arrangement in space of images, words, and numbers requires a strategy for presenting information.

Developers, managers, and users need critical information. When they don't get it, randomness takes over, leaving projects to drift into the dynamics set in motion by tight deadlines, poor estimates, and volatile requirements. This can be due to (1) information not being available in the first place or (2) existing information being misrepresented or buried in the details.

Two-Dimensional, Flatlands Views for a Multi-Dimensional Terrain Problem

A common information failure occurs when any of the four core measures are reduced to overly simplistic two-dimensional ratios, such as size over effort or size over time. Some examples of these are function points per person-month, lines of code per day, and so on. Interpretation of these results can be very misleading. Why?

Software project data has revealed that, taken alone, neither ratio has been reliable at giving the proper picture. For example, a project might expend a great deal of work effort to achieve a remarkable schedule. Yet, because schedule is not in the ratio of function points per work-month of effort, that value will be low and will suggest only a nominal achievement.

Moreover, ratios suggest linear relationships, whereas software development has repeatedly been shown to be nonlinear. This means that you cannot build a project in half the time with twice the people, although ratio math would allow this, in theory. If you've ever tried to bake a cake in 30 minutes at 650 degrees, you know that you'd get a burned brick. So if baking a cake doesn't behave linearly, what would suggest that the complex world of software development does, with all its team dynamics?
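
To put numbers on that nonlinearity, here is a small illustration based on the time-effort tradeoff published in the authors' other work [9, 10]. The functional form is taken from that work; the specific schedules below are hypothetical.

    # Under the software equation form in [9, 10], with size and process
    # productivity held fixed, effort grows with the fourth power of
    # schedule compression:  effort ~ size**3 / (productivity**3 * time**4)
    def effort_multiplier(nominal_months, compressed_months):
        """How much total effort grows when a schedule is compressed."""
        return (nominal_months / compressed_months) ** 4

    print(effort_multiplier(12, 9))   # ~3.2x the effort for a 25% shorter schedule
    print(effort_multiplier(12, 6))   # 16x -- "twice the people" comes nowhere close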

Using ratio math, we also risk portraying software development as a production activity, like banging out widgets per day on an assembly line or digging a ditch. But creating software is a research and development activity. It involves a team coming up with ideas, and a technical solution, for a problem domain. It involves trial and error, designing some, coding some, testing some, and then possibly reworking it to get it right. Two steps forward, one step back.

Indeed, when we examine curves for code progress, they never behave in a straight line! They follow S-shaped rates of progression, starting off slow during design, maxing out in the middle, and tapering way off during testing, when little coding is being done. What's happening at that point is a mad scramble to get the bugs out and rework the code against a looming deadline.
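
One simple curve with this shape is the cumulative Rayleigh form used for effort buildup in [9, 10]; whether your own code-complete data follows it exactly is an assumption to check against project history. A minimal sketch:

    # S-shaped cumulative progress: slow start, fastest build rate around
    # t_peak, long taper.  The Rayleigh form follows [9, 10]; treat the fit
    # to your own code-progress data as an assumption to verify.
    import math

    def cumulative_fraction(t, t_peak):
        """Fraction of the work complete at time t (rate peaks at t_peak)."""
        a = 1.0 / (2.0 * t_peak ** 2)
        return 1.0 - math.exp(-a * t * t)

    months, t_peak = 12, 5
    for month in range(0, months + 1, 2):
        done = 100 * cumulative_fraction(month, t_peak)
        print("month %2d: %5.1f%% of code built" % (month, done))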

So if you decide to examine ratios of functionality per unit effort, be sure to look simultaneously at functionality per unit time, and vice versa. Be aware that they behave inversely and can sometimes vary by factors of 5 or even 10. For example, when schedules are compressed by adding staff, effort goes way up. The ratio of functionality over effort goes way down. But functionality per unit time goes up, since time is shortened. This happens all by itself as staff size swells in response to schedule pressure; the behavior is independent of productivity and process/environmental issues. In addition, expect to see high defect rates when schedules are compressed in this manner. Therefore, don't forget to track defects.
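
A pair of hypothetical outcomes for the same 30,000-line system makes this inverse movement visible. The staffing and schedule numbers below are invented purely for illustration.

    # Two hypothetical deliveries of the same 30,000-line system: one on a
    # relaxed schedule, one compressed by adding staff.  Numbers are invented.
    size = 30000                                         # lines of code
    relaxed    = {"months": 15, "person_months": 60}
    compressed = {"months": 10, "person_months": 180}    # more people, more effort

    for label, p in (("relaxed", relaxed), ("compressed", compressed)):
        per_effort = size / p["person_months"]   # functionality per unit effort
        per_month  = size / p["months"]          # functionality per unit time
        print("%-10s %6.0f LOC/person-month %6.0f LOC/month" % (label, per_effort, per_month))

    # The compressed project looks far worse on LOC per person-month (167 vs. 500)
    # yet better on LOC per calendar month (3,000 vs. 2,000), even though nothing
    # about the team's underlying capability changed -- and its defect counts
    # would likely be higher, too.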

An additional idea is to convert the SEI core measures into useful calculated indexes or scales. One such index is the software process Productivity Index (PI). The PI presents less risk of dangerous linear interpretation. Best of all, its calculation is in the public domain [6, 10]. The PI concept is straightforward: higher values represent projects built with less time, less effort, and better quality. It is derived from the SEI minimum data set. Experiment with this and other indexes to see how they might work for you.
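
As a rough sketch of how such an index can be derived from the core measures, the calculation below follows the general form of the software equation in [6, 10]. The skills constant and the table that maps the raw figure onto the integer PI scale are published in those references; the values here are placeholders for illustration, not the official calibration.

    # A rough, uncalibrated sketch of deriving a process productivity figure
    # from size, time, and effort, following the general form in [6, 10].
    # The constant b and the mapping onto the integer PI scale are published
    # separately; values here are placeholders, not the official calibration.
    def process_productivity(size_units, effort_person_months, duration_months, b=0.39):
        effort_py = effort_person_months / 12.0      # person-years
        duration_y = duration_months / 12.0          # years
        return size_units / ((effort_py / b) ** (1.0 / 3.0) * duration_y ** (4.0 / 3.0))

    # Example: 40,000 lines built in 12 months with 60 person-months of effort.
    raw = process_productivity(40000, 60, 12)
    print(round(raw))   # higher means less time and effort (and better quality)
                        # for the size delivered; see [6, 10] for the PI table.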

Getting a Clear Navigational Picture

Misinformation may also arise due to political pressures, which result in information being suppressed, deliberately altered, or misrepresented. Tom DeMarco has called the latter "Limbaughing the data," which he defines as, "To choose selectively from a body of data those items that confirm a desired result and never mention any that might be construed to confirm the opposite" [3].

This is a challenge about the politics of information (not something to "Rush" into). It is an entirely separate subject deserving far more treatment than is possible here, so let's set it aside for now.

Let's assume that good information is available. Having metrics truly serve their purpose requires solving the challenge of information design. To that end, a picture is worth a thousand words. Busy managers already at saturation need a visual, high-bandwidth message that transfers large amounts of project and process information quickly and to the point with a series of pictures. Where possible, try to craft metrics into visual media that communicate your message effectively.

Some top-level questions include:

  • Are the most critical projects on track? If not, then where are they headed?
  • What actions do we need to take? (Remember that earlier we discussed asking the question, "What will we do with metrics?")
  • Do our project estimates suggest that we have a high chance of succeeding, or a high likelihood of failing, if we go that route?

Some other process-related questions include:

  • What is our current capability?
  • How do we compare in terms of speed, efficiency, and quality?
  • Is our productivity improving?

In both areas, there is a great deal at stake in the answers, and thus there is often the risk of "Limbaughing the data." However, assuming we have all great intentions, communicating information effectively becomes the metrics management challenge. Tables of numbers, simple two-dimensional ratios of productivity, and chart clutter lose the message.

So the issue becomes one of making evidence and facts "visual," by presenting information with images, color, motion, and so on. As Tufte observes, the essence of quantitative thinking addresses the question, "Compared to what?" Invalid comparisons must therefore be avoided like the plague. And to understand the impact of actions, managers need metrics that reveal cause and effect: "If we accept this deadline, and we take on building this amount of functionality, how many bugs might still be in the system on the date we'd like to ship?" (Demanding no bugs at all is often unrealistic in tight deadline situations.)

Risk management answers questions such as, "Do we have the odds against us, or for us, when it comes to the deadline? When it comes to having enough people? When it comes to the quality at delivery?"

When the answers to these questions are all negative, you could be betting your job and your company, as well as your customer's business. You are pursuing a path of failure, not success. Rethink the strategy.

CASE STUDIES FROM FELLOW FREQUENT FLYERS

Reducing Rework

A major regional Bell telecom company with over 1,500 developers identified an ad hoc software process as the underlying cause of software rework, which drives up costs and lengthens schedules. Its goal of process improvement therefore aimed to reduce rework. Effective use of software metrics was key to that end, particularly better project estimation and control.

Over 100 project managers have been trained in advanced estimation techniques, using models calibrated with the company's own past project data. Senior managers attend regular "learning events" that deal with maximizing use of information for risk management. The CIO has mandated that the company's top 40+ projects all keep track of the four core metrics, particularly size and defects. Information on the top projects is assembled monthly on a "Management War Board." This high-order visual is the basis of a "Software Control Tower," with its automated "virtual radar screens" to identify position, direction, and expected arrival for all incoming software project "flights."

For outsourced projects, vendor claims are being mapped to implied productivity metrics, providing a starting point for contract negotiations. The company is specifying defect metrics and reliability goals, in addition to targets for delivered functionality at reduced costs.

Moving from "Red Light" to "Green Light"

A real estate management software developer stamped out several "software runaways" by getting management indicators for all projects under way. The organization has put size metrics into place for client-server and OO projects built in languages such as Visual C++ and Visual Basic. Independent assessment of all critical projects characterizes whether they are in a "red light," "yellow light," or "green light" condition. Actions are taken on all projects that are yellow or red. These assessments are conducted on a monthly basis for the executive committee. Failing to submit data for analysis is not an option for any project manager.

On projects that are outsourced for competitive bid, all prospective contractors answering requests for proposal (RFPs) are asked to provide size, time, effort, and defect metrics on three or more recently completed projects. Performance indexes are calculated and superimposed on industry trends. The organization asks that project proposals include size estimates in addition to proposed schedules and project costs. Bids are compared against past projects and industry benchmarks to assess their validity. Contractors with the most credible proposals are chosen. They may or may not be the ones with the lowest bids.

Recording and Learning from History

A Fortune 100 avionics system division with over 10,000 engineers uses its own company intranet to serve as a repository and communication vehicle for project metrics data. This division is one of two leading worldwide developers of commercial and general avionics systems that fly the planes built by Boeing, Airbus, and so on, as well as military aircraft built by Lockheed Martin, General Dynamics, Northrop Grumman, and others.

Project data (including core metrics) and qualitative factors are logged at a post-implementation review. This review occurs when the system is delivered to the customer. At that time, all information is logged while it is still fresh in the minds of the team members. A metrics group serves the division, providing data collection assistance as well as project estimation and control support.

Over 50 historic projects populate the division's database, and it's still growing. Historic data is used to calibrate estimation models to forecast scenarios for new projects and enhancement releases. Internal experts have been certified to provide in-house company training courses on project estimation and control.

Management has articulated goals for the metrics program, which include reducing cycle time, maintaining quality, providing evidence and credibility in bids for new business, and achieving higher CMM maturity levels.

SOME WORDS FROM THE CAPTAIN AS WE APPROACH OUR DESTINATION

Keep Perspective

In a recent article discussing component-based development and object-oriented design, software reuse expert Paul Bassett said that organizations are at risk of being caught in "the forest [that] has grown unduly bushy trees." The same risk applies when identifying and using software metrics.

Keep it simple. Start with the four core metrics of size, time, effort, and defects. Whatever you use for units, do it consistently. Don't reinvent the wheel -- start with established standards such as those articulated by the SEI. Once you have this information in place (for several projects, one hopes), you'll get a bead on what Joe Kolinger of Pacific Bell calls "knowing your capability."

Take it one day at a time. A metrics program is not a "boil the ocean" project, to borrow a phrase from Tim Lister. Start with a small team of dedicated individuals. Maybe it comes down to one or two people being metrics champions at first. Get data on a few projects to start. Look honestly and without fear when taking an inventory of software projects, warts and all.

Ask yourself, "What do we want to do with metrics?" The answer will likely translate down to managing commitments. What information will you collect? And how will you communicate it?

Obtain Senior Management Support

Metrics confront management issues, such as negotiation of promised functionality and the technical, cost, and quality viability of schedule deadlines. Successful metrics programs are the ones in which a visionary at the top supports the work of project managers and division heads.

If senior management's vision is fuzzy, uninformed, or misinformed, then the captain of the plane has poor instruments and a mucky windshield. Clarify this situation by dealing with management's bottom-line issues. Some of these include managing schedule risk, quantifying the organization's capability, and using metrics to win new business.

A Picture Is Worth a Thousand Words

Converting your numbers into pictures is key to getting out the message. Innovative use of images, color, and motion makes for better communication with less noise. So far, no one has been able to emulate Mr. Spock's "Vulcan Mind Meld," so until then, we have to figure out better ways of transferring high-bandwidth software information among teams, managers, and end users. Edward Tufte's theories on information design [11] are the wave of the future. Read his stuff.

And be careful about simple ratios of product over effort or product over time. There is always the danger that such ratios will show only partial dimensions; they are truly the two-dimensional flatlands in a multi-dimensional world.

[Cartoon reprinted by permission of United Feature Syndicate.]

Beware the Hawthorne Effect and Heisenberg Uncertainty Principle

It's been said that to observe an event is to influence its outcome. Criticism of metrics can be justified when measures are used improperly. It might indeed be possible for bad metrics and bad data to drive out good metrics and good data. Uninformed people might misinterpret numbers. Hidden agendas can operate within an organization. And so on.

But as Tom DeMarco says, there is a bright side, in that problems like these are indeed treatable with methods that we, as members of the software metrics community, are fully competent to apply. DeMarco gives valuable advice in this regard [3], including measuring benefit and measuring for discovery. Other useful ideas are described by Lawrence Putnam and Ware Myers in [9].

By all means measure to make good commitments. Measure to manage risk. Risk is not bad; it is good. Taking unmitigated risks, not managing them properly, and stacking the deck against oneself is bad.

Measure because of the Trucker's Maxim: "Behind every bouncing ball is a running child." This notion applies to software projects as well. Anticipate risk and be prepared. Software metrics should be applied to keep our projects on the side of success. In some arenas, failure is not an option.

REFERENCES

1. Bassett, Paul G. Framing Software Reuse. Upper Saddle River, NJ: Prentice Hall, 1997.

2. Carleton, Anita, Robert Park, and Wolfhart Goethert. "The SEI Core Measures: Background Information and Recommendations for Use and Implementation." The Journal of the Quality Assurance Institute (July 1994).

3. DeMarco, Tom. Why Does Software Cost So Much, and Other Puzzles of the Information Age. New York: Dorset House, 1995.

4. Grady, Robert B. Practical Software Metrics for Project Management and Process Improvement. Englewood Cliffs, NJ: Prentice Hall, 1992.

5. Johnson, Jim. "Chaos: The Dollar Drain of IT Project Failures." Application Development Trends (January 1995), pp. 41-47.

6. Mah, Michael, and Lawrence Putnam. "Is There a Real Measure for Software Productivity?" Programmer's Update (June 1990).

7. Moser, Simon, and Oscar Nierstrasz. "The Effect of Object-Oriented Frameworks on Developer Productivity." IEEE Computer (September 1996), pp. 45-51.

8. Putnam, Lawrence. Software Cost Estimating and Lifecycle Control: Getting the Software Numbers. Los Alamitos, CA: IEEE Computer Society Press, 1980.

9. Putnam, Lawrence, and Ware Myers. Executive Briefing: Controlling Software Development. Los Alamitos, CA: IEEE Computer Society Press, 1996.

10. Putnam, Lawrence, and Ware Myers. Industrial Strength Software. Los Alamitos, CA: IEEE Computer Society Press, 1997.

11. Tufte, Edward. Visual Explanations: Images and Quantities, Evidence and Narrative. Cheshire, CT: Graphics Press, 1997.

Lawrence H. Putnam, Sr., is president of QSM, Inc., a software metrics firm that provides tools, education, and consulting for software management. QSM is the developer of the SLIM Tool Suite: SLIM-Estimate, SLIM-Control, SLIM-DataManager, SLIM-MasterPlan, and SLIM-Metrics, leading-edge software measurement and estimation modeling tools used by Fortune 500 companies and government agencies worldwide. QSM methods are used for benchmarking, risk management, and "runaway" project prevention for both outsourced and in-house software development.

Mr. Putnam can be reached at:

Quantitative Software Management
2000 Corporate Ridge
McLean, VA 22102

Phone: 703 790 0055; fax 703 749 3795
E-mail: larry_putnam_sr@qsm.com 
Web site: http://www.qsm.com.

Mr. Mah can be reached at QSM Associates, Inc., Clock Tower Business Park, 75 South Church Street, Pittsfield, MA 01201 (phone: 413 499 0988; fax: 413 447 7322; e-mail: michaelm@qsma.com; Web: http://www.qsma.com/).


Reprinted from Vol. 10, no. 11 of AMERICAN PROGRAMMER. Copyright © 1998 by Cutter Information Corp., 37 Broadway, Arlington, MA 02174, USA. Phone: (781) 648-8702 or (800) 964-8702, Fax: (781) 648-1950, E-mail: info@cutter.com, Web site: www.cutter.com/itgroup/. All rights reserved.