When the Honorable Ellen M. Lord, Undersecretary of Defense for Acquisition & Sustainment (USD/A&S), told the Senate Armed Services Committee on Dec. 7 that she intends to demand a higher level of accountability from program managers, you could feel mixed emotions from DoD acquisition professionals. Many are applauding the vocal prioritization of accountability. Struggling acquisition program managers and support contractors, however, are likely feeling a more focused target on their backs. There will certainly be other major changes from the reorganization of the former Acquisition, Technology and Logistics (AT&L) office into two new USD-level offices, USD/A&S and Research & Engineering (USD/R&E), each eager to show its value in improving the DoD acquisition process. In particular, as the DoD continues to focus on business transformation priorities and on acquiring effective defense business systems to support them, I’d like to offer some firsthand observations suggesting there remains a lack of consistency in how we manage that process.
Accountability Requires Consistency
The word ‘accountability’ often comes up in discussions about reforming how the Pentagon oversees DoD acquisition programs and the vendors who support them. The pundits say we must increase accountability to reduce the risk of program failure. While I appreciate that sentiment and agree with its importance, I can’t help but wonder why we haven’t already achieved accountability, given all the milestone reviews, assessments, and reporting currently mandated by our acquisition guidance and requirements. My experience leads me to believe that one major missing factor is consistency. Specifically, there is a lack of consistency in the application, approach, and results of the processes that exist to reduce risk and increase accountability in program acquisition. Programs should not so quickly be given waivers or treated differently for convenience, as is presently seen across the landscape, nor should we accept program status quad charts showing all “green” unless they are based on actual program performance data. We need to apply consistent, repeatable processes to acquisition programs if we are to measure and benchmark progress across the DoD. Only then, when measures of success are consistent and horizontal across the entire acquisition portfolio, can we legitimately hold program managers accountable.
Data-Driven Approach to Acquisition Program Management
Using quantitative management methods is not new, but we have certainly seen a resurgence in their use and appreciation in recent years. The progressive use of business analytics across diverse industrial sectors, including DoD programs, has ushered in an era of expectation for data-driven insights, with which some leaders and program managers are still struggling to keep up. You would think the DoD’s position as an innovator of many technologies over the last few decades would put it in “pole position” for this wave. But as we watch various DoD acquisition programs still struggle to meet cost and schedule milestones, what we are seeing is an obvious opportunity to explore new or updated program management approaches. DoD initiatives to reach out to Silicon Valley for software/IT best practices are one of several attempts to positively affect the Pentagon’s acquisition process, as leaders admit it’s time to look to others for new ideas. I would not say the DoD doesn’t currently use data-driven, quantitative methods; I have had the privilege of working with some progressive DoD leaders who are leveraging this approach to change the way the DoD does business. I would say, however, that a consistent application across all programs is lacking, and that the current environment provides an opportunity to achieve the horizontal data benchmarking capability I believe is critical to program accountability.
Using a proven, data-driven approach provides many benefits, including:
- An information-centric approach to program management that aligns with acquisition reform objectives,
- Consistent, credible and defensible information throughout the acquisition life cycle to enable more confident decision-making,
- Risk reduction through more accurate estimates and assessment of software project cost, size, effort and schedule during initial planning, re-planning activities, and ongoing monitoring of actual-to-planned project status, and
- Quantitative insight needed to confidently make programmatic and contractual decisions earlier, weigh the impacts of proposed requirements changes, and identify requirements risk early.
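To make the estimation benefit above concrete, here is a minimal sketch of the kind of parametric software estimate such an approach relies on, using the published Basic COCOMO model (Boehm, 1981) for an “organic” project. The model and its coefficients are standard and public; the 32 KLOC input is a hypothetical figure for illustration, not from any program discussed here.

```python
# Basic COCOMO (Boehm, 1981) estimate for an "organic" software project.
# Coefficients a, b, c, d are the published organic-mode values;
# the KLOC input below is purely illustrative.

def cocomo_basic(kloc: float, a: float = 2.4, b: float = 1.05,
                 c: float = 2.5, d: float = 0.38) -> tuple[float, float]:
    """Return (effort in person-months, schedule in calendar months)."""
    effort = a * kloc ** b       # person-months of development effort
    schedule = c * effort ** d   # calendar months of development time
    return effort, schedule

# Hypothetical 32 KLOC system: roughly 91 person-months over ~14 months.
effort, months = cocomo_basic(32)
```

An estimate like this is only a starting point; the value comes from comparing it against vendor proposals and re-running it as actual size and productivity data accumulate.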
These benefits are realized by applying data-driven solutions throughout the life cycle, such as:
- Appropriate data collection activities utilizing approaches like Goal-Question-Metric to ensure we are focused on relevant data, rather than drills that collect data just for the sake of saying we are doing it,
- Benchmarking activities that compare performance with DoD and commercial systems,
- Statistical control practices and tools to continuously re-estimate and re-forecast cost, schedule and performance expectations, and
- Evaluation of estimates and proposals from vendors and critical stakeholders throughout the life cycle, leveraging the insight gained from our own data activities.
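The re-forecasting activity above can be sketched with the standard earned value management (EVM) formulas, which recompute cost and schedule indices and an estimate at completion from status data. The formulas are the conventional EVM ones, not specific to any program here, and the dollar figures are hypothetical.

```python
# Earned-value style re-forecast: given budget at completion (BAC),
# planned value (PV), earned value (EV), and actual cost (AC) to date,
# recompute performance indices and an estimate at completion (EAC).
# All dollar figures below are hypothetical, for illustration only.

def evm_reforecast(bac: float, pv: float, ev: float, ac: float) -> dict:
    cpi = ev / ac                 # cost performance index (<1 = over cost)
    spi = ev / pv                 # schedule performance index (<1 = behind)
    eac = ac + (bac - ev) / cpi   # cost already spent plus remaining work
                                  # burned at the observed efficiency
    return {"cpi": cpi, "spi": spi, "eac": eac}

status = evm_reforecast(bac=10_000_000, pv=4_000_000,
                        ev=3_500_000, ac=4_200_000)
# cpi and spi both below 1 here flag cost and schedule trouble early,
# and eac projects the overrun (~$12M against a $10M budget).
```

Run monthly against contractor-reported actuals, this kind of calculation replaces an all-“green” quad chart with a defensible, data-based projection.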
Keep Fighting the Fight
One of the main challenges I see is that many of these methods are still commonly viewed as just another “black box” activity by a large part of the population. Whether due to ignorance or a negative experience with data analysis applied to program management, some simply don’t accept the value of quantitative insight into a program’s health and progress. Even using the approach as just another indicator, complementing current status quo management methods, would add value. I have spent the last 10 years working to transparently evangelize the value of these methods in the most diverse environments. Yet for every organization or person I “convert” into a believer, I find two more skeptics, and the cycle continues. Only when we evolve the DoD acquisition culture to not only appreciate but necessitate a place for data-driven acquisition management methods across all programs can we begin to get serious about a consistent approach to accountability.