The 14-Point DCMA Schedule Assessment
There are numerous reasons why a schedule should meet certain quality criteria; have a look at our blogpost “Schedule Quality – Technical Requirements” for the details. One of the most widespread tests to assess the (technical) quality of a schedule is the DCMA 14-Point Schedule Assessment.
In 2005, the US Defense Contract Management Agency trained personnel to assess the quality of – amongst other things – contractors’ schedules based on 14 criteria. Today those criteria have become something of an industry standard and are incorporated in software tools such as Deltek’s Acumen Fuse and Oracle’s Primavera P6 EPPM. Although the test has flaws and may spark some discussion, the absence of other widely accepted tests has led to it ending up in many schedule requirements.
The test checks a schedule against 14 criteria and, for most of them, defines a threshold that must be met to pass. The scores calculated for a criterion are mostly expressed as a percentage. So, to start, the total number of ‘normal activities’ or the total number of relationships has to be calculated as a base for comparison.
In this blog, we’ll go through each of the criteria with a short explanation, following the version of November 2009 (yes, it’s that old). So here we go.
1. Missing Logic
The number of tasks without predecessors and/or successors should not exceed 5% of the total number of tasks.
- Dangling Activities
The number of activities with only SS-relationships or only FF-relationships. This is poor practice because an activity’s finish (or start) is left ‘dangling’, so delays are not propagated correctly through the network. The test, however, doesn’t define a pass/fail threshold for this specific case.
Having the right relationships and the right number of relationships in the schedule is of course important. To be able to produce valuable outcomes of your analysis (risk, delay, what-if, …), you need a network that is a representation of the planned work on the project. Broken logic will make these analyses worthless.
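To make the first check concrete, here is a minimal sketch that flags tasks without predecessors and/or successors and compares their share against the 5% threshold. The `Task` structure and its field names are invented for illustration and don’t correspond to any particular scheduling tool’s data model.

```python
# Illustrative sketch of the "Missing Logic" check -- the Task model below
# is a made-up minimal structure, not a real scheduling-tool API.
from dataclasses import dataclass, field

@dataclass
class Task:
    id: str
    predecessors: list = field(default_factory=list)  # e.g. [("A", "FS")]
    successors: list = field(default_factory=list)

def missing_logic_ratio(tasks):
    """Fraction of tasks lacking a predecessor and/or a successor."""
    missing = [t for t in tasks if not t.predecessors or not t.successors]
    return len(missing) / len(tasks)

tasks = [
    Task("A", successors=[("B", "FS")]),                # no predecessor
    Task("B", predecessors=[("A", "FS")], successors=[("C", "FS")]),
    Task("C", predecessors=[("B", "FS")]),              # no successor
    Task("D"),                                          # fully disconnected
]
print(f"{missing_logic_ratio(tasks):.0%} of tasks have missing logic")  # 75%
```

In practice, the project’s start and finish milestones legitimately lack a predecessor or successor and are usually excluded before applying the 5% threshold.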
2. Leads
No leads are to be used in the schedule: the number of tasks with a lead (= negative lag) from a predecessor should be 0.
Leads are logically somewhat nonsensical. If you have a FS relationship with a lead of 2 weeks, you actually say that the successor can start 2 weeks before the predecessor is completed. Of course, you don’t know when that will be the case and therefore you don’t know with enough certainty when you should start the successor. Compare it to asking someone when you have to get off the bus to go to the museum and receiving the answer: “2 stops before I do”.
3. Lags
The total number of tasks with a lag from a predecessor should not exceed 5%.
We are not as strict on the use of lags as we are on leads, but they hide information, which is unfavourable for most parties, and they make analysis of the critical path more difficult. Just replace the lag with an actual activity with the duration of the lag: this increases transparency.
4. Relationship Types
The number of predecessor relationships of the FS-type should be at least 90%.
Some consensus exists that the use of SF-type relationships should be avoided at all times, but no separate evaluation of this type is included in the DCMA test. Limited use of SS and FF relationship types is allowed, but try to include both predecessors and successors for activities with those types to ensure a closed network. SF-type relationships really mess up your schedule: the ‘predecessor’ is actually the later of the two activities, so a delay of the earlier activity does not propagate to it.
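The three relationship-level checks above – no leads, at most 5% lags, at least 90% FS – can be sketched together over a flat list of relationships. The tuple format below is an assumption made for illustration, not any tool’s export format.

```python
# Relationship-level checks on a hypothetical relationship list:
# (predecessor, successor, type, lag in days); a negative lag is a lead.
relationships = [
    ("A", "B", "FS", 0),
    ("B", "C", "FS", 0),
    ("C", "D", "FS", 5),    # lag of 5 days
    ("D", "E", "SS", 0),
    ("E", "F", "FS", -2),   # lead: should not occur at all
]

total = len(relationships)
leads = sum(1 for *_, lag in relationships if lag < 0)
lags = sum(1 for *_, lag in relationships if lag > 0)
fs = sum(1 for _, _, typ, _ in relationships if typ == "FS")

print("Leads:", leads, "(must be 0)")
print(f"Lags: {lags / total:.0%} (max 5%)")
print(f"FS share: {fs / total:.0%} (min 90%)")
```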
5. Hard Constraints
The number of tasks with a hard constraint cannot exceed 5%.
No proper definition of ‘hard constraint’ is given. We can assume that for Primavera P6, this criterion applies to Mandatory Start and Mandatory Finish; in that case the allowance should be 0% in our opinion. There is some confusion about what exactly hard and soft constraints are, and which constraints belong to which category; this also depends on the software. It is safe to say that constraints that ‘overwrite’ the logic of the schedule are to be avoided: you want to keep your schedule flexible and dynamic (for useful outcomes of your analyses), and those constraints reduce just that.
6. High Float
The number of tasks with a total float of more than 44 working days should not exceed 5%.
High float is regarded in this test as an indication of broken logic. If a large amount of float is present, it is advised to check whether the logic is sound and complete. If it is, the project might be executed in a shorter period, which requires a revision of the completion dates.
7. Negative Float
The number of tasks with negative total float should be 0%.
Negative float indicates that the present constraints cannot be met. A schedule is therefore not acceptable when one or more activities have negative float. For the different causes of negative float in Primavera P6, we refer to the blogpost ‘Why do I have negative float?’.
8. High Duration
The number of tasks with a duration of more than 44 working days should not exceed 5%.
This criterion addresses the level of detail of the schedule. The 44 working days are an approximation of ‘2 months’ on a 5-day-per-week calendar. The test allows some high-level activities with a duration of 2 or more months, but 95% of the tasks have to be of a shorter duration.
9. Invalid Dates
- Invalid Forecast Dates
No forecast dates should be earlier than the data date
- Invalid Actual Dates
No actual dates should be later than the data date
Forecast dates are dates ‘in the future’ and should therefore be to the right of the data date; actual dates are in the past, to the left of the data date.
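A minimal sketch of this check: forecast dates before the data date and actual dates after it are flagged. The task dictionaries are an assumed structure for illustration, not a tool-specific API.

```python
# Hypothetical invalid-dates check against a data date.
from datetime import date

data_date = date(2024, 6, 1)
tasks = [
    {"id": "A", "actual_finish": date(2024, 5, 20), "forecast_finish": None},
    {"id": "B", "actual_finish": None, "forecast_finish": date(2024, 5, 28)},  # forecast in the past
    {"id": "C", "actual_finish": date(2024, 6, 10), "forecast_finish": None},  # actual in the future
]

invalid_forecast = [t["id"] for t in tasks
                    if t["forecast_finish"] and t["forecast_finish"] < data_date]
invalid_actual = [t["id"] for t in tasks
                  if t["actual_finish"] and t["actual_finish"] > data_date]

print("Invalid forecast dates:", invalid_forecast)  # ['B']
print("Invalid actual dates:", invalid_actual)      # ['C']
```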
10. Resources
The percentage of tasks with a duration of at least 1 day that have resources assigned should be 100%. However, not complying doesn’t imply failing the criterion according to the DCMA, meaning that resource loading is not mandatory.
This criterion tests the level of resource loading under the assumption that if one has chosen to resource-load the schedule, the complete schedule should be resource loaded.
11. Missed Tasks
The number of tasks whose actual finish date is later than their baseline planned finish date should not exceed 5%.
This is actually a project performance test rather than a schedule quality test. It is assumed that performance is insufficient if more than 5% of completed activities have an actual completion date later than their baseline completion date. Rebaselining might be considered, or additional performance measures might have to be evaluated for impact.
12. Critical Path Test
Intentionally adding 600 days to an activity on the critical path should extend the project completion by approximately the same number of days.
If different calendars are used in the schedule, one should take into consideration the impact this can have on the result.
If the project completion date is not extended by the same number of days as were added to the critical path, one might assume broken logic. Bear in mind that using different calendars can also cause a different delay of the project completion; e.g. seasonal restrictions can have a profound impact and make this test much harder to interpret.
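Under the simplifying assumptions of a single calendar and FS-only logic without lags, the critical path test can be illustrated with a toy forward pass: inflate one critical activity by 600 days and verify that the project finish moves by exactly 600 days.

```python
# Toy forward-pass sketch of the critical path test -- single calendar,
# FS relationships only, no lags; durations in working days.
def project_finish(durations, predecessors):
    """Earliest project finish via a simple recursive forward pass."""
    finish = {}
    def f(task):
        if task not in finish:
            # earliest start = latest finish among predecessors (0 if none)
            start = max((f(p) for p in predecessors.get(task, [])), default=0)
            finish[task] = start + durations[task]
        return finish[task]
    return max(f(t) for t in durations)

durations = {"A": 10, "B": 20, "C": 5}
predecessors = {"B": ["A"], "C": ["A"]}   # A -> B and A -> C; A-B is critical

baseline = project_finish(durations, predecessors)
durations["B"] += 600                      # add 600 days to a critical activity
delayed = project_finish(durations, predecessors)
print(delayed - baseline)                  # 600 -> logic intact
```

If the difference came out smaller than 600, some of the added duration would be absorbed by a constraint or a logic gap, which is exactly what this criterion is meant to expose.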
13. Critical Path Length Index (CPLI)
The CPLI should equal or exceed 95%. The CPLI is calculated by dividing the sum of the length of the critical path and the total float of that path by the length of the critical path.
This is a test to assess the realism of the schedule. However, if there is no negative float present in the schedule, the CPLI cannot be less than 100%. So, this tests the same thing as criterion 7.
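A worked example of the CPLI formula as described above, with illustrative numbers:

```python
# CPLI = (critical path length + total float) / critical path length
critical_path_length = 200   # working days (illustrative)
total_float = -20            # negative float from a missed constraint

cpli = (critical_path_length + total_float) / critical_path_length
print(cpli)  # 0.9 -> below the 0.95 threshold
```

With zero or positive float on the critical path, the ratio is 1.0 or higher, which is why the criterion only bites when negative float is present.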
14. Baseline Execution Index (BEI)
The BEI should equal or exceed 95%. The BEI is the ratio of the number of tasks that have been completed to the number of tasks that were planned to be completed in the baseline.
This is also a project performance assessment. If the gap between the number of planned and completed activities is too large, rebaselining might be considered, or additional performance measures might have to be evaluated for impact.
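A worked example of the BEI formula with illustrative numbers:

```python
# BEI = tasks actually completed / tasks planned (baselined) to be complete
completed = 90
planned = 100

bei = completed / planned
print(bei)  # 0.9 -> below the 0.95 threshold
```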
Value and criticism
The test has great value and some flaws.
Some points of criticism: the use of non-standard terminology and the lack of clear definitions; the arbitrarily defined pass/fail percentages; and the use of fixed percentages as threshold values (making a schedule, for example, only fail towards the end of the project, when the majority of the non-FS relationships were defined). The mix of quality and performance criteria and the lack of nuance in the gravity of quality infringements are also subject to criticism.
These points of criticism seem legitimate to us. However, in our experience, the schedules that score well on the test are the ones of better quality. Schedulers who are prompted to comply with the criteria will be attentive to the quality aspects, and if a schedule fails some criteria, there might be an acceptable explanation. In the end, the assessment was developed to identify potential problem areas within a contractor’s schedule, and this goal seems to be met.