Frankly, I could use some help.


Last year, the AACEi (Association for the Advancement of Cost Engineering International) charged me with chairing the process of developing a recommended practice on the subject of Forecasting Based Upon Trend Analyses. This has been designated as RP-50R-06. The paper and slideshow presentation for AACEi's June 27-30 Meeting are in, but many questions remain.

 

 One area I need help with is determining how new algorithms will distinguish trends and calculate forecasts. You'll see how complicated this can get if you read further down in this blog post.

 

 But first some preliminaries.


The Gantt chart (or bar chart) was first introduced in 1910. It was a vast improvement over the "TO DO" lists that were the previous state of the art, but it required a great deal of work to recalculate and redraw if even a minor change to the plan was contemplated or an actual duration varied from that initially estimated.


For example, should installation of an underground pipe take four weeks rather than two, then depending upon where it falls in the project logic, this may have no impact upon project completion or may delay completion by up to two weeks. If project progress is controlled by limited resources, the impact may be even greater. CPM provides an automated means to quickly recalculate the schedule based upon past actual data.
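To make that concrete, here is a minimal sketch of the forward-pass recalculation that CPM software performs for us. The activity names, durations, and logic ties are hypothetical illustrations, not any particular project.

```python
# A minimal sketch of what CPM automates: a forward pass that recomputes
# early finish dates when an actual duration replaces an estimate.
# Activity names, durations, and logic are hypothetical illustrations.

activities = {
    # name: (duration_in_weeks, [predecessors]); listed in topological order
    "excavate":     (1, []),
    "install_pipe": (2, ["excavate"]),        # the estimate; try 4 to see the impact
    "backfill":     (1, ["install_pipe"]),
    "pave_road":    (3, ["excavate"]),        # a parallel path
    "final_grade":  (1, ["backfill", "pave_road"]),
}

def forward_pass(acts):
    """Return each activity's early finish (weeks from project start)."""
    early_finish = {}
    for name, (duration, preds) in acts.items():
        early_start = max((early_finish[p] for p in preds), default=0)
        early_finish[name] = early_start + duration
    return early_finish

print(forward_pass(activities))
# With install_pipe at 2 weeks, final_grade finishes at week 5; at 4 weeks,
# the pipe chain overtakes the paving path and completion slips to week 7.
```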


There is a cost. The effort of recording the plan in the proper format is greater than that required to prepare a bar chart. The discipline of preparing a proper CPM precludes "hopeful" or "sloppy" thinking: each activity (other than the first) must follow another based upon physical need (such as gravity or perhaps the dictates of a contract specification), and each activity (other than the last) must be followed by another based upon similar need.


Only then do we consider the optional step of adding additional restraints to allocate limited resources (or allowing the computer to do this for us). The mental effort to follow this regimen is the aspect that separates us construction professionals from those of many other fields.


We also must do much of the analysis on our own. If actual events vary only slightly from those initially estimated, we may rightfully consider the variance an aberration and recalculate based upon our other initial estimates. If a subcontractor is showing a trend of slow performance, perhaps we can demand more resources, overtime, or other corrective measures.


Or perhaps not. At some point we need to see what the impact will be if this subcontractor continues this trend, and then consider what other members of the project team may do to mitigate it. Perhaps another subcontractor can accelerate to make up for lost time; whether to back-charge the first is then a matter of contract and business judgment.


Ascertaining trends by subcontractor, craft or other resource, location on site, or other commonality is likewise left to the intuition or grunt work of the busy superintendent or project scheduler.
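As a small illustration of that grunt work, here is a sketch (with hypothetical codes and sample values) that rolls completed activities up by an activity code and computes each group's actual-to-planned duration ratio, exposing who is trending slow.

```python
# A sketch of the grouping chore: group completed activities by an activity
# code (a subcontractor here, but craft or site area work the same way)
# and compute each group's actual-to-planned duration ratio.
# The codes and sample values are hypothetical.

from collections import defaultdict

completed = [
    # (activity_code, planned_duration, actual_duration)
    ("SUB-A", 10, 14),
    ("SUB-A",  5,  7),
    ("SUB-B",  8,  8),
    ("SUB-B", 12, 11),
]

def performance_by_code(records):
    planned, actual = defaultdict(float), defaultdict(float)
    for code, plan, act in records:
        planned[code] += plan
        actual[code] += act
    return {code: actual[code] / planned[code] for code in planned}

print(performance_by_code(completed))
# {'SUB-A': 1.4, 'SUB-B': 0.95} -- SUB-A is trending 40% slow
```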


But while the giant mainframe computers of the 1950s were limited, current laptops have as-yet-untapped capacity for mining past actual data for trends and providing alternate forecasts based thereon. All without ANY additional effort by the project team. What we do need, however, is to carefully determine how new algorithms will distinguish trends and calculate these forecasts.


Questions arise. Should trend measurement begin when any one activity code (subcontractor, location, etc.) reaches a trigger of 10%, 20%, or 30% complete? Should the trend be based equally upon all inputs, or should the more recent actuals be weighted more heavily? If weighted, should a linear or an exponential factor be used?
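To see how much these choices matter, here is a hedged sketch comparing unweighted, linearly weighted, and exponentially weighted trend factors on the same hypothetical series of actual/planned ratios. The 20% trigger and the decay constant alpha are assumptions for illustration, not recommendations.

```python
# A sketch of the weighting question: given a time-ordered series of
# actual/planned ratios for one activity code, compute the trend factor
# unweighted, linearly weighted, and exponentially weighted. The trigger,
# the decay constant alpha, and the sample ratios are all assumptions
# to be settled by the recommended practice.

def enough_progress(pct_complete, trigger=0.20):
    # whether the trigger should be 10%, 20%, or 30% is an open question
    return pct_complete >= trigger

def trend_factor(ratios, scheme="none", alpha=0.5):
    n = len(ratios)
    if scheme == "linear":
        weights = [i + 1 for i in range(n)]               # most recent weighs most
    elif scheme == "exponential":
        weights = [(1 - alpha) ** (n - 1 - i) for i in range(n)]
    else:
        weights = [1] * n
    return sum(w * r for w, r in zip(weights, ratios)) / sum(weights)

ratios = [1.0, 1.1, 1.3, 1.5]    # hypothetical: performance steadily worsening
if enough_progress(0.25):
    for scheme in ("none", "linear", "exponential"):
        print(scheme, round(trend_factor(ratios, scheme), 3))
# none 1.225, linear 1.31, exponential 1.36 -- the weighting scheme alone
# moves the forecast
```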


Perhaps we must consider multiple regression analysis based not only upon the one activity code, but upon all concurrent work on the project. And need we stop at trending actual versus estimated durations? Why not also consider trends relating to overlapping activities (leads/lags), resource availability, weather, and other calendar considerations?
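Here is a sketch of what such a regression might look like, assuming numpy is available. The predictor columns (subcontractor, concurrent activity count, winter work) and the sample data are placeholders for whatever factors the recommended practice ultimately selects.

```python
# A sketch of the multiple-regression idea, assuming numpy is available.
# Each row is one completed activity; we regress its actual/planned ratio
# on several candidate predictors at once. The predictor columns and the
# sample data are hypothetical placeholders.

import numpy as np

# columns: intercept, worked by SUB-A?, concurrent activities, winter work?
X = np.array([
    [1, 1,  4, 0],
    [1, 1,  9, 1],
    [1, 0,  3, 0],
    [1, 0, 10, 1],
    [1, 1,  6, 0],
], dtype=float)
y = np.array([1.3, 1.6, 0.9, 1.2, 1.4])   # actual/planned duration ratios

coefficients, *_ = np.linalg.lstsq(X, y, rcond=None)
print(dict(zip(["intercept", "SUB-A", "concurrency", "winter"],
               coefficients.round(3))))
# The coefficients apportion the slippage among the factors, rather than
# attributing all of it to the one activity code.
```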


And how about the degree of out-of-sequence variation from the CPM logic? Even the GPS in my car understands when I have permanently deviated from plan and creates a new plan based on actual progress to date.
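Detecting that deviation need not be exotic. Here is a sketch, with hypothetical status data, that flags any activity started before all of its predecessors finished:

```python
# A sketch of flagging out-of-sequence progress: any activity that started
# before all of its predecessors finished, the scheduling analogue of the
# GPS noticing the planned route has been abandoned. Data are hypothetical.

status = {
    # name: (predecessors, actual_start_day, actual_finish_day); None = not yet
    "excavate":     ([],               0,    5),
    "install_pipe": (["excavate"],     3,    None),
    "backfill":     (["install_pipe"], None, None),
}

def out_of_sequence(status):
    flags = []
    for name, (preds, start, _finish) in status.items():
        if start is None:
            continue
        for p in preds:
            p_finish = status[p][2]
            if p_finish is None or start < p_finish:
                flags.append((name, p))
    return flags

print(out_of_sequence(status))
# [('install_pipe', 'excavate')] -- the pipe began on day 3, before
# excavation finished on day 5; the logic tie no longer reflects reality
```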


Such automated re-planning is the final goal of this effort. CPM automated the effort of redrafting the Gantt chart; a proper trending algorithm should automate the re-baselining of the CPM logic. Your thoughts on what is needed and how to do this would be greatly appreciated.
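As a closing illustration of that goal, here is a sketch that scales each remaining activity's estimate by its code's observed trend factor and recomputes the schedule. The simple scaling rule and all values are assumptions; choosing and defending such rules is exactly the work of RP-50R-06.

```python
# A closing sketch of the goal: scale each remaining activity's estimate
# by its activity code's observed trend factor, then recompute the forward
# pass. The scaling rule, codes, and values are assumptions for illustration.

trend = {"SUB-A": 1.4, "SUB-B": 0.95}    # e.g., from the grouping sketch above

remaining = {
    # name: (activity_code, estimated_duration, [predecessors]); topological order
    "fit_out":   ("SUB-A", 10, []),
    "paint":     ("SUB-B",  6, ["fit_out"]),
    "punchlist": ("SUB-A",  4, ["paint"]),
}

def rebaseline(acts, trend):
    finish = {}
    for name, (code, duration, preds) in acts.items():
        start = max((finish[p] for p in preds), default=0)
        finish[name] = round(start + duration * trend.get(code, 1.0), 2)
    return finish

print(rebaseline(remaining, trend))
# {'fit_out': 14.0, 'paint': 19.7, 'punchlist': 25.3} -- the 20-week plan
# trends to just over 25 weeks if nothing changes
```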


Email me at fplotnick@fplotnick.com. And consider attending our session at the AACEi 54th Annual Meeting this June.

 
