Contacts

Colin Cropley, Managing Director: +61 412 031 161
Matthew Dodds, Principal Consultant: +61 433 215 324
Robert Flury, Principal Programmer: +61 403 134 479
Peter Downie, External Director: +61 412 994 568
Newsletter November 2013

Welcome to our second newsletter, the theme of which is Project Drivers. The first article is pitched at a management level and focuses on the value of understanding drivers. The second examines Sensitivities as proxies for Drivers, how they are measured and their limitations. The third explains the pros and cons of Quantitative Exclusion Analysis, the definitive way to measure project drivers, which RIMPL has refined and automated.

We also report some news about RIMPL and round out our first year with seasonal good wishes for you and for a 2014 with more opportunities to be exploited and fewer threats challenging your organisation’s goals!

In this issue:

• Probabilistic Project Drivers: Understanding what drives your project – why it matters
• Probabilistic Project Drivers: Sensitivity Analysis
• Probabilistic Project Drivers: Quantitative Exclusion Analysis

Probabilistic Project Drivers: Understanding what drives your project – why it matters

Many have been there: finally at the end of the project, they look back and say “What went wrong?”

Enlightened project owners and contractors add “and what did we learn from it?” Continuous improvement is one of the goals of organisations whose project management capabilities are mature.

Unfortunately, most major Australian projects of the last decade or so, especially during the resources boom now concluding, have been delivered late and over budget. Let’s face it: major projects are generally complex, and the bigger or more complex the project, the more difficult it is to accurately predict and manage the final outcomes.

Fortunately, in recent years, there have been big advancements in tools and techniques designed to help us make realistic predictions of project outcomes - the quantitative forecasting techniques of cost and schedule risk analyses. But did you know that these tools can do more than just measure the probabilistic outcomes of a project? If appropriately set up and analysed, Monte Carlo models can also help us understand the drivers of a project’s completion and economics, in surprising detail.

Analysing what drives probabilistic model outcomes can reveal underlying logic in a schedule and whether it should be changed (because stakeholders know it shouldn’t be driving). It can also help guide critical decisions regarding risk exposure. For example, being able to quantify the probabilistic impact of weather on a project’s schedule could enable us to decide whether a contract’s conditions represent a palatable level of risk exposure. Similarly, understanding the potential impact of labour rates on a project’s economics might help inform strategy regarding enterprise bargaining arrangements.

Further, understanding what drives the requirements for contingency and the size of those drivers can help allocate and manage that contingency better during project execution. Understanding the composition of contingency can enable allocation of contingency against specific buckets such as quantity uncertainty, productivity, weather uncertainty & risk events. Tracking the drawdown of contingency against calculated allocations enables early identification of variances and assessment of the impact that this might have on the project as it progresses to completion. It may also help identify exposure to the risk of (or need for) variation claims.

In the technical papers that follow, we discuss the traditional ways of measuring model uncertainty drivers via sensitivity analyses, which we then compare to RIMPL’s technique of quantitative exclusion analysis using our Integrated Risk Drivers (IRD) toolset. The benefits and limitations of these approaches will be discussed, as well as the relative advantages of using both approaches in combination.

Probabilistic Project Drivers: Sensitivity Analysis

What are sensitivities?

Sensitivities are measures of correlation or dependence between two random variables or sets of data. Correlation refers to the tendency for the behaviour of one variable (the Independent Variable (IV)) to act as a predictor of the behaviour of another (the Dependent Variable (DV)) in terms of some quantifiable measure.

Sensitivities range in value from +1 through to -1, with positive values representing a positive relationship between the IV and the DV, and negative values representing a negative relationship between the IV and the DV. Sensitivities of +1 and -1 are equally strong, but values that approach 0 indicate progressively weaker associations between the IV and the DV, with sensitivities of 0 indicating no statistically measurable relationship between the behaviours of the two variables.

What do we use them for?

Sensitivities are typically used in quantitative risk management to gauge the perceived effect of one element in a model on another element, or on a key measure within the project outcomes as a whole. Measuring the degree of relatedness between these two variables helps us to establish drivers of key measures within the model that may be positively or adversely affecting model outcomes. It also gives us an indication of the relative rank of each contributor to a measured outcome within a model, as shown in the sensitivity-ranked tornado diagram below:

[Figure: sensitivity-ranked tornado diagram of contributors to a measured outcome]
Typically, in Integrated Cost and Schedule Risk Analysis (IRA), we look to measure the degree of relatedness between the changes in one cost element and the cost of the project as a whole, or alternatively, between the changes in a task’s duration and the finish date of a project. However, the selection of variables is not restricted to measuring dollars with dollars, or days with days. In an IRA, it is just as valid to measure the degree of relatedness between changes in a task’s duration and the cost of the project, as prolongation can often be a major driver of project cost. The key assertion here is that the stronger the correlation between changes in the independent variable and the dependent variable, the better the variable can be seen as a predictor of the measured outcome.

Techniques for measuring correlation

Pearson’s product moment coefficient

The most familiar measure of correlation between two variables is Pearson’s Product Moment Coefficient (sometimes referred to as “Pearson’s r”). It is obtained by dividing the covariance of the two variables by the product of their standard deviations. The population correlation coefficient ρX,Y between two random variables X and Y with expected values μX and μY and standard deviations σX and σY is defined as:

ρX,Y = cov(X, Y) / (σX σY) = E[(X − μX)(Y − μY)] / (σX σY)

A key trait of Pearson’s r is that it is used to measure the degree of linear relatedness in two sample distributions.
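To make the definition concrete, Pearson’s r can be computed directly from the formula above. This is a minimal Python sketch for illustration, not the implementation used in any risk analysis tool:

```python
import math

def pearson_r(xs, ys):
    """Pearson's product-moment coefficient: cov(X, Y) / (sd(X) * sd(Y))."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs) / n)
    sy = math.sqrt(sum((y - my) ** 2 for y in ys) / n)
    return cov / (sx * sy)

xs = [1, 2, 3, 4, 5]
print(pearson_r(xs, [2 * x + 1 for x in xs]))  # perfectly linear: r is ~1
print(pearson_r(xs, [x * x for x in xs]))      # monotonic but non-linear: r < 1
```

The second call illustrates the linearity limitation discussed next: the relationship is perfectly predictable, yet r falls below 1 because it is not a straight line.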

[Figure: scatter plots of data sets with various correlation coefficients]

However, as the bottom row of the accompanying diagram demonstrates, it is possible to have related variables that do not conform to a linear function. Also demonstrated by this diagram (particularly in the second row) is that correlation acts as a measure only of the strength of the relationship between the two variables, not as a measure of the rate of change between them.

Spearman’s rank correlation coefficient

Rank correlation coefficients, such as Spearman’s rank correlation coefficient, measure the extent to which, as one variable increases, the other variable tends to increase, without requiring that increase to be represented by a linear relationship. Conversely, if, as one variable increases, the other decreases, the rank correlation coefficient will be negative.

The Spearman correlation coefficient is defined as the Pearson correlation coefficient between the ranked variables (where a value’s rank is its position in a list sorted from smallest to largest). For a sample of size n, the n raw scores Xi, Yi are converted to ranks xi, yi, and ρ is computed from these. Tied values are assigned a rank equal to the average of their positions in the ascending order of the values. When all ranks are distinct integers, this reduces to the familiar formula:

ρ = 1 − (6 Σ di²) / (n(n² − 1)), where di = xi − yi

Spearman’s Rank Correlation Coefficient has strengths over the more common Pearson’s r measure of relatedness, as summarised by the examples below. The third example – strong outliers – may sometimes indicate a more complex relationship, such as where low-probability, high-impact risks are found in “fat tails” of probability quite divergent from normal distributions:

[Figures: three scatter plots comparing Spearman’s and Pearson’s coefficients on the same data]

When two variables are roughly elliptically related and there are no clear outliers in the distribution, Pearson’s and Spearman’s produce similar results.

However, unlike Pearson’s, Spearman’s accurately measures monotonic relationships even when they are non-linear.

Spearman’s is less sensitive than Pearson’s to strong outliers in the tails of both samples, because Spearman’s limits the outlier to the value of its rank.
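The outlier behaviour is easy to demonstrate numerically. In the sketch below (plain Python, with a simple average-rank implementation; the data are contrived for illustration), a single wild outlier drags Pearson’s r negative, while Spearman’s ρ, working on ranks, still reports the underlying positive association:

```python
import math

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / math.sqrt(sum((x - mx) ** 2 for x in xs)
                           * sum((y - my) ** 2 for y in ys))

def ranks(values):
    """1-based ranks; tied values receive the average of their positions."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    out = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        for k in range(i, j + 1):
            out[order[k]] = (i + j) / 2 + 1
        i = j + 1
    return out

def spearman_rho(xs, ys):
    return pearson_r(ranks(xs), ranks(ys))

xs = list(range(1, 11))
ys = [1, 2, 3, 4, 5, 6, 7, 8, 9, -50]  # rising trend broken by one outlier

print(pearson_r(xs, ys))    # negative: the raw outlier dominates
print(spearman_rho(xs, ys)) # positive: the outlier is capped at rank 1
```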

Multivariate Analysis of Variance (MANOVA)

The limitation of both Pearson’s and Spearman’s is that they consider the effect of only one independent variable on the dependent variable at a time. Where we wish to measure the effects of more than one independent variable on the dependent variable, we would therefore have to perform multiple sets of correlation calculations. A problem arises, however, in that these sets of calculations cannot be meaningfully combined to produce a consolidated ranked set of correlated variables. Integrated cost and schedule risk analysis is a perfect example of such a situation: we need to understand the relative contributions of both task duration and task cost to the measured cost of the project, and to compare these two IVs in a single ranked list of correlations. To achieve this, RIMPL extends the sensitivities beyond those natively available in Oracle’s Primavera Risk Analysis (PRA) to include the correlation measure of MANOVA via its stand-alone tool Integrated Risk Drivers (IRD).

MANOVA overcomes the limitations of both Pearson’s and Spearman’s by measuring the effect of two or more independent variables on the dependent variable. It utilises the variance-covariance between variables in testing the statistical significance of the mean differences. In this way, it is able to measure the statistical significance of the changes in the independent variable(s) on the dependent variable in a single measure that can be used to rank drivers of the DV even if they are otherwise measured using disparate base metrics.
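RIMPL’s MANOVA implementation is not reproduced here. Purely to illustrate the underlying idea of ranking IVs measured in disparate base metrics (days and dollars) on one comparable scale, the toy sketch below uses standardised two-predictor regression coefficients as a simple multivariate stand-in; all names and numbers are hypothetical:

```python
import math, random

def corr(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / math.sqrt(sum((x - mx) ** 2 for x in xs)
                           * sum((y - my) ** 2 for y in ys))

rng = random.Random(11)
dur, mat, total = [], [], []
for _ in range(4000):
    d = rng.gauss(100, 10)          # task duration, in days
    m = rng.gauss(500_000, 20_000)  # material cost, in dollars
    dur.append(d); mat.append(m)
    total.append(10_000 * d + m)    # prolongation costs $10,000/day

# Standardised coefficients put both IVs on one dimensionless scale,
# so days and dollars can sit in a single ranked list (textbook
# two-predictor formula, not the IRD algorithm):
r_td, r_tm, r_dm = corr(total, dur), corr(total, mat), corr(dur, mat)
beta_dur = (r_td - r_tm * r_dm) / (1 - r_dm ** 2)
beta_mat = (r_tm - r_td * r_dm) / (1 - r_dm ** 2)
ranked = sorted([("duration", beta_dur), ("material", beta_mat)],
                key=lambda t: -abs(t[1]))
print(ranked)  # duration ranks first: its dollar-scale contribution is larger
```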

Correlation vs. causation: The problem with sensitivities

When dealing with issues of correlation, it is all too easy to infer causation where none exists. A clear example is the use of Duration Sensitivity as an indicator of the activities that drive the project duration. A high correlation (duration sensitivity) between an activity’s duration and the project’s duration during a Monte Carlo simulation suggests that the activity may be driving the project, without proving it. The activity may never be on the project critical path during simulation iterations, and the high duration sensitivity may be a statistical coincidence. That is why duration sensitivity is multiplied by Criticality (the fraction of iterations in which the activity lies on the critical path) to produce Duration Cruciality, a much more reliable indicator of influence over project duration.
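A small simulation makes the point. In this hypothetical two-path network (illustrative numbers only), Task A is strongly correlated with the real driver, Task B, through a shared uncertainty, yet A is never on the critical path; its high duration sensitivity is a coincidence, and cruciality correctly de-ranks it:

```python
import math, random

def corr(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / math.sqrt(sum((x - mx) ** 2 for x in xs)
                           * sum((y - my) ** 2 for y in ys))

rng = random.Random(5)
a_dur, fin, a_on_path = [], [], 0
for _ in range(10_000):
    z = rng.gauss(0, 1)               # shared uncertainty driver
    a = 10 + z                        # Task A (path 1): short, correlated with B
    b = 50 + 5 * z + rng.gauss(0, 2)  # Task B (path 2): actually drives the finish
    a_dur.append(a)
    fin.append(max(a, b))             # project finish = longer of the two paths
    a_on_path += (a >= b)             # was A critical this iteration?

sensitivity_a = corr(a_dur, fin)      # high, via the shared driver z
criticality_a = a_on_path / 10_000    # ~0: A never determines the finish
cruciality_a = sensitivity_a * criticality_a  # ~0: A correctly de-ranked
```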

The risk of falsely assuming causation is especially acute when dealing with large data sets from quantitative models, such as a schedule for use in IRA, where a set of complex interactions between the elements in the model commonly exists.

The issue here is whether the changes in the IV are actually causing the perceived changes in the DV, or if the two are commonly related through a third variable. In statistics, this third variable is known as an ‘undeclared independent variable’ and it has the ability to significantly alter the calculated values for any type of sensitivities.

Examples of undeclared independent variables in the IRA methodology are the necessary presence of duration correlation between tasks and of cost correlation between resource assignments. In each case, these correlations form an integral part of the model in that they define the extent of relatedness between elements (task durations or resource values) that the Monte Carlo Method otherwise inherently assumes to be completely independent. Adding correlation counteracts the Central Limit Theorem and prevents unrealistically narrow or ‘peaky’ outcome distributions. However, when looking at sensitivity calculations, the correlation between one task’s duration and another task’s duration represents an undeclared and uncontrolled variable, which can invalidate the sensitivity result.

As mentioned earlier, sensitivity calculations act as a measure of only the strength of the relationship between two variables, not as a measure of the rate of change between them. Thus, if a very small cost distribution and a very large cost distribution are 100% correlated via the cost correlation model, this acts as an undeclared independent variable, and the sensitivity for the smaller cost distribution will be calculated as equal to that of the larger distribution when measured for influence on total cost variability.
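This pitfall is easy to reproduce. In the sketch below (illustrative numbers only), a cost item with a $100 spread and one with a $100,000 spread are 100% correlated through a shared driver; because Pearson’s r is unaffected by positive scaling, both show identical sensitivity to total cost despite contributing vastly different amounts of uncertainty:

```python
import math, random

def corr(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / math.sqrt(sum((x - mx) ** 2 for x in xs)
                           * sum((y - my) ** 2 for y in ys))

rng = random.Random(13)
small, large, total = [], [], []
for _ in range(5000):
    z = rng.gauss(0, 1)                  # shared driver: 100% correlation
    s = 1_000 + 100 * z                  # small item, spread ~$100
    l = 1_000_000 + 100_000 * z          # large item, spread ~$100,000
    rest = rng.gauss(2_000_000, 50_000)  # independent remainder of the project
    small.append(s); large.append(l); total.append(s + l + rest)

# Both items are positive affine functions of the same z, so their
# correlations with total cost are identical -- sensitivity alone
# cannot separate the $100 driver from the $100,000 driver.
print(corr(small, total), corr(large, total))
```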

Thus, we see that while sensitivity calculations are useful measures for gaining some insight into the drivers of measured outcomes in the model, they are inherently flawed in that we can never fully control for undeclared independent variables, especially in large correlated models.

Probabilistic Project Drivers: Quantitative Exclusion Analysis

What is quantitative exclusion analysis?

Given the limitations of Sensitivities explained in the above article, the logical question is then “How can we reliably measure and rank the effects of different uncertainties as drivers of project outcomes?”

In answer to this question, RIMPL developed a methodology we term quantitative exclusion analysis. The key idea is that by removing a source of uncertainty from the model, re-simulating, and then measuring the outcome by difference from the original, we can quantify the relative contribution of that uncertainty to the model outcomes.

This approach can therefore be applied to an entire class of uncertainty (e.g. “How much uncertainty do all risk events contribute to the outcomes?”), or to a specific uncertainty (e.g. “How much uncertainty does risk ‘A’ contribute to the outcomes?”). Furthermore, by grouping uncertainties together in novel ways for exclusion analysis, we can examine the relative uncertainty contribution of, for example, one particular area of a facility, one contractor’s scope of work, or even a single discipline of work within one sub-facility.

[Figure: breakdown of uncertainty drivers]
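The exclude–re-simulate–difference loop can be sketched end to end in a few lines. The toy model below is nothing like a real IRA model (all distributions and dollar figures are hypothetical); it simply measures a weather risk event’s contribution to P80 contingency by nullifying it and re-simulating:

```python
import random

def simulate(n, include_weather=True, seed=1):
    """Toy cost model: base cost, quantity uncertainty, one weather risk event."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n):
        cost = 10_000_000 * rng.uniform(0.95, 1.15)            # quantity/rate uncertainty
        impact = rng.triangular(500_000, 2_000_000, 1_000_000)  # weather impact if it occurs
        occurs = rng.random() < 0.30                            # 30%-likely risk event
        if include_weather and occurs:
            cost += impact
        totals.append(cost)
    return totals

def p80(xs):
    return sorted(xs)[int(0.8 * len(xs))]

base_p80 = p80(simulate(20_000))                          # all uncertainties included
excl_p80 = p80(simulate(20_000, include_weather=False))   # weather nullified
weather_contribution = base_p80 - excl_p80                # measured by difference
print(f"weather contribution to P80 contingency: ${weather_contribution:,.0f}")
```

Note that the excluded scenario still draws the weather samples, so both runs consume an identical random sequence; the article below on limitations explains why keeping the randomness aligned matters.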

What are the benefits of exclusion analysis?

Measuring outcomes by difference has significant benefits over traditional sensitivity analysis:

• As influence is measured by difference, the approach is purely quantitative in that it allows you to measure the exact influence of the variable, rather than just an indicator of influence ranking.

• Because this methodology doesn’t need to rely on sensitivity calculations, it’s unaffected by undeclared independent variables. This means that if two highly correlated sources of uncertainty worth $1,000 and $1,000,000 were excluded from an analysis, the exclusion technique would be able to accurately identify the relative contribution of each rather than just the combined contribution of both.

• Because we’re able to specify exactly which uncertainties to exclude, we’re able to cut and dissect the model into informative groups to measure the impact of entire classes of uncertainty, groups of uncertainties, or individual uncertainties. This is especially useful for helping to calculate the allocation of contingency across a project.

• Much like the MANOVA sensitivity analysis, the exclusion analysis technique allows you to rank both schedule and cost uncertainty contributors in a single ranked measure of cost uncertainty.

• By first examining the sensitivity ranked drivers within PRA (the quickest source of ranking information), we are able to estimate which uncertainties are more likely to contribute significantly to analysis results before committing them to exclusion analysis techniques. The order will almost certainly change, but the sensitivities will be a useful guide to identifying which activities and resources will be in the top group.

What are the limitations of exclusion analysis?

The major problem with this type of analysis is that it is, by its very nature, time consuming. For every exclusion scenario that is run, the model must be modified to nullify the uncertainty, then re-analysed, and the effect of the exclusion measured against each reporting node / target. Further, if something happens to force a change in the base model, this invalidates any exclusion analyses that may have already been performed, and the process must be repeated.

Secondly, changing the inputs to an analysis also effectively changes the randomness ‘seed’, so unless enough iterations have been run to reach ‘convergence’, the measured difference might in fact be overwhelmed by the inherent randomness of the modelling. The seed in Monte Carlo simulations initialises the randomness generator from which each uncertainty receives its distribution-sampling instructions. Although samples could be generated using true ‘randomness’, a seed is typically used because it allows repeatable results when no changes are otherwise made to the model.
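One generic remedy for this seed problem, shown here purely as an illustration and not as a description of how IRD works internally, is to pre-draw the random samples once and reuse them in both scenarios (“common random numbers”), so the measured difference reflects only the excluded uncertainty:

```python
import random

random.seed(42)
# Draw every sample up front; both scenarios then see identical
# randomness, so the random sequence cannot change between runs.
draws = [(random.uniform(0.9, 1.2), random.gauss(0, 100_000))
         for _ in range(10_000)]

def mean_total(include_escalation):
    totals = []
    for esc, noise in draws:
        cost = 5_000_000 + noise   # base cost with uncertainty
        if include_escalation:
            cost *= esc            # escalation factor, mean 1.05
        totals.append(cost)
    return sum(totals) / len(totals)

# With common random numbers the difference isolates the excluded
# escalation factor (~5% of mean cost) rather than sampling noise.
difference = mean_total(True) - mean_total(False)
```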

RIMPL’s Methodology: Integrated Risk Drivers (IRD)

To capture all the benefits of the quantitative exclusion analysis technique whilst minimising its limitations, RIMPL developed a software tool called Integrated Risk Drivers (IRD). IRD is essentially a batching application that automates and manages scenario sequencing with Primavera Risk Analysis, automatically re-simulating the exclusion scenarios specified by the user. Such repeated simulations, each of around 1,000 iterations, could take several or even many hours for a large IRA model (e.g. several thousand activities). By automating the process, the user can queue the simulations and reporting to proceed unattended. However, due to the time-consuming nature of this analysis, it is usually left until iterative analyses and changes have been completed and results have been agreed to be sensible and credible.

IRD covers all aspects of the exclusion analysis methodology including creation of a baseline measurement; exclusion of classes, groups or individual uncertainties; re-simulation; and measurement of the revised outcomes by difference across any specified cross-section of the model. As well as the significant time saving that this toolset provides, it is also able to nullify the effect of seed changes within a model to make the measured changes very precise even when the effects are small.

So if you’re planning or managing a project, why not contact us to see how we can help you identify and manage uncertainties and their drivers more effectively? If you’d like to find out more about RIMPL’s risk analysis services or toolsets, please contact us using the details in this newsletter.


Since our July Newsletter, RIMPL has performed Integrated Cost & Schedule Risk Analyses on projects collectively worth several billion dollars.

RIMPL’s MD presented at PMOz, a national project management conference, in Melbourne in August, and has been accepted to present papers at conferences in the USA and Thailand in 2014.


Risk Integration Management Pty Ltd (RIMPL) & What we do


RIMPL is an innovative risk management and project controls services business. We provide risk management and analysis consulting services as well as skilled project controls and strategic project advisory services and personnel. RIMPL continues services formerly provided by Crescent PSS Pty Ltd (1996 - 2008) and Hyder Consulting Pty Ltd's Project Management Group (2008 - 2012). Through seven years of software methodology development, the risk management team at RIMPL offers the following services:

• The most sophisticated and realistic Schedule Risk Analysis (SRA) modelling available in Australia

• The only true Integrated Cost & Schedule Risk Analysis (IRA) modelling developed in Australia, to assess project contingencies, profitability and viability

• Licensing of the methodology and software suite to approved clients

• Training in SRA using Oracle's Primavera Risk Analysis (PRA) and optionally also using RIMPL's risk analysis software to enhance the modelling realism of PRA

• Sale of and training in selected RIMPL qualitative and quantitative risk analysis applications

Independent of our risk management services, we also offer development of database applications to client requirements.