- By Craig Clee
- Posted 19/09/2017 11:52:00
There is a section in my colleague Steve Hemsley’s recent article ‘Predicting tomorrow, today’ that discusses where to begin when moving towards a predictive simulation approach. The considerations described there are really important, and investing in a capability is key. The choice of software matters, of course, but it should always be a means of providing an answer rather than the focus itself. You can use a great tool badly and a bad tool well, so it is essential to follow good method and process to maximise the return on any investment in capability.
Understand the steps between identifying a problem and providing a recommendation
At Lanner we’ve devised a process to help formalise this, which we’ve called The Business Decision Analysis Process. Let’s examine each of the five steps in turn. Any step can be conducted by an individual, but a group of people with differing yet complementary skillsets works best.
Identify – this is where the key business problems are acknowledged: Business Management identify a problem from which questions are derived. All subsequent steps are dedicated to answering these questions, so it is important that they are well understood and properly bounded. The group must know what it wants to achieve and the value of the answer.
Take the example of an LNG producer wishing to expand production. Their problem could be:
“We need to know how far we can push production before we invest in more storage.”
A problem leads to a question, or a number of questions, which are suitably bounded. For example:
- Utilising small liquefaction trains, how many can we invest in before seeing issues with current storage capacities? Due to seasonality, a minimum of one year’s production should be tested. Issues will be highlighted by excessive tank tops and missed production. These are our key performance indicators (KPIs).
With the key questions set, attention can turn to deciding how to tackle them.
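To make this concrete, here is a minimal sketch in Python of how a bounded question and its KPIs might be written down so that every later stage refers to the same definitions. All names and figures here are illustrative assumptions, not part of any Lanner tool:

```python
from dataclasses import dataclass, field

@dataclass
class BoundedQuestion:
    """A business question with explicit bounds and KPIs (illustrative only)."""
    text: str
    horizon_days: int          # seasonality means at least one year here
    kpis: list = field(default_factory=list)

question = BoundedQuestion(
    text=("Utilising small liquefaction trains, how many can we invest in "
          "before seeing issues with current storage capacities?"),
    horizon_days=365,          # minimum bound taken from the question above
    kpis=["tank_top_events", "missed_production_tonnes"],
)
```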
Design – this is a collaborative effort between the subject matter experts (SMEs) and Analysts to ensure only sensible options are considered, narrowing the scope of the study. The output is a list of scenarios on which analysis can be conducted to answer the questions.
A number of scenarios can be created around each question. To start, however, it is always useful to have a base case scenario against which all others can be compared.
Continuing the example above, scenarios for the question ‘Utilising small liquefaction trains, how many can we invest in before seeing issues with current storage capacities?’ might be (see the sketch after the list):
- Scenario 1: Base case – a representation of current operation
- Scenario 2+: On the base case scenario, add 1 to x small liquefaction trains to find the operational tipping point
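That scenario list can be expressed as data so the Execute stage can iterate over it. This is a hedged sketch with hypothetical names and an assumed upper bound standing in for x; it is not the input format of any particular tool:

```python
# Base case plus incremental train additions, as listed above.
MAX_EXTRA_TRAINS = 5  # an assumed upper bound standing in for 'x'

scenarios = [{"name": "Scenario 1: Base case", "extra_trains": 0}] + [
    {"name": f"Scenario {n + 1}: Base + {n} small train(s)", "extra_trains": n}
    for n in range(1, MAX_EXTRA_TRAINS + 1)
]
```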
When the scenarios to be explored have been created, suitable tools can be identified to run these experiments.
Execute – the Analysts will be best placed to decide the right tools for the job. The scenarios may be run with one tool or several, and those tools may or may not already exist. Generally, however, as is the case with Lanner’s LNG Logistics Simulator, the key variables to test will already have been factored into the predictive simulation model; if not, existing tools can usually be customised quickly and easily, avoiding the need to start from scratch, which can be onerous.
With the tools identified, the scenarios can be run and analysis performed.
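To show the shape of an execution sweep, here is a deliberately simplified toy model of the storage question. Every number and name is an assumption for illustration; a real study would run these scenarios through a full simulator such as the one mentioned above:

```python
import random

TANK_CAPACITY = 100_000  # tonnes of storage; illustrative figure only
BASE_TRAINS = 4          # assumed current fleet of liquefaction trains
TRAIN_RATE = 150         # tonnes/day per train; assumed

def run_scenario(extra_trains, days=365, seed=0):
    """Toy daily mass balance: production fills the tank, shipping empties it."""
    rng = random.Random(seed)
    level, tank_tops, missed = 0.0, 0, 0.0
    for _ in range(days):
        produced = (BASE_TRAINS + extra_trains) * TRAIN_RATE
        shipped = rng.uniform(0, 2 * BASE_TRAINS * TRAIN_RATE)  # crude demand proxy
        level = max(0.0, level + produced - shipped)
        if level > TANK_CAPACITY:       # a tank top: nowhere for production to go
            missed += level - TANK_CAPACITY
            level = TANK_CAPACITY
            tank_tops += 1
    return {"tank_top_events": tank_tops, "missed_production_tonnes": missed}

for s in scenarios:  # the list built in the Design sketch above
    print(s["name"], run_scenario(s["extra_trains"]))
```

Even in this toy form, the sweep exposes the tipping point: as extra trains are added, production outpaces average shipping and the tank-top KPIs climb.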
Control – this is a collaboration between Analysts and SMEs to validate findings and confirm outcomes. Analysis often throws up something interesting and perhaps unexpected. If further investigation is justified, you can move back into the Design stage to refine options for any extra cases.
Continuing the example above, one outcome of the analysis could be that the extra traffic generated by increased production puts too heavy a demand on tugs, yet tugs were not among the items factored into the test plan. A further study is therefore required to understand their breaking point.
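A simple safeguard against this kind of blind spot is to record secondary metrics alongside the agreed KPIs and flag any that breach a threshold, prompting the loop back into Design. Here is a sketch with hypothetical metric names and threshold values:

```python
# Thresholds for metrics recorded 'for free' during the runs; values assumed.
SECONDARY_THRESHOLDS = {"tug_utilisation": 0.85}  # flag tugs busier than 85%

def flag_unexpected(metrics):
    """Return any secondary metrics that breached their thresholds,
    signalling that a follow-up study should be designed."""
    return {name: value for name, value in metrics.items()
            if value > SECONDARY_THRESHOLDS.get(name, float("inf"))}

print(flag_unexpected({"tug_utilisation": 0.92, "berth_occupancy": 0.60}))
# -> {'tug_utilisation': 0.92}
```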
With the analysis done, it is time to present the findings.
Present – the final step is to feed back the outcomes and make recommendations. Although this is an obvious and crucial step, it is not always performed well, because it is often left to the Analyst alone. The issue is that presentation skills are not the key reason Analysts are hired, so feedback is often delivered through complicated tables and charts, and the key message can be lost or misunderstood.
We therefore suggest involving a mixture of Analysts, Subject Matter Experts and Business Management, with the SMEs acting as a bridge between the Analysts and Business Management to convert the raw outcomes into clear and concise recommendations.
Why is this process cyclical?
The process is cyclical so that one study can follow on from another, creating more opportunities for problems to be solved.
When establishing a capability, it is important that the processes are understood by the business; this maximises the chances of success, and maintaining a cycle aids that understanding. Keeping a regular beat to these cycles is also important; this could be weekly or monthly, depending largely on the organisation.
For those interested in exploring some of these items further, here is a 30-minute webinar I recorded on best practice for simulation experimentation: