If you have made changes to your model, you probably want to ensure that nothing has been broken and that the model is still working as intended.
An operating model should be designed to be as helpful as possible in this regard. Still, sometimes the financial modeller who built the model hasn't considered that the people running it will have little to do with modelling. Just as a modeller would need a lot of hand-holding to sign off on somebody's accounts, a non-modeller needs help to navigate a financial model.
That’s why a clear and user-friendly layout is crucial, and audit checks alerting the user to potential structural issues are very useful.
Below is a screenshot of a checks sheet in an Operis model that we purposely broke. Our tests are a bit unusual in that they report what a failing check means in plain English, making it easier for anyone working on the model to identify the issue. We categorise our checks into audit tests and warnings. An audit test that fails means something is wrong with the model; for example, the balance sheet does not balance. A warning that gets flagged implies that something is wrong with the project; for example, it runs out of cash.
A check in the model should be there for a reason, and if it’s failing, it might be flagging something that makes the results coming out of the model meaningless. If you make a change and something fails, then you need to work out what it is telling you, or the check might as well not be there.
How do you decide if you need to take action?
You can ask yourself several questions to help work out if you need to make further changes to the model before you can trust its outputs.
Is this a modelling issue or a commercial issue?
The first question is whether the failure is a structural modelling issue or a commercial warning. For example, many models will flag a warning if the project is forecast to draw an overdraft. If this happens 20 years in the future and is quickly repaid, it may have little bearing on whether this year's cover ratios are reliable. You would still want to monitor this test going forward, as a persistent overdraft developing in the near future would be a cause for concern.
Is the check relevant?
The second question is the relevance of the failing test. For example, you may have sense checks on inputs for something that is not active (say, a check that a percentage input profile adds up to 100% on an unused capex amount). Using the OR function or something similar, you can tell the model to make the test pass while the relevant item isn't in use.
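As an illustration, a sketch of how such an override might look (the cell references here are hypothetical): suppose a drawdown profile in B2:B21 should sum to 100%, but only when the capex amount in C1 is non-zero. The check formula could be written so it passes automatically while the item is unused:

```
=OR(C1=0, ABS(SUM(B2:B21)-100%)<0.0001)
```

The first argument makes the test pass whenever the capex amount is zero; the second applies the usual sense check, with a small tolerance to absorb rounding, whenever the item is in use.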
Is the discrepancy material?
The last question is whether whatever is causing the test to flag is material. For example, we would generally be quite concerned if the balance sheet doesn't balance, but if it's only out by £5, it may not be so much of a worry. You might consider changing the tolerance on the test so it would flag again if the discrepancy became larger.
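A tolerance-based check of this kind might look something like the following (cell references and the tolerance in E1 are hypothetical), comparing total assets in D10 against total liabilities and equity in D20:

```
=ABS(D10-D20)<=$E$1
```

With, say, £10 in E1, a £5 discrepancy passes while anything larger flags; keeping the tolerance in its own input cell makes it easy to tighten or relax without editing the formula.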
By setting aside fails that are not structural, relevant, or material, you should end up with a relatively small number of fails. For those left, you need to investigate further or seek external help in understanding them.
If you are interested in adding or amending audit checks in your operating models or need help building one from scratch, our modelling team will be delighted to help you. Contact us to discuss your specific requirements.