Bidding for a PPP concession is a competitive sport. Winner takes all. Runners-up get to take their bid costs home and have nothing else to show for their efforts.
A full paper version of a PPP bid can occupy a metre of shelf space. The punchline in that document is a single number: the price at which the bidder is willing to provide the specified services. That number is the principal output of the financial model, which is submitted as part of the bid package.
The ability to reach that number quickly gives a bidder a significant advantage over his rivals. A model that calculates more or less instantly allows the bid team to consider a greater number of design solutions, and leaves additional time for value engineering, compared to one that takes many minutes to produce a result. It offers the bidder the opportunity to keep polishing his offering until nearer the deadline for submission. It leaves the analyst responsible for the analysis in the position of having repeated it hundreds of times, rather than a few dozen, and gaining a visceral feel for its dynamics and how to optimise it, which is a formidable competitive strength.
In spite of the clear benefit bestowed by a snappy decision-making capability, PPP financial models vary widely in the speed with which they can price a bid, ranging from a handful of seconds to a quarter of an hour. So much so that the procedure for financial close has changed. What sounds like a satisfying ceremony is dominated by the tedious signing of multiple copies of legal agreements, and is usually dramatic only in the moment generally taken as the process’s conclusion: the execution of a swap to lock in the project’s interest rate for the life of the financing, or a rate-fixing process for the project’s bonds.
This moment is tension-filled, both because it is the only thing standing in the way of champagne and celebration by all present of a deal consummated after years of hard work and weeks of sleep deprivation, and because it involves intense conference calls with animated traders quoting rates in urgent monotones. In moving markets, the rates they quote can be held open for only a handful of minutes. Some bidders can reprice their models in that time, but others struggle.
This has produced difficulty on numerous occasions, so much so that the practice has changed: a range of quotes is now offered, from which the optimal one is chosen, so long as the resulting price falls within a band pre-agreed as acceptable. In one recent case encountered by Operis involving a complex loan aggregation, it was over five hours before the model pricing was settled. The new process is probably good for blood pressures, even if it makes for a less colourful drama.
The model repricing and optimisation may no longer define the critical path of the swap process, but the other advantages of a crisp model remain. It’s helpful to understand what contributes to performance in these spreadsheets.
No matter where they are trained, modellers learn early in their careers to ensure that their calculations are clean and clear. Some take this objective to an extreme. They repeat, just above each calculation, the quantities that that calculation will draw on. Revenue and costs will be restated above the tax calculation; above the cash cascade; and above the key ratio calculations. The intention is to make easily visible, just a few rows away, the items on which a module is drawing, items that may themselves be calculated in some distant part of the model.
Other things being equal, it is preferable for a calculation’s inputs to be near at hand rather than far away and not immediately visible, but the repetition necessary to accomplish it is not without cost. As Operis has found when asked to rewrite models prepared in this style, they are typically three times larger than equivalents prepared without the same verbosity, and take proportionately longer to recalculate.
Avoid worksheet creep and keep computations together
A similar problem we often see comes from overuse of one of Excel’s most useful features: the ability to create multiple worksheets.
This is partly a consequence of Excel’s development history. Early versions of the program provided a single page or worksheet; the ability to handle multiple worksheets appeared only in rudimentary form in Excel 4 in 1992, to be consolidated by Excel 5 in 1993. That this feature was retrofitted rather than part of the original design still shows in performance: a model laid out on a few large worksheets will tend to calculate faster than one in which the calculations are broken into a multitude of modules spread across many worksheets.
Shortcuts lead to long delays: The OFFSET and INDIRECT functions
Excel provides many functions that help the analyst perform standard calculations. Examples are NPV and IRR, the purposes of which are self-explanatory to anyone with a little knowledge of finance.
Among the functions Excel offers are OFFSET and INDIRECT, a means of postponing from design time (when the formulas are typed in) to run time (when the model is calculated) the choice of which cells a calculation should draw on. Some model developers use OFFSET when they want the user to be able to select, through an input cell, which of several scenarios the model should run.
One way that Excel gains speed is by calculating only those cells that need it as a result of changed inputs. OFFSET and INDIRECT defeat this clever mechanism, leaving Excel obliged to calculate any cells containing either function, or any others that depend on them. Pretty much the whole of a model will depend on them if they are used to select from several scenarios the very inputs to the model.
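The mechanism can be sketched with a toy dirty-marking recalculation in Python. Everything here is invented for illustration, and Excel’s real engine is far more sophisticated, but the principle is the same: only cells downstream of a change need recomputing, unless a cell is volatile, in which case it and everything that depends on it must be recalculated every time.

```python
def cells_to_recalculate(deps, volatile, changed):
    """Return the set of cells a smart engine must recompute.

    deps maps each cell name to the set of cells it reads from.
    Volatile cells are always dirty; dirtiness propagates to dependents.
    """
    dirty = set(changed) | set(volatile)
    grew = True
    while grew:                                   # propagate until stable
        grew = False
        for cell, inputs in deps.items():
            if cell not in dirty and inputs & dirty:
                dirty.add(cell)
                grew = True
    return dirty

# A hypothetical miniature model: a scenario selector feeds the inputs,
# which feed everything downstream.
deps = {
    "scenario_pick": set(),            # OFFSET-style selector: no fixed precedents
    "inputs":        {"scenario_pick"},
    "revenue":       {"inputs"},
    "tax":           {"revenue"},
    "cascade":       {"tax", "revenue"},
}

# Touch one calculation with nothing volatile: only its dependents recalculate.
quiet = cells_to_recalculate(deps, volatile=set(), changed={"tax"})
# Mark the selector volatile, as OFFSET effectively is: the whole chain recalculates.
noisy = cells_to_recalculate(deps, volatile={"scenario_pick"}, changed={"tax"})
```

In the first call only the changed cell and the cascade that reads it are recomputed; in the second, the volatile selector drags in every cell downstream of it on every pass, however small the change.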
All uses of OFFSET, and most uses of INDIRECT, can be accomplished by other means that do not have the same performance drawbacks, and their use is forbidden at Operis, for this and other reasons. Replacing these functions can make a model run as much as ten times faster.
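By way of illustration, a scenario selector built with OFFSET can usually be rewritten with INDEX, which is not volatile. The cell references and the ScenarioNumber name below are invented for the example:

```
=OFFSET(Assumptions!B5, 0, ScenarioNumber)         volatile: recalculated on every pass
=INDEX(Assumptions!B5:F5, 1, ScenarioNumber + 1)   non-volatile equivalent
```

Because INDEX declares its whole range up front, Excel can see its precedents and recalculate it only when they change; CHOOSE offers a similar non-volatile route where scenarios sit in separate ranges.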
Poorly designed iteration
Setting the price for a PPP bid consists of trying some plausible number; examining the model outputs to see if it suits; and trying other prices, higher or lower, until one has converged on the best possible figure. Solving the model in this way can be done manually, by eye, or it can be done automatically by the model itself. Either way, the model is calculated a dozen times or so each time a price is set.
At the same time, modern accounting standards call for various quantities to be recognised in ways that also need an iterative approach to their solution. Forward-looking ratios, debt sculpting, and forward-looking reserves can demand iteration too.
What many models do is try a price, and do all the necessary iteration to perfect the accounting, ratios, sculpting, and reserves. Then they try another price, and repeat the process, even when it is apparent early in the consideration of a particular price that the final result has no chance of satisfying the various financial covenants and bidder requirements. The result is iteration within iteration, which can push the number of model runs needed to set a price into the hundreds rather than perhaps a dozen. Combine this with the need to deliver a suite of maybe ten sensitivities to lenders, and the number of times the model is run can reach the thousands. It is for this reason that a few seconds of recalculation time, which may not seem very much, can add up to a great deal.
It is possible with care to design approaches to solving models in which pricing and all other aspects are converged simultaneously, meaning that just one level of iteration is needed rather than multiple nested levels. This makes a dramatic difference to the time needed to price a model, particularly at financial close.
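The difference between the two schemes can be sketched with a deliberately trivial toy model in Python. Every figure and formula below is invented for illustration: the price is solved by bisection against a target cover ratio, while a reserve that depends circularly on cashflow is converged either in an inner loop for each trial price, or alongside the price in a single loop.

```python
def model_run(price, reserve):
    """One recalculation of a toy model (all figures invented)."""
    debt_service = 80.0
    cashflow = price - 0.05 * reserve             # holding the reserve carries a cost
    dscr = cashflow / debt_service                # debt service cover ratio
    required_reserve = 0.5 * debt_service * dscr  # reserve depends on cashflow: circular
    return dscr, required_reserve

def solve_nested(target=1.20):
    """Converge the reserve fully for every trial price: iteration within iteration."""
    evals = 0
    lo, hi = 50.0, 200.0
    for _ in range(40):                           # bisection on price
        price = (lo + hi) / 2
        reserve = 0.0
        while True:                               # inner fixed point on the reserve
            dscr, new_reserve = model_run(price, reserve)
            evals += 1
            if abs(new_reserve - reserve) < 1e-9:
                break
            reserve = new_reserve
        if dscr > target:
            hi = price
        else:
            lo = price
    return price, evals

def solve_simultaneous(target=1.20):
    """Let the price and the reserve converge together: one level of iteration."""
    evals = 0
    lo, hi = 50.0, 200.0
    reserve = 0.0                                 # carried across price trials
    for _ in range(60):
        price = (lo + hi) / 2
        dscr, reserve = model_run(price, reserve)
        evals += 1
        if dscr > target:
            hi = price
        else:
            lo = price
    return price, evals
```

In this sketch both routes converge on the same price, but the nested version recalculates the model several times more often; in a real model, whose inner loops converge far more slowly than this toy, the multiple is much larger.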