
Sensitivity Analysis

In document The Volvo Way to Market (Pages 91-94)

As seen in figure 26, global IPO deal value has increased for four consecutive quarters, leading into 2017.

This positive trend implies that the market has an appetite for new listings. Going forward, EY (2016) forecasts that this trend will continue and projects even stronger activity in 2017.

The assumptions and estimates made in the DCF valuation seem somewhat in line with the market valuation.

As Pearl and Rosenbaum (2013) point out, even though there may be a difference between the valuation implied by the DCF and the market valuation method (or any other method, for that matter), this does not necessarily mean that the analysis is flawed; the gap may rather be due to company-specific aspects.

11.2 Monte Carlo Simulation

As a way of enhancing the sensitivity analysis by moving beyond the effects of discrete risk, simulations provide a way of examining the consequences of continuous risk. The simulations will be performed using the Monte Carlo method, first introduced in finance by David B. Hertz (1964). In general terms, the Monte Carlo method is a computational algorithm that relies on repeated random sampling to obtain a numerical result or to generate draws from a probability distribution. Contrary to a deterministic model (like the DCF analysis), which derives the value from the most likely estimates, the input variables are entered into the model as statistical probability distributions. Consequently, one advantage of the method over other models is that, while it does not provide a single numerical solution to a problem, it results in a statistical probability distribution of all potential outcomes (Vose, 2000). Put differently, by allowing key value drivers in the DCF to change simultaneously, as opposed to being fixed, the simulation extends the DCF analysis to handle distribution parameters as inputs instead of solely “best-guess” estimates. The first steps in designing the simulation model are determining the probabilistic variables and defining the probability distributions for those variables.
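The contrast between the deterministic model and the simulation can be sketched with a toy one-driver valuation in Python. This is an illustrative example only, not the thesis model: the cash flow, discount rate and growth range are placeholder numbers.

```python
import random

random.seed(42)  # reproducible draws

def firm_value(growth):
    """Toy deterministic model: one cash flow, grown one year and discounted.

    Placeholder numbers for illustration; not the thesis's DCF.
    """
    base_cash_flow = 100.0
    discount_rate = 0.07
    return base_cash_flow * (1 + growth) / (1 + discount_rate)

# Deterministic model: one "most likely" input, one numerical answer
point_estimate = firm_value(0.03)

# Monte Carlo: repeated random sampling of the input produces a
# distribution of outcomes instead of a single value
simulated = [firm_value(random.uniform(0.01, 0.05)) for _ in range(10_000)]

print(point_estimate)
print(min(simulated), max(simulated))
```

The point estimate is a single number, while the simulated list spans the whole range of outcomes implied by the input distribution, which is the property the thesis exploits.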

Unlike the what-if analysis, where the number of variables that are changed has to be few and each scenario is assigned equal probability, there is no constraint on how many variables can be included in the simulation. However, as Damodaran (2007) points out, defining probability distributions for each and every input is time consuming and may not provide much value, especially if an input has only a marginal impact on value. Additionally, including more inputs might cause issues due to correlation between variables, in which case the options are either to drop the input variables that correlate or to model the correlation explicitly (Damodaran, 2007). Since this thesis is limited to three years of financial data, historical and cross-sectional data yield insufficient and unreliable distribution guidance (which also affects the estimation of correlation). Consequently, this thesis will focus on a few variables that have a significant impact on value:

revenue growth, EBITDA margin, intangible and tangible assets as a percentage of revenue (CAPEX), and WACC.
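Damodaran's second option, building the correlation explicitly, is not pursued in the thesis, but it can be sketched as follows: independent standard normal draws are turned into correlated draws via a Cholesky factor of an assumed correlation matrix. The 0,6 correlation between two hypothetical input shocks is an arbitrary illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical correlation between two input shocks (e.g. revenue growth
# and EBITDA margin); the 0.6 figure is an illustrative assumption
corr = np.array([[1.0, 0.6],
                 [0.6, 1.0]])
L = np.linalg.cholesky(corr)          # lower-triangular factor: corr = L @ L.T

z = rng.standard_normal((10_000, 2))  # independent standard normal draws
correlated = z @ L.T                  # rows now carry the target correlation

print(np.corrcoef(correlated.T)[0, 1])  # close to 0.6
```

The correlated normals can then be mapped into whatever marginal distributions the inputs require, which is why correlated inputs complicate the simulation design considerably.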

The second step is defining the probability distribution for these variables, which is a demanding and problematic process. According to Damodaran (2007), there are three ways to go about it: historical data, cross-sectional data, and choosing a statistical distribution and estimating its parameters. Given the issue of insufficient data mentioned above, this becomes a problematic task. In that case, the remaining option is picking a statistical distribution that best captures the variability in the input and estimating the parameters for that distribution. Probability distributions can take two forms, discrete or continuous, the difference being the number of possible values that the variable can take on. Within the DCF framework and the incorporation of Monte Carlo simulation, researchers and business analysts often mention the uniform or triangular probability distribution (for example, see Titman & Martin, 2011; Togo, 2004; French & Gabrielli, 2004).

Both of these distributions belong to the continuous category, where the variable can take any value within a given range (for details on probability distributions, see appendix 17).

This thesis will apply the triangular distribution due to its intuitiveness, usefulness and flexibility. Three parameters specify a triangular distribution: the minimum possible value, the maximum possible value and the most likely value. The value assigned to each respective input is based on business sense. In addition, unlike the uniform distribution, it does not assign equal likelihood to all values within the given range, nor does it impose symmetrical probabilities around the most likely value (the distribution can be skewed). For practical purposes, it is therefore assumed that the triangular distribution is a “good-enough” approximation of whatever the real distribution might be, since the most likely value would have been used even without applying Monte Carlo simulation (French & Gabrielli, 2004).
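Sampling from a triangular distribution requires only the three parameters just described. A minimal sketch using NumPy's built-in sampler, with the high-growth revenue parameters as illustrative values (min 104%, most likely 110%, max 112%, expressed as revenue factors):

```python
import numpy as np

rng = np.random.default_rng(1)

# minimum, most likely and maximum value (here as revenue factors)
low, mode, high = 1.04, 1.10, 1.12

draws = rng.triangular(low, mode, high, size=100_000)

# All draws stay inside [low, high]; the mean of a triangular
# distribution is (low + mode + high) / 3
print(draws.min() >= low, draws.max() <= high)
print(draws.mean())
```

Because the mode sits closer to the maximum than to the minimum, the sampled values bunch near the top of the range, which is exactly the downside skew discussed below.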

The probabilistic distribution assigned to each input variable is shown in table 23. As seen in the table, all input variables except revenue in the high-growth period are assumed to have a symmetrical shape (skew equal to 0,5, meaning equal probability on each side of the most likely value). All input values are allowed to vary around the most likely outcome by plus/minus 2% in absolute terms, with the exception of revenues in the high-growth period, which are skewed towards the downside by allowing values 6% lower and 2% higher than the most likely value. This is done because the high-growth estimates represent a scenario where Volvo continues to be successful. Before proceeding, the authors acknowledge that this rather simplistic approach to determining how a random variable is distributed has its limitations and drawbacks. Yet exploring and determining the “real” distribution of the variables is a cumbersome task and outside the scope of this thesis. As such, it is important to note that the Monte Carlo simulation is merely used to challenge the most likely estimates with respect to continuous risk.
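The skew figures in table 23 read as the relative position of the most likely value within the range, i.e. (mode − min)/(max − min), where 0,5 is symmetric; this interpretation is the editor's reading and can be checked against the high-growth revenue parameters:

```python
def mode_position(minimum, mode, maximum):
    """Relative location of the mode in [min, max]: 0.5 means symmetric."""
    return (mode - minimum) / (maximum - minimum)

# High-growth revenue: min 104%, most likely 110%, max 112% (table 23)
print(mode_position(1.04, 1.10, 1.12))  # ≈ 0.75, skewed towards the downside

# A symmetric input with the +/- 2% band, e.g. min 9%, mode 11%, max 13%
print(mode_position(0.09, 0.11, 0.13))  # ≈ 0.5
```

A value above 0,5 means most of the range lies below the most likely value, matching the text's description of downside skew in the high-growth period.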

Table 23. Triangular distribution assumptions

                Intangibles &            Revenue       Revenue
                Tangibles % of Revenue   High Growth   Stable Growth   EBITDA   WACC
Skew            0,500                    0,750         0,500           0,500    0,500
Min.            43%                      104%          101%            9%       5%
Most-likely     47%                      110%          103%            11%      7%
Max.            50%                      112%          105%            13%      9%

Source: Own construction

For each simulation, a random outcome is drawn from each predefined distribution to generate a unique set of cash flows, which is used to calculate the enterprise value (the DCF analysis is identical, except that each variable is randomly drawn from its respective distribution). Figure 27 gives the graphical representation of 10.000 unique iterations (number of simulations) with the corresponding descriptive statistics.
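The simulation loop just described can be sketched as follows. The triangular parameters loosely follow table 23, but the valuation mechanics (five high-growth years, EBITDA as a cash-flow proxy, a Gordon-growth terminal value, a base revenue of 100) are heavily simplified stand-ins for the thesis's DCF, not a reconstruction of it:

```python
import numpy as np

rng = np.random.default_rng(7)
N = 10_000  # number of simulations

# Triangular parameters (min, most likely, max), loosely following table 23
growth_high   = (1.04, 1.10, 1.12)   # revenue factor, high-growth years
growth_stable = (1.01, 1.03, 1.05)   # revenue factor, stable period
ebitda_margin = (0.09, 0.11, 0.13)
wacc          = (0.05, 0.07, 0.09)

def draw(params, size):
    low, mode, high = params
    return rng.triangular(low, mode, high, size)

base_revenue = 100.0   # illustrative starting revenue, not Volvo's
years_high = 5

# One draw per variable per simulation (held constant across forecast years)
g_h = draw(growth_high, N)
g_s = draw(growth_stable, N)
m   = draw(ebitda_margin, N)
r   = draw(wacc, N)

ev = np.zeros(N)
revenue = np.full(N, base_revenue)
for t in range(1, years_high + 1):
    revenue = revenue * g_h              # grow revenue
    cash_flow = revenue * m              # EBITDA as a crude cash-flow proxy
    ev += cash_flow / (1 + r) ** t       # discount to present value

# Gordon-growth terminal value, discounted back to today
terminal = revenue * g_s * m / (r - (g_s - 1.0))
ev += terminal / (1 + r) ** years_high

print(ev.mean(), np.percentile(ev, 5))
```

Each of the 10.000 iterations yields one enterprise value, and the collection of all iterations forms the output distribution summarised in figure 27.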

The output from the Monte Carlo simulation shows that the minimum and maximum values are 6.669 and 67.155 respectively, with a standard deviation of 5.271. Values are centred between 15.000 and 17.000. The 5th percentile of the output is 10.389, which, according to the simulated model, means that there is a 95% chance of realising an EV larger than this value. The intrinsic EV of Volvo lies in the 42nd percentile, meaning that there is a 58% likelihood of obtaining a higher value. From the results, it can be concluded that there is a larger “up-side” than “down-side” to the valuation, meaning it is more likely to estimate a value larger than the fair stand-alone value derived in the deterministic DCF analysis.
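The two percentile readings above can be reproduced from any vector of simulated enterprise values. The sketch below uses synthetic lognormal data centred near the thesis's output, not the actual simulation results:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in for the 10.000 simulated enterprise values
ev = rng.lognormal(mean=np.log(16_000), sigma=0.3, size=10_000)

# 5th percentile: 95% of simulated outcomes exceed this value
p5 = np.percentile(ev, 5)
print(p5)

# Percentile rank of a deterministic (point-estimate) EV within the
# simulated distribution; the 15.000 figure is illustrative
point_estimate = 15_000.0
rank = (ev < point_estimate).mean() * 100
print(rank)
```

A rank below 50, as reported for Volvo's intrinsic EV, means that more than half of the simulated outcomes lie above the deterministic estimate, which is the "up-side" conclusion drawn in the text.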

Figure 27. Monte Carlo simulation

Source: Own creation
