Copenhagen Business School Cand.merc.mat

Master’s Thesis Study ID: 85992

Portfolio Risk Management

A study of the effects of risk budgeting

Hand-In Date: November 15, 2019 Pages: 64

Characters: 96.000

Created by:

Sebastian Dahm

Supervisor:

Martin Richter


Table of Contents

1. EXECUTIVE SUMMARY
   RESUMÉ
2. INTRODUCTION
   2.1 INTRODUCTORY CONCEPTS
       Important Models
       Solvency II
   2.2 PROBLEM FORMULATION
   2.3 DELIMITATIONS
   2.4 SETUP OF PAPER
3. THEORY
   TAIL RISK MODELS
       Value at Risk
       Expected Shortfall
   DISTRIBUTIONS
       Normal Distribution
       Student's t Distribution
   STATISTICAL VOLATILITY MODELS
       ARCH
       GARCH
       EWMA
       Multigarch
   PORTFOLIO ESTIMATION
       Nominally Weighted Portfolio
       Volatility Weighted Portfolio
   TEST STATISTICS
       Akaike Information Criterion
       Ljung-Box Tests
       ARCH LM Test
       Kupiec Test
       Sharpe Ratio
4. DATA
   DESCRIPTION
   ANALYSIS
5. STATISTICAL ANALYSIS
   MODEL ESTIMATION
   MODEL SELECTION
6. RESULTS
7. CONCLUSION
8. LITERATURE
9. APPENDIX


1. Executive Summary

Resumé

The starting point of this thesis is an investment manager in the insurance and pensions industry who holds a portfolio of stocks and bonds. This industry is subject to the Solvency II rules imposed by EIOPA, which place substantial capital requirements on these companies' investments. In recent years, the approach to portfolio strategy among this type of investment manager has changed: the classic capital-weighted strategy has been challenged by a volatility-weighted method that weights assets by their risk. The thesis compares the two methods on two key points: how the two portfolios perform from a regulatory capital angle, and how they perform empirically in terms of their Sharpe Ratio.

The study begins by introducing the data, which consists of 11 daily series of stock and bond indices. These are estimated with three different statistical models: ARCH, GARCH and EWMA. All three models are tested with several distributions, and the results are checked with statistical tests to assess the strength of the final models.

The models are then used in a DCC-GARCH analysis to obtain the dynamic correlation matrices. These are used to construct the volatility-weighted portfolio, which is then compared with the capital-weighted portfolio on their Value-at-Risk measures and their empirical performance in value and Sharpe Ratio. The VaR results are further compared with the standard measure for the capital requirement in Solvency II.

The conclusion of the analyses is that the volatility-weighted model has developed best over the last 18 years. With regard to VaR, the capital-weighted model shows considerably larger fluctuations, but both models turn out to recommend considerably lower capital requirements than the Solvency II standard model. Finally, there is no statistical evidence of a difference between the two models' performance in terms of Sharpe Ratio, and the conclusion is therefore that neither model can be assumed to perform better than the other.


2. Introduction

2.1 Introductory Concepts

In the world of investment, one of the first dilemmas introduced is the choice between risk and return. There is much theory written on this subject, and a number of models that aim to describe some sort of relationship between the amount of the one you have to give up to get more of the other. The simple version is easy to comprehend: the more risk you take on, the more return you can potentially attain.

The reality, however, is much more complicated, as it is not always as easy as it seems to find a fitting measure of the risk, or volatility, as risk will hereafter also be referred to. Volatility measures vary in their shape and form, but at the bottom of it all, they all describe the same thing: how much do you stand to lose/gain with a specific certainty? And that is the question that an investment manager deals with on a daily basis.

As an investment manager, you are trusted with an amount of capital with the expectation of investing this capital so that it earns a return. It is the job of the investment manager to make sure that the collective projected volatility and return of the assets in the portfolio matches the profile of the customer or the company he or she works for. For many, the first question that comes to mind when looking at a portfolio is: how much does this portfolio earn? It is important to note here, as earlier stated, that answering this question is not always as easy as one might think. An asset might be considered to have a certain value, but without a buyer at this value there is no money to be made. This is especially true for real estate investments, where the true return is never known until the property is sold again. There are luckily more liquid assets to be found, especially on the financial equity markets, where the price for buying or selling a specific asset is almost always known, because of the stock exchanges all over the world that facilitate the trading of these assets. Keeping track of these prices over time will leave you with a very precise return on your investments, which you can compare to other potential investments. As will be discussed in the data description section of this paper, these are the types of assets that this paper will rely on, as it is imperative to have known prices over time when doing time series analysis. Knowing these prices also allows keeping track of the constant trade-off between the return of an asset and the volatility tied to it. Higher risk is assumed to, but does not always, come with higher returns, and the skilled investment manager will know how to use time series analysis to estimate which assets will deliver the highest future expected returns at each level of volatility, and use this information to create the optimal portfolio for the desired amount of risk.

When choosing how to place the capital of a portfolio with the purpose of having very low risk, the obvious way to do this is to invest heavily in an asset with very low volatility. However, whether this asset is land, gold or the stock of a top-performing company, holding all your capital in one or a few assets will always leave you with the risk of losing a lot in a certain negative situation for a specific asset. The probability of obtaining a big loss with a portfolio like this may be very small, but if it happens it will have very damaging consequences for the portfolio. The solution to avoid taking on risk of this damaging scale is diversification, a way to lower the risk of your total investment by investing in more assets. The volatility of the portfolio is reduced and the probability of big losses diminishes, since a very specific negative situation for one asset most likely will have a counter asset that benefits by having a stable or even a rising price in this situation. By having more assets in a portfolio, a big loss on one asset will have a considerably lower effect on the total portfolio than if this were one of only a few assets.

Thus one can diversify out of many possible, very damaging negative situations. Creating the optimal portfolio with large diversification is however not an easy discipline, for several reasons. The main reason is that it is impossible to look into the future. One can only ever make a qualified guess from an estimate of what the return and volatility will look like for different portfolios. This qualified guess has roots in some type of estimation analysis, and this estimation analysis can be done in a number of ways. The easiest and most straightforward way to predict volatility is to use historic analysis to predict forward, whether this is done by simply assuming the volatility going back or by a more statistically based approach where the historic data is put into a distribution. These ways of estimating volatility are great in their simplicity, but are also often limited in their precision, given the fact that many assets show seasonal sensitivity or heteroskedasticity, which will be described in more detail later. To adjust for these types of traits in the data, more complicated volatility models have been proposed in the literature, and the focus of this paper will be on some of these more complicated models, specifically models addressing the problem of time-varying volatility.

Important Models

Models aiming to account for the variation of volatility often seen in financial time series evolved in the 1980s, when Robert Engle first proposed the "Auto Regressive Conditional Heteroskedastic" (ARCH) model, which apart from an expected level of volatility takes the error terms of a number of recent periods into account (Ruppert, Matteson, 2015). Later his student Tim Bollerslev proposed the "Generalized Auto Regressive Conditionally Heteroskedastic" (GARCH) model, which apart from the expected volatility and the contribution of the error terms also takes a level of the previous period(s) volatility into account (Ruppert, Matteson, 2015).

The results of these volatility models are often used in combination with another popular risk measure called "Value at Risk" (VaR). In the 1950s, the later Nobel Prize recipient Harry Markowitz conducted comprehensive research on portfolio theory. He was the first to suggest that standard deviations and expected returns, along with the normal distribution, could estimate outcomes of whole portfolios (Jorion 2006). His research inspired many investment managers to examine their positions regarding asset correlations, expected returns and standard deviations. The financial crises of the early 1980s led many of the big investment firms to look to adopt a risk measure that could take all their assets into account and measure the aggregate risk they were exposed to. When investing great sums of money, the potential damage from a big loss hurts more. Therefore, large efforts were made to measure the tail risk of the distribution of outcomes, building on Markowitz's framework. The name of the measure comes from a JP Morgan manager named Dennis Weatherstone, who during this time of crisis asked his staff to come up with a number that described the total exposure of all assets over the following 24 hours (Jorion, 2006). This number was dubbed "Value at Risk", and its derivation has been developed many times since.

After calculating this number, the investment manager will have an idea of the size of the potential loss of the assets being held. Deriving a VaR measure demands a series of decisions when it comes to the inputs of the formula. It is important to take into account how the VaR measure is to be used when selecting these parameters, as you can get very different measures depending on how you choose your parameters and the data to estimate from.

In 2001 an alternative risk measure to the VaR model was introduced to better account for tail risk. This model, called "Expected Shortfall" (ES), delivers the expected loss/gain given that you breach the level of significance (Hull, 2007). The main reason for the development of this model was the ongoing critique of VaR not being a coherent risk measure, as will be discussed further in the theory chapter. In 2013, the Basel regulation for banking changed its measurement method to ES, and thus the VaR method is falling in popularity (Hull, 2012). VaR is still very much in use, however, as some investment firms leave the option of using VaR or ES on the portfolio up to their clients. Also, some of the most important remaining users of the VaR model are investment managers in the insurance and pensions business. The regulation these managers have to report to relies on VaR specifics, and it is therefore imperative that their portfolios can deliver good VaR measures under the Solvency II regulation.

Solvency II

All insurance and pensions companies operating in the European Union are subject to the above-mentioned regulation, Solvency II. The European Insurance and Occupational Pensions Authority (EIOPA) sets the guidelines, which consist of three pillars: calculation of insurance liabilities and capital (pillar 1), a supervisory process (pillar 2) and a reporting process for transparency purposes (pillar 3) (Solvency II, 2019). The latest regulation, which came into effect in 2016, had many updates and additions relative to Solvency I, but for investment managers at insurance and pensions companies the most important part of the regulation is pillar 1 regarding capital calculation. There is an imposed Minimum Capital Requirement (MCR) and a Solvency Capital Requirement (SCR), the latter requiring enough capital to cover losses at 99.5% confidence over a one-year period. The SCR and MCR are calculated with a standard formula that is a sum of the capital needed to cover the underwriting risk, market risk, credit risk and operational risk. Companies are allowed to develop an internal model, if this can be verified to adequately predict the risks the company faces. If companies do not comply with their SCR they will receive a warning, and a breach of the MCR will, unless quick action is taken, result in a revocation of the insurer's authorization (Solvency II, 2019). For an investment manager at an insurance and pensions firm it is the market risk part of pillar 1 that is of particular interest. EIOPA delivers a monthly calculation of a standard index measure for the risk of the different asset classes of interest. The more volatile the asset class, the larger the percentage of the capital invested in the asset class that must be kept in the "Own Funds", which are liquid cash or other readily available funds. For stock and bond investments of the type used in this paper (type 1 investments in the market risk part of pillar 1), the calculation of the SCR looks as follows (Solvency II, 2019):

$$SCR = 39\% + \frac{1}{2}\left(\frac{CI - AI}{AI} - 8\%\right)$$

Where CI is the current index and AI the average index, both referring to the index provided monthly by EIOPA, an index consisting of some of the largest indices on a global scale.

The second part of the equation is a symmetric adjustment to avoid extreme reactions in the market following losses. The reverse effect of the symmetric adjustment allows companies to have a smaller SCR when the CI is lower than the AI. It is to be noted that the symmetric adjustment is limited to +/- 10%. As this standard model clearly is very simple, and hence not necessarily very accurate, it may be worthwhile for an investment manager to have an internal model that gives a more precise, and possibly lower, 99.5% VaR measure, hence leaving more capital to invest, as less capital will be tied up in the "Own Funds".
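As a small worked example of the standard formula (a sketch only; the index levels below are hypothetical and not EIOPA data), the equity stress with the capped symmetric adjustment can be computed in R as:

```r
# Sketch of the Solvency II type-1 equity stress with the symmetric adjustment.
# ci and ai are hypothetical current and average index levels, not EIOPA figures.
equity_stress <- function(ci, ai) {
  sa <- 0.5 * ((ci - ai) / ai - 0.08)   # symmetric adjustment
  sa <- max(min(sa, 0.10), -0.10)       # adjustment is capped at +/- 10%
  0.39 + sa                             # 39% base stress plus the adjustment
}

equity_stress(ci = 95, ai = 100)        # a CI below the AI lowers the required stress
```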


2.2 Problem Formulation

The PFA Holding annual report for 2018 reports under the Solvency II regulation, as the company falls under EIOPA's authority as an insurance and pensions company. PFA reports on the performance of its equities, as well as bonds and several other investment areas that affect the financial performance of the company throughout the year (PFA, 2018). This paper intends to focus on the decisions that an investment manager at PFA Holding faces, regarding not only investment strategy in the light of financial performance, but also in the light of the regulation that must be met. A specific goal of the paper is to make the examination of the problem as realistic as possible in order to deliver answers that hold up in reality and could have a direct impact if applied at PFA.

The decisions regarding regulation for an investment manager at a large insurance institution such as PFA are tied to the Solvency II regulation mentioned in the introduction. First of all, it is assumed the investment manager complies with this regulation, which means that all capital invested will lead to a required amount of capital in the "Own Funds". Because of this demand the regulation imposes on the size of the "Own Funds", tied to the amount of risk the company takes on in its investment positions, it is of great importance how volatile the portfolio is that the investment manager puts together. The SCR mentioned earlier demands that the portfolio manager sets aside enough capital to cope with losses at 99.5% confidence over a one-year period. This means that when the portfolio is expected to be highly volatile in the next year, more capital will have to be set aside for the "Own Funds", and therefore less profit is earned from the returns on the portfolio, as less total capital is invested. It is also of interest for an investment manager at PFA to examine whether the standard model imposed by Solvency II captures the 99.5% VaR over one year accurately enough. For internal use, but also for reporting purposes, an internal model, if proved to be more accurate, could drastically lower the SCR that the manager faces.

Given the amount of capital an investment manager at a firm such as PFA has to invest, it is assumed that the simulated portfolio used in this paper should be highly diversified and invested in both stock and bond related assets. The two investment strategies examined in this paper were chosen after discussions with the supervisor. These discussions indicated that, while a "Nominally Weighted" (NW) portfolio has been widely used in the industry for a long time, several companies are shifting to a "Volatility Weighted" (VW) portfolio to achieve better results. In an NW portfolio the holdings of the different assets are set to a certain percentage of the capital invested in the asset relative to the total amount of capital invested. A VW portfolio, on the other hand, sets a predetermined desired level of volatility and spreads the relative volatility across the assets in the portfolio using the same percentages as for the NW portfolio. It is interesting to examine the empirical performance of these two strategies from an industry standpoint, as they are two drastically different ways to invest. The calculations of the two strategies will be discussed further in the theory chapter.

There are several possible options regarding the reinvestment and readjustment strategies for a manager in this position, ranging from readjusting positions frequently to very long holding periods for certain assets, and there has to be a decision as to what to do with dividends from the stock-related assets. To avoid unrealistic and expensive amounts of frequent readjustment transactions, it has been determined (after further consulting with the supervisor of the paper) that the first day of trading of each month will be the readjustment day. This is deemed a reasonable time period, as 12 readjustment days a year keep the transaction fees fairly low, and one-month intervals set a limit on how far the assets can move away from the proportional levels intended. It is further assumed, for simplicity, that there are no transaction costs and that all dividends will be reinvested, as gross returns are used. The upside of monthly readjustments is that it limits the number of transactions made in times of high volatility. However, with a monthly readjustment period you have to accept that, if the market is highly volatile, your levels can move far away from the intended levels over the course of a month.


The three questions this paper intends to answer within this framework are as follows:

• Derive a statistical model to estimate the volatilities of a Nominally-Weighted portfolio and a Volatility-Weighted portfolio.

• Describe the results and empirical performance of the two portfolios from a regulatory angle after using the volatility measure attained in the first question.

• Describe the results and empirical performance of the two portfolios regarding their Sharpe Ratios.

As these three questions are answered, the result will deliver a suggested model for internal and reporting use, a clear recommendation as to whether this internal model would put a lower demand on the company's "Own Funds" than the standard model does, and an answer as to which approach is better regarding the Sharpe Ratio when looking at the choice between an NWP and a VWP.

2.3 Delimitations

As these questions are examined it is important to keep in mind that the conditions in a paper such as this are not always identical to the real world. A portfolio manager is not in reality bound by a monthly deadline to readjust his portfolio, nor limited to the assets chosen in this paper. Given the limited nature of a research paper such as this, it has been determined that the theory and analysis of optimal asset selection is left out at this time. It is a subject on which many research articles have been written, and it is therefore considered to take too much focus away from the real focus of this paper, which is the analysis of volatility estimation and the comparison between weighting your portfolio nominally and weighting it using a volatility measure. This comparison, along with the consequences the results will have regarding regulation, is the main focus, and therefore the assets that have been chosen for the paper and the weights for both the nominal and volatility-weighted portfolios have been decided through discussions with the supervisor of the paper. These discussions have led to the choice of using stock indices and bond indices from around the globe to assure that the portfolio is properly diversified, as one would assume a portfolio of considerable size would be.


The analysis of the volatility modeling has also been limited to three predetermined models. The first is an autoregressive conditional heteroskedastic model (ARCH), the second an extension of the first, the generalized autoregressive conditional heteroskedastic model (GARCH), and the last an exponentially weighted moving average (EWMA) model. The subject of fitting volatility models to time series data can be the subject of its own research paper, but to maintain the focus on the comparison between the portfolio options previously determined, this process has been limited to three models that vary considerably in complexity. These three models have been chosen since they have been widely used in time series research, with the GARCH model, however, limited to the GARCH(1,1) version, which is the simplest version of this specific model.

The idea is that the three models should deliver different accuracy due to their different complexity. The three models are limited to being tested with the normal distribution and the Student's t distribution, as the t-distribution allows for a better fit of the fatter tails often seen in financial time series data. Regarding the monthly adjustment of the two portfolios, any transaction costs tied to the rebalancing are assumed to be of equal size and will therefore, for simplicity, be ignored.

Another limitation is the scaling of the daily VaR to a one-year VaR by multiplication by 252. Theoretically it makes sense; however, one cannot assume the same losses to occur for a full year. The closer the estimation period is to one year, the more precise the one-year VaR measure will be. The calculation of daily VaR levels and the scaling were chosen because the data was limited to 18 years. Lastly, in reality, when estimating the parameters of a statistical model you do not have information beyond the current day's information. In order to base the estimations on enough data points, the parameters in the GARCH model are assumed to be constant over long periods of time; estimations are therefore based on the whole dataset and the parameters are used as constants from start to finish.

2.4 Setup of Paper

The abstract briefly sums up the essence of the findings of the paper and delivers some perspective on the results. After the abstract, the introductory part of the paper is meant to introduce the reader to the world of an investment manager, and to introduce the main models and regulation that the paper relies on. This part also introduces the reader to the problem formulation, and the framework that the questions in the formulation follow. It also briefly lists the main delimitations.

The theory chapter introduces the risk measures VaR and ES and the two distributions used in the estimations (normal and Student's t). It then follows up with a description of the ARCH(1), GARCH(1,1) and EWMA models. The chapter then describes the DCC-GARCH model, a multivariate GARCH model used in the formulas for the NW portfolio and the VW portfolio. Finally, the chapter explains the different statistical tests used to evaluate the models, specifically the Ljung-Box Test, the ARCH LM Test and Kupiec's Test. The next chapter introduces the data used in the paper and how it is treated before use in the statistical analysis chapter, which leads the reader through the process of first estimating the models, then testing these models, and then estimating the volatility of the two portfolios. Finally, the Sharpe Ratio is attained for both portfolios and they are both evaluated against the standard model SCR.

Finally, in the discussion and conclusion chapters, the accuracy of the results is discussed along with their interpretation, and a broader perspective is offered on the ramifications of the findings. It is also discussed how the examination of the problem could have been improved and where in the statistical estimation one could obtain more precise parameters.


3. Theory

There have been many measures of risk for financial assets and portfolios over time; from the 1950s onward, however, the research on and use of these measures have increased significantly. In this section two different risk measures will be discussed, followed by a description of the statistical models that will be used. The process behind the creation of the two portfolios is then introduced, before lastly the statistical tests used to analyze these models and portfolios are explained.

Tail Risk Models

Value at Risk

Value at Risk (VaR) is a risk measure, which estimates the maximal loss/gain expected at a certain probability given a certain time period decided by the desired significance level 𝛼. If e.g. an investor has a 1-year VaR of $5 million at 5% 𝛼 -value, this tells him that there is a 5% probability he will lose more than $5 million in the next year. Or looking at it another way, there is a 95% chance he is going to lose less than $5 million in the next year. Mathematically the formula looks as follows (Jorion, 2006):

$$VaR_\alpha(X) = -\inf\{x \in \mathbb{R} : F_X(x) > \alpha\}, \qquad VaR_\alpha^{cont}(X) = F_{-X}^{-1}(1-\alpha)$$

Where $F_X(x)$ is the cumulative distribution function of X. VaR represents the lowest value X can take for the cumulative probability to exceed $\alpha$, with the sign flipped so that a loss is reported as a positive number. If the distribution is continuous, the formula can be simplified to the second formula.

VaR assumes the composition of a distribution regarding the outcomes, and the tails of this distribution is where the model finds its answers.


VaR delivers the value at the exact point between the 5% lowest outcomes and the 95% that lie above. VaR can be computed on either side, or on both, depending on what one wants to examine. There are two types of VaR, non-parametric and parametric. The non-parametric VaR does not assume any mean or standard deviation for the distribution, as it is based on empirical historic observations. The strength of this type of VaR distribution is the fact that the readings are based on empirically observed values; in retrospect, the perfect VaR looking back in time. The problem with this type of VaR is that it is often hard to get a big enough sample size for the distribution without using very old observations that are less relevant to the current situation, or at least considerably less relevant than the more recent observations. A large loss that happened 2 years ago should not be given the same consideration as a loss that happened 2 weeks ago.

The parametric VaR assumes a statistical approach to the composition of the distribution. It uses historical observations to create a mean and standard deviation, and this will deliver a more robust measure over time if the distribution used is close to the true nature of the loss/gain distribution. This way you still use history to predict the future, without the assumption that history will actually repeat itself. Without a doubt, the parametric VaR methods see the most use, and especially the normal distribution is very popular because of its simplicity. The advantage of Value at Risk is that it is easy to estimate and easy to comprehend. It works for single assets and for multi-asset portfolios, and can be a good indicator of the total risk exposure in certain situations. A disadvantage of VaR is that it does not tell the investor anything about how big the loss will be if the desired probability is breached. The investor in the earlier example might know that he is going to lose more than $5 million with a 5% probability, but it might also be important to him to know if he can expect to lose up to $10 million or $100 million if he breaches the 5%. If the distribution is in fact not normal and the true tails are larger, there can be a big difference in what he can expect to lose if he breaches the 5%.

To create a VaR measure, three decisions have to be made. First the holding period of the asset or portfolio must be decided. Depending on the type of portfolio this can vary from anything between a day and a year. For some portfolios that are less liquid, it would make sense to have a longer holding period to imitate the difficulty of liquidating the asset. It can however be difficult to get enough data to make a precise estimation if the holding period is long. With a holding period of a year there will be a very limited number of observations to draw from, thus leaving the estimate with a large amount of uncertainty. In a situation where a 1-year VaR is needed, a shorter-period VaR is often calculated and then scaled up to the desired time period. This of course limits the accuracy, as one cannot assume that the short-term predictor is accurate on a larger scale, since it is estimated for the short term. The second parameter that must be chosen is the observation period. This is the period being used to estimate from, and it is important to have enough data to draw from in order to get more accurate results.

Another advantage of having a large data set is that there is a good chance you will have more observations from times of recession, when the largest losses are usually seen. This will give a more realistic idea of the worst-case scenario. The last parameter is the confidence level. It explains how frequently you will realize a loss at a certain level. A confidence level of 5%, as mentioned earlier, means that there is a 5% chance your loss will be bigger than the VaR measure. This parameter is also very situational, and the investor must determine what certainty he needs a measure for. For regulation, very high confidence levels around 0.5%-1% are often required, whereas internal models at companies often use confidence levels around 5%. Despite the fact that VaR is not a coherent risk measure, as will be explained further in the next section, it is still VaR that will be used in the calculations in this paper. This is done as it is the Solvency II regulation that the investment manager we are simulating has to follow, and it will therefore be less complicated to compare results with the standard model if VaR is attained.
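As a minimal sketch of the parametric VaR calculation in R (the return series below is simulated placeholder data, and the degrees of freedom are illustrative rather than estimated):

```r
# One-day parametric VaR at level alpha, reported as a positive loss.
set.seed(1)
ret   <- rnorm(5000, mean = 0, sd = 0.01)   # placeholder daily returns
alpha <- 0.005                              # 99.5% confidence as in Solvency II

mu    <- mean(ret)
sigma <- sd(ret)
var_normal <- -(mu + sigma * qnorm(alpha))

# Student-t version, with the quantile rescaled so the distribution has variance sigma^2
v     <- 5                                  # illustrative degrees of freedom
var_t <- -(mu + sigma * sqrt((v - 2) / v) * qt(alpha, df = v))
```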

Expected Shortfall

Expected Shortfall (ES) is a risk measure very similar to VaR. To acquire a measure from ES, a q-value, a time period and an observation period need to be determined. If e.g. you have a 1-year ES of $5 million at a 5% q-value, this means that, conditional on being in the worst 5% of outcomes, your expected loss over the year is $5 million. As it is so closely linked to VaR, it is also known in some research papers as "Conditional Value-at-Risk" (CVaR).

It is important however to note the major difference; that ES takes the shape of the tail into account and thus delivers an expected loss given you are in the worst 5%. The mathematical formula for ES can be described as follows (Hull, 2007):

$$ES_\alpha(X) = E\big[-X \mid X \le -VaR_\alpha(X)\big], \qquad ES_\alpha^{cont}(X) = \frac{1}{\alpha}\int_0^{\alpha} VaR_\gamma(X)\,d\gamma$$

This means ES is equal to the expected loss given that X is below $-VaR_\alpha(X)$. Under the condition that the distribution is continuous, the formula can be simplified to the second, integral form.
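A minimal sketch of the historical (non-parametric) versions of the two measures, again on simulated placeholder returns:

```r
# Historical VaR and Expected Shortfall at level alpha (placeholder data).
set.seed(1)
ret   <- rnorm(5000, mean = 0, sd = 0.01)
alpha <- 0.05

var_hist <- -quantile(ret, probs = alpha)            # empirical VaR
es_hist  <- -mean(ret[ret <= quantile(ret, alpha)])  # average loss beyond the VaR threshold
```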


For a historical, non-parametric distribution like the one illustrated above, ES or CVaR is the loss you expect given you are past the VaR threshold. ES will therefore always be a more extreme value than VaR. ES is often preferred to VaR as it satisfies the properties of a coherent risk measure, which VaR does not. The four properties of a coherent risk measure are monotonicity, sub-additivity, homogeneity and translational invariance.

Monotonicity demands that if all scenarios of portfolio A are better than those of portfolio B, then portfolio A should always have less risk. Sub-additivity demands that the risk of a portfolio consisting of portfolio A and portfolio B cannot be greater than the sum of the separate risks of the two portfolios; this is also known as the diversification principle. Homogeneity demands that if you double your portfolio, then you double your risk. Finally, translational invariance demands that if you add a sure profit to a portfolio, then the risk of the portfolio is reduced by that amount. VaR violates the sub-additivity property, as you can construct scenarios where the risk of a portfolio is greater than the sum of the individual risks in the portfolio. This would result in the investor refraining from diversification. Under certain circumstances VaR can uphold the sub-additivity property, but VaR will be used in this paper because it is, as earlier noted, used for the standard model of calculation regarding the Solvency Capital Requirement.


Distributions

Normal Distribution

The normal distribution, also known as the Gaussian distribution, is the most commonly known distribution, with its bell shape when plotted as a probability density function. It is a function with parameters $\mu$ and $\sigma^2$, mean and variance respectively, that will help you determine the probabilities of the observations obtained (Jorion, 2006):

$$f(x \mid \mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}}\,e^{-\frac{(x-\mu)^2}{2\sigma^2}}$$

As can be seen below, a strength of the normal distribution is the simplicity of calculations in terms of standard deviations around the mean: roughly 68%, 95% and 99% of observations fall within one, two and three standard deviations of the mean respectively, so the wider the band you choose, the larger the probability that a random variable will be within your chosen number of standard deviations.


The normal distribution describes many populations (hence the name "normal") because of the "central limit theorem", which states that sums or averages of large samples converge to the normal distribution when enough sample points have been taken. Random variables are also often described with the Gaussian distribution, and in the models used in this paper this distribution will be one of the two that are tested.

Student's t Distribution

Another widely used distribution is the t-distribution. It is a distribution that looks a lot like the normal distribution, but has an extra parameter (degrees of freedom, v), which determines the exact shape of the distribution. In most cases when the t-distribution is preferred over the normal distribution for financial data, it is a flatter version with fatter/longer tails. This means the stochastic part of these assets shows more risk than that of assets that fit a normal distribution, as fatter tails mean more probability away from the mean. However, at large degrees of freedom, when the t-distribution gets close to v = 100, it can be approximated by the normal distribution, as the two are very similar at this point. The t-distribution looks as follows (Jorion, 2006):

$$f(x \mid v) = \frac{\Gamma\!\left(\frac{v+1}{2}\right)}{\Gamma\!\left(\frac{v}{2}\right)\sqrt{v\pi}}\left(1 + \frac{x^2}{v}\right)^{-\frac{v+1}{2}}$$

Here v is the degrees of freedom, which has to be a positive integer, and $\Gamma(\cdot)$ is the gamma function. As can be seen below, the degrees of freedom allow for a flatter shape of the distribution.


Both the normal and the t-distribution will be used in this paper when estimations are made in models including a stochastic module.
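The fatter tails can be seen directly by comparing quantiles of the two distributions in R:

```r
# 0.5% quantile under the standard normal versus a t-distribution with 5 degrees of freedom
qnorm(0.005)       # approximately -2.58
qt(0.005, df = 5)  # approximately -4.03, i.e. considerably further out in the tail
```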

Statistical Volatility Models

When estimating volatility over time, many models use past data to do so. The simplest way to acquire an estimate is to simply take the standard deviation over the desired observation period. The standard deviation formula for samples looks as follows:

$$\sigma = \sqrt{\frac{1}{N-1}\sum_{i=1}^{N}(x_i - \bar{x})^2}$$

Here N is the number of observations and $x_i$ is the i'th observation. The complication of using the standard deviation of prior observation periods is that it assumes constant volatility in the future observations. This assumption does not hold for most financial time series data, especially the type of data used in this paper, as will be shown in the data analysis. More recent information is more relevant, and therefore a "Moving Average" (MA) model of the observed volatilities over a determined time period would be a better way to estimate the future volatility. The formula for an MA looks as follows:

$$\sigma_t^2 = \frac{1}{M}\sum_{i=1}^{M}\sigma_{t-i}^2$$

Here M is the number of observations included in the estimation and $\sigma_{t-i}$ is the observed volatility. Notice the similarities to the standard version of the standard deviation, as both are a sum of squared deviations divided by the number of observations taken into account. In the MA, the M allows for a shorter or longer period to estimate from. This model gives the option to exclude observations that are not recent enough to be included in the estimation. The downside, however, is that if a small number of observations is used, then a single large observation will have a big impact on the estimation both when it enters and when it leaves the calculation because it is out of date. This will cause jumps in the estimation that seem unrealistic. If a large number of observations is used, the variance of the volatility estimates will indeed "smooth out", as single observations no longer have the same impact. Below is an example of the moving average of the S&P 500 daily value. Here a 50-day MA is used as an estimate for the value of the index. The level is indeed smoothed out, but it is noticeable that it is constantly over- or underestimating. This could be because too many observations are taken into account.
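A sketch of such a moving-average estimate in base R, using squared returns as the observed volatility proxy (an assumption for illustration; placeholder data):

```r
set.seed(1)
ret <- rnorm(1000, mean = 0, sd = 0.01)   # placeholder daily returns
M   <- 50                                 # window length

# Moving-average volatility: root of the mean of the last M squared observations
ma_vol <- sapply(M:length(ret), function(t) sqrt(mean(ret[(t - M + 1):t]^2)))
```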


However, it is still unrealistic to assume that non-recent observations should have an equal impact on the volatility estimate as the most recent observations. To avoid an estimated average where all observations have the same impact, like the MA, the following three models have been chosen to be examined in this paper. They use the same principle as the MA, as they estimate the volatility tomorrow based on the volatility observed today. They are however more complicated, as they are intended to account for the volatility clustering observed in the time series. This volatility clustering will be described in more detail in the data chapter.


These three more complicated models are all based on the same concept regarding the estimation of returns, which looks as follows:

$$r_t = \mu + \varepsilon_t, \qquad \varepsilon_t = \sigma_t\, z_t, \qquad z_t \sim N(0,1), \qquad \sigma_t^2 = E[\varepsilon_t^2]$$

The returns are built from a mean constant and an error term that is a product of the volatility and a Brownian motion term $z_t$. This term will follow one of the two distributions discussed earlier, and the estimates for the volatility will come from the statistical models, with all calculations carried out in the statistical software R.

ARCH

The "Auto Regressive Conditional Heteroskedasticity" model (ARCH) is a volatility model that is commonly used on time series data that exhibit autocorrelation. It describes the volatility using the error terms of a number of previously observed volatilities. By using q lagged values of the squared error terms in the estimation along with a constant, it assumes that volatility has a latent long-term level that is then adjusted by the deviations of the most recent q periods. The error terms follow a stochastic process, $z_t$, combined with the period-specific volatility. The general ARCH(q) model looks as follows (Ruppert, Matteson, 2015):

$$\varepsilon_t = \sigma_t z_t, \qquad \sigma_t^2 = \alpha_0 + \alpha_1\varepsilon_{t-1}^2 + \dots + \alpha_q\varepsilon_{t-q}^2$$

Here $\alpha_0, \alpha_1, \ldots, \alpha_q$ are estimated parameters and $\sigma_t^2$ is the estimated volatility for the upcoming period. The model effectively accounts for recent changes in volatility, as it assumes that a high deviation from the estimated volatility in recent periods will lead to a high estimated volatility in the following period. However, if volatility is estimated correctly, this will lead to lower error terms, and the estimated volatility will go back towards the long-run average $\alpha_0$.

GARCH

The “Generalized Auto Regressive Conditional Heteroskedasticity” model (GARCH) is an extension to the ARCH model that also includes a regressed link from previous periods observed volatility. Like the ARCH model, the error terms in the model are a product of the volatility and a stochastic variable. The most important difference from ARCH to GARCH is that GARCH models for heteroskedasticity in the data set, and therefore work better on data that exhibit volatility clusters. The GARCH model also includes a part that relates to the long run average volatility. In this paper the most used version of this model, GARCH(1,1) will be used and it looks as follows (Ruppert, Matteson, 2015):

$$\varepsilon_t = \sigma_t z_t, \qquad \sigma_t^2 = \omega + a\,\varepsilon_{t-1}^2 + \beta\,\sigma_{t-1}^2$$

The three parameters are connected through the weight placed on the long-run variance, $1 - a - \beta$, so that $\omega = (1 - a - \beta)\,V_L$ and the long-run average variance the model is drawn towards is $V_L = \omega/(1 - a - \beta)$. This only works if the model is stable, which it is if $a + \beta < 1$. The parameter $a$ represents the weight on the lagged squared error term of the returns and $\beta$ represents the weight on the last period's volatility.

The strength of the GARCH model is that it includes three important aspects of what intuition tells us influences volatility. First it has the underlying long run volatility.

Secondly the volatility will be influenced by the volatility of the period before. And lastly, the deviation from the expectation, the error term of the period before will affect the expected volatility. Although GARCH models are not perfect, they are relatively simple to use and they perform very well on financial time series.
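Since the thesis carries out its estimation in R, a sketch of how ARCH(1) and GARCH(1,1) specifications with Student-t errors could be set up with the rugarch package is shown below (the return series is placeholder data, and the exact specification used in the thesis may differ):

```r
library(rugarch)

set.seed(1)
ret <- rnorm(2000, mean = 0, sd = 0.01)   # placeholder return series

# ARCH(1): an sGARCH specification with one ARCH term and no GARCH term
spec_arch <- ugarchspec(variance.model = list(model = "sGARCH", garchOrder = c(1, 0)),
                        mean.model     = list(armaOrder = c(0, 0)),
                        distribution.model = "std")       # Student-t errors

# GARCH(1,1)
spec_garch <- ugarchspec(variance.model = list(model = "sGARCH", garchOrder = c(1, 1)),
                         mean.model     = list(armaOrder = c(0, 0)),
                         distribution.model = "std")

fit_arch  <- ugarchfit(spec_arch,  data = ret)
fit_garch <- ugarchfit(spec_garch, data = ret)
sigma(fit_garch)   # fitted conditional volatilities
```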

EWMA

The last model that will be used to examine the data in this paper is the "Exponentially Weighted Moving Average" model (EWMA), which is a specific variation of the GARCH(1,1). It does not have a mean-reverting component like the constant $\omega$ of the GARCH, but it has both a part that regresses on the last squared error term and a part that regresses on the volatility of the period before. The model looks as follows (Jorion, 2006):

$$\sigma_t^2 = (1-\lambda)\,r_{t-1}^2 + \lambda\,\sigma_{t-1}^2$$

Where $\lambda$ is a persistence parameter that smooths the volatility, $r_{t-1}$ is the return of the period before and $\sigma_{t-1}^2$ the variance of the period before. The specific case of the EWMA model used in the RiskMetrics(TM) model uses $\lambda = 0.94$ for daily data and $\lambda = 0.97$ for monthly data, which has been calculated to be most accurate for financial time series data.

The strength of this model is that it is more easily applicable than the regular GARCH(1,1) without losing much explanatory power. The weakness of the model is that it limits the possibility of tailoring the model to the data, since it assumes a fixed lambda from the start. It is also questionable that it leaves out a long-run average, which intuitively makes sense for most time series data.
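A minimal base-R sketch of the EWMA recursion with the RiskMetrics lambda of 0.94, seeding the recursion with the sample variance (an assumption made for illustration):

```r
# EWMA volatility path for a return vector `ret`
ewma_vol <- function(ret, lambda = 0.94) {
  s2    <- numeric(length(ret))
  s2[1] <- var(ret)                     # seed with the unconditional variance
  for (t in 2:length(ret)) {
    s2[t] <- (1 - lambda) * ret[t - 1]^2 + lambda * s2[t - 1]
  }
  sqrt(s2)                              # return volatilities rather than variances
}
```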

Multigarch

The three models described above are all aimed at describing the volatility for single assets. To estimate volatility for a portfolio of assets using time series data, the main difference that comes into play is the correlation between the assets in the portfolio. The simplest way to estimate volatility in a portfolio is to assume static correlations and use the following formula to attain these correlations and covariances:

$$Cor_{xy} = \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_i (x_i - \bar{x})^2\,\sum_i (y_i - \bar{y})^2}}, \qquad Cov_{xy} = Cor_{xy}\,\sigma_x\sigma_y$$

Where $x_i$ is the i'th observation of x and $\bar{x}$ is the mean of asset x. The covariances are then used in the following formula for multi-asset volatility, here shown as an example with three assets:

$$\sigma_{pf}^2 = w_1^2\sigma_1^2 + w_2^2\sigma_2^2 + w_3^2\sigma_3^2 + 2w_1w_2\,Cov_{1,2} + 2w_1w_3\,Cov_{1,3} + 2w_2w_3\,Cov_{2,3}$$


Where the w's are the weights of the different assets. Noticeable in this formula is that, apart from the weighted variances in the first part of the formula, the covariances that follow can either decrease the volatility (when negative) or increase the volatility (when positive). To reduce risk, you would therefore want to build a portfolio with negative covariances between the assets. The complication of using this formula on financial data is the lack of stationarity in the data. The many periods where the assets exhibit significantly higher volatility will lead to a large underestimation of the true volatility in these periods if a static volatility is used. That is why, for financial data, it makes more sense to use a model that assumes dynamic correlations.

Multivariate GARCH models, like the "Dynamic Conditional Correlation" (DCC) GARCH model, will deliver a time series of covariance and correlation matrices that are dynamic over time. This is ideal for estimating portfolio volatility, as you can then use these dynamic covariances in the above formula. The multivariate DCC-GARCH model works as follows (Ruppert, Matteson, 2015):

$$r_t = \mu_t + a_t, \qquad a_t = H_t^{1/2} z_t, \qquad H_t = D_t R_t D_t, \qquad R_t = Q_t^{*-1} Q_t Q_t^{*-1}$$

$$Q_t = (1 - a - b)\,\bar{Q} + a\,\varepsilon_{t-1}\varepsilon_{t-1}^T + b\,Q_{t-1}, \qquad \bar{Q} = \frac{1}{T}\sum_{t=1}^{T}\varepsilon_t\varepsilon_t^T$$

$$Q_t^* = \begin{pmatrix} \sqrt{q_{11,t}} & 0 & 0 \\ 0 & \sqrt{q_{22,t}} & 0 \\ 0 & 0 & \sqrt{q_{33,t}} \end{pmatrix}, \qquad q_{ii,t} \text{ being the diagonal elements of } Q_t
$$


Where:

$r_t$ is the log returns

$\mu_t$ is the conditional expected value of $r_t$

$a_t$ is the mean-corrected returns

$H_t$ is the conditional covariance matrix of $a_t$

$z_t$ is an iid error term

$D_t$ is the diagonal matrix of conditional standard deviations of $a_t$ obtained from the univariate GARCH models

$R_t$ is the conditional correlation matrix of $a_t$.

The input of the model is the log return time series and a GARCH model for each of these series. When estimating a DCC-GARCH model with t-distributed error terms, two steps are followed. Initially the likelihood function for $a_t$ is determined using the joint densities of the standard errors, and the log likelihood is obtained. The first estimation step then fits the parameters of the univariate GARCH models and obtains the conditional standard deviations in $D_t$. In the second step, the parameters in the log likelihood are determined to find values for a, b and v (degrees of freedom for the t-distribution) that allow for the calculation of $Q_t$ (after a $\bar{Q}$ has initiated the process), which leads to the conditional correlation matrix and further the conditional covariance matrix. These dynamic correlation matrices can then, as mentioned earlier, be used to calculate a portfolio volatility that is not based on unrealistic static correlations. The DCC-GARCH calculations will be carried out using the "rmgarch" package in R.
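A sketch of how such a DCC-GARCH fit is typically set up with the rmgarch package (the return matrix below is placeholder data standing in for the 11 series, and the specification is illustrative):

```r
library(rugarch)
library(rmgarch)

set.seed(1)
ret_mat <- matrix(rnorm(3 * 2000, sd = 0.01), ncol = 3)  # placeholder for the 11 return series

uspec <- ugarchspec(variance.model = list(model = "sGARCH", garchOrder = c(1, 1)),
                    mean.model     = list(armaOrder = c(0, 0)),
                    distribution.model = "norm")

dcc_spec <- dccspec(uspec = multispec(replicate(ncol(ret_mat), uspec)),
                    dccOrder = c(1, 1), distribution = "mvt")  # t-distributed DCC errors

dcc_fit <- dccfit(dcc_spec, data = ret_mat)
H_t <- rcov(dcc_fit)   # time series of conditional covariance matrices
R_t <- rcor(dcc_fit)   # time series of conditional correlation matrices
```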

Portfolio Estimation

Nominally Weighted Portfolio

The traditional way of constructing a portfolio is the capital budgeting approach, where a pre-determined weight of the total capital invested is allocated to each asset. These weights are then decided after analysis of the returns, correlations and volatilities of the different assets.

The decision on the weights is a study of its own, and will therefore be left out at this time. After dialogue with the supervisor of the paper and a view of the total investments of PFA in 2018, the following capital weights were determined for the Nominally Weighted portfolio:

[Figure: intended capital weights of the Nominally Weighted portfolio: MSCI US 20%, MSCI EU 15%, MSCI ASIA 15%, US10Y 12%, CAD10Y 4%, JPY10Y 8%, AUD10Y 4%, UK10Y 4%, GER10Y 4%, FRA10Y 4%, High Yield 10%]

Over time, as the different assets appreciate and depreciate, the relative holdings of each asset will move away from the intended weights, and it is therefore important to rebalance the portfolio from time to time. Below is an example of the first 100 observations and the weights of 6 different assets in the nominal portfolio. The vertical lines mark the times of rebalancing, at which the portfolio is back to how it was intended.


In this paper, as mentioned earlier, it has been decided that rebalancing will take place on the first trading day of each month, and the calculations to rebalance look as follows:

$$w_{i,1} = \frac{b_i^0}{p_{i,1}}$$

$$\text{Rebalance formula:}\qquad w_{i,t} = \frac{c_{i,t}}{p_{i,t}}, \qquad c_{i,t} = b_i^0\cdot PF\,Value_t$$

$$b_{i,t} = \frac{w_{i,t}\cdot p_{i,t}}{PF\,Value_t}$$

$$PF\,Value_t = PF\,Value_{t-1} + \sum_i \big(w_{i,t-1}\cdot p_{i,t}\big) - \sum_i\big(w_{i,t-1}\cdot p_{i,t-1}\big)$$

Where:

$w_{i,t}$ is how many units of asset i you hold at time t.

$b_i^0$ is the intended capital weight for each asset (relative to the full capital value of the portfolio); this vector does not change and is used when rebalancing the portfolio.

$b_{i,t}$ is the dynamic capital weight for each asset at time t. As prices change this vector is affected. The sum of this whole vector equals 1. It is these weights that are used when calculating the volatility of the portfolio.

$c_{i,t}$ is the capital value of asset i at time t corresponding to the intended capital weight for the asset relative to the total capital value of the portfolio.

$p_{i,t}$ is the price of asset i at time t.

The first formula sets the initial number of units needed of each asset. The second formula is the rebalancing formula that is used on the first trading day of each month. The third formula shows the amount of capital that should be spent on each asset to hit the intended weights. The fourth formula shows the dynamic weights, which are the value of the capital in asset i divided by the total value of the portfolio. Finally, the last formula shows the calculation of the total value of the portfolio, which is the value added from the rise/fall in prices plus the value of the portfolio the period before.
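A compact base-R sketch of this monthly rebalancing step and the daily value update, with variable names following the notation above (a sketch, not the thesis code):

```r
# Rebalance the nominally weighted portfolio on the first trading day of the month.
# b0: intended capital weights, p_t: prices on the rebalancing day, pf_value: current portfolio value.
rebalance_nw <- function(b0, p_t, pf_value) {
  c_t <- b0 * pf_value   # capital to be held in each asset
  c_t / p_t              # number of units of each asset after rebalancing
}

# Daily portfolio value update between rebalancing days.
update_pf_value <- function(pf_value_prev, w_prev, p_t, p_prev) {
  pf_value_prev + sum(w_prev * p_t) - sum(w_prev * p_prev)
}
```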

Volatility Weighted Portfolio


In recent years an alternative method of portfolio composition has emerged. The "Risk Budgeting" approach focuses on the risk attached to each asset class rather than the value of the asset. A total amount of risk for the portfolio is determined, and from there it is decided how big a portion of this risk should come from each asset class. In a situation without any correlation between the assets this would be a fairly simple calculation, where the amount of capital invested in each asset would serve as the weight and the volatility would then determine this amount. However, the correlation between the different assets complicates the calculations, and it is therefore necessary to use statistical software to get the right weight for each asset in order to get the desired volatility for the portfolio. After running the risk-budgeting portfolio optimization (in this paper the "riskParityPortfolio" package in R was used), the most likely scenario is that the sum of the weights will not equal 1. Because the relative differences in volatility most often do not match up with the desired weight of the volatility from each asset class, the problem is unsolvable if the constraint of having the weights sum to 1 is forced upon it. The solution is the implementation of what in financial terms is called "gearing". The idea is that you use the financial capital you have to take on the risk of a portfolio you cannot afford. This can be done in different ways; e.g. one could make a deal with a broker on futures contracts based on the assets in the portfolio. This means that you will pay the broker if the portfolio decreases, and the broker will pay you the returns if the desired portfolio increases. This way neither you nor the broker needs to put down the capital to own the portfolio, and the premiums for contracts like this will be considerably cheaper than the premiums of a loan. Your initial capital will then be used as collateral and payment of premiums for the agreement.

Another way to take on the desired risk of the unaffordable portfolio is to use the cash at hand as collateral for a loan and then use the loan to invest in the desired portfolio, deducting the payment of the loan premiums from your return earnings.

Since forward contracts are not available for all the assets of the portfolio used in this paper, it has been determined that the loan method will be used. The loan premium is, after discussions with the supervisor, set to the 1-month EUR LIBOR rate with an additional 25 basis points.


The calculations behind the "Risk Budgeting Portfolio" (RBP) are as follows (Palomar, 2019):

$$\sigma(\mathbf{b}) = \sqrt{\mathbf{b}^T\boldsymbol{\Sigma}\mathbf{b}}$$

$$RC_i = \frac{b_i(\boldsymbol{\Sigma}\mathbf{b})_i}{\sqrt{\mathbf{b}^T\boldsymbol{\Sigma}\mathbf{b}}}, \qquad \sum_{i=1}^{N} RC_i = \sigma(\mathbf{b})$$

$$RRC_i = \frac{b_i(\boldsymbol{\Sigma}\mathbf{b})_i}{\mathbf{b}^T\boldsymbol{\Sigma}\mathbf{b}}, \qquad \sum_{i=1}^{N} RRC_i = 1$$

$$RC_i^{des} = b_{0,i}\,\sigma(\mathbf{b}), \qquad RRC_i^{des} = b_{0,i}$$

$$\underset{\mathbf{x} \ge 0}{\text{minimize}}\quad \frac{1}{2}\mathbf{x}^T\boldsymbol{\Sigma}\mathbf{x} - \sum_{i=1}^{N} b_{0,i}\log(x_i)$$

Where:

$\sigma(\mathbf{b})$ is the volatility of the whole portfolio.

$\mathbf{b}$ is the vector of capital weights relative to the total capital value of the portfolio; b sums to 1.

$\mathbf{b_0}$ is the vector of intended risk weights of the assets relative to the total risk. It is a vector of constants and is not the same as b, which reports capital weights.

$\mathbf{x}$ is the vector that solves the minimization problem. Like b, it contains the relative capital weights of the assets compared to the whole portfolio.

$RC_i$ is the risk contribution of asset i; the sum of all $RC_i$ is $\sigma(\mathbf{b})$.

$RRC_i$ is the relative risk contribution of asset i; the sum of all $RRC_i$ is 1.

$RC_i^{des}$ is the risk asset i must contribute to meet the intended relative risk target compared to the total risk of the portfolio.

$RRC_i^{des}$ is the intended relative weight for the risk of asset i compared to the total risk.

The last equation is a minimization problem that solves for x, given $\boldsymbol{\Sigma}$ and $\mathbf{b_0}$ as input along with the total dataset. The vector x that solves the problem contains the relative capital weights of each asset needed to match the risk weights from $RRC_i^{des}$.


The constraints of the model are that x sums to 1 and all x ≥ 0. The calculations to solve the problem have been carried out with the "riskParityPortfolio" package in R.
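A sketch of the corresponding call with the riskParityPortfolio package (the covariance matrix and risk budget below are placeholders; element names follow the package documentation):

```r
library(riskParityPortfolio)

Sigma <- diag(c(0.04, 0.01, 0.02))   # placeholder covariance matrix (e.g. from the DCC-GARCH fit)
b0    <- c(0.5, 0.3, 0.2)            # placeholder intended relative risk contributions, summing to 1

rpp <- riskParityPortfolio(Sigma, b = b0)
x_t <- rpp$w                            # capital weights solving the risk-budgeting problem
rpp$relative_risk_contribution          # should be close to b0
```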

This information is then used on the first trading day of each month when the portfolio needs rebalancing. The weights of the relative risk for each asset used for rebalancing purposes look as follows:

Index:    MSCI US   MSCI EU   MSCI ASIA   US 10Y   CAD 10Y   JPY 10Y   AUD 10Y   UK 10Y   GER 10Y   FRA 10Y   High Yield
Weights:  20%       15%       15%         12%      4%        8%        4%        4%       4%        4%        10%

The intended total risk taken on by the portfolio is 7%, as this is the long-term average for the Nominally Weighted portfolio. To reach 7% for the whole portfolio, the relative capital weights must be multiplied by $7\%/\sigma(\mathbf{x})$; the sum of the weights will then not necessarily equal 1, and the portfolio will need leverage. Each rebalancing period the leverage the portfolio needs is calculated and then paid for. The calculation to do so looks as follows:

$$Leverage_t = \frac{\sigma_T}{\sqrt{\mathbf{x}_t^T\boldsymbol{\Sigma}_t\mathbf{x}_t \cdot 252}}, \qquad b_{i,t} = x_{i,t}\cdot Leverage_t, \qquad w_{i,t} = \frac{b_{i,t}}{p_{i,t}}$$

$$PF\,Value_t = PF\,Value_{t-1} + \sum_i \big(w_{i,t-1}\cdot p_{i,t}\big) - \sum_i\big(w_{i,t-1}\cdot p_{i,t-1}\big) - \Big(\sum_i b_{i,t} - PF\,Value_t\Big)\cdot\frac{EUR1M_t + 0.25}{100\cdot 252}$$


Where:

$b_{i,t}$ is the relative capital weight of asset i at time t. In an RBP the b's do not sum to 1. Since an RBP does not have the constraint of reinvesting the whole portfolio, there is no need for a $c_{i,t}$ to account for the capital value of each asset.

$x_{i,t}$ is the relative capital weights that solve the minimization problem. The x's sum to 1.

$w_{i,t}$ is how many units of asset i you have at time t.

Except for the value of the portfolio, which is calculated daily, the above calculations are carried out once a month when the portfolio is rebalanced. $b_{i,t}$ is the relative weights needed in order to maintain $\sigma_T$, which is the desired yearly volatility of the Nominally Weighted portfolio. Notice that the sum of $b_{i,t}$ does not have to be 1, as risk budgeting very often involves gearing in order to meet the desired volatility. Below is shown how the weights change over time for the different assets; notice that, added together, they will be well above 1:


When calculating the value of the portfolio, you always add/deduct the result from the period before, and then deduct the premium for the leverage you have taken on. The leverage taken on is the total value of the leveraged portfolio minus the value of the portfolio thus far.
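A sketch of the rebalancing step for the risk-budgeted portfolio following the formulas above, where sigma_target is the 7% yearly volatility target, x_t the weights from the risk-budgeting optimization and Sigma_t the covariance matrix on the rebalancing day (an illustrative helper, not the thesis code):

```r
rebalance_rbp <- function(x_t, Sigma_t, p_t, sigma_target = 0.07) {
  daily_vol <- sqrt(drop(t(x_t) %*% Sigma_t %*% x_t))
  leverage  <- sigma_target / (daily_vol * sqrt(252))   # scale daily volatility up to the annual target
  b_t <- x_t * leverage                                 # leveraged capital weights (need not sum to 1)
  w_t <- b_t / p_t                                      # units of each asset
  list(weights = b_t, units = w_t, leverage = leverage)
}
```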

Test Statistics

Akaike Information Criterion

To assess which model fits the volatility of the returns best, the Akaike Information Criterion (AIC) is used. The diagnostic is calculated as follows (Ruppert, Matteson, 2015):

$$AIC = \frac{-2LL}{N} + \frac{2m}{N}$$

where LL is the log-likelihood, N the number of observations and m the number of estimated parameters. The log-likelihood is the natural logarithm of the likelihood function, which estimates the probability of a specific model producing the exact observations obtained. The logarithm of this likelihood function is used to find the best parameters in the model. This is why the likelihood function is an integral part of the AIC, used to determine which model describes the data best.
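The normalized criterion above is easily computed directly; a minimal sketch:

```r
# Normalized Akaike Information Criterion as defined above
aic_norm <- function(loglik, n_obs, n_par) {
  -2 * loglik / n_obs + 2 * n_par / n_obs
}
```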

Ljung-Box Tests

Apart from the AIC, a weighted Ljung-Box test is performed on the standardized residuals when testing for autocorrelation at the different lags, and on the standardized squared residuals when testing for autoregressive conditional heteroskedasticity at the different lags. The null hypothesis of the Ljung-Box test states that there is no autocorrelation present in the residuals. If the p-value is below alpha the null hypothesis is rejected, which means at least one autocorrelation is assumed to be present. For the Ljung-Box test on the standardized squared residuals the null hypothesis states no presence of heteroskedasticity; a p-value below alpha rejects this hypothesis, meaning there are signs of heteroskedasticity in at least one of the lags. The test statistic returned is Q, and it is calculated as follows for the test on the standardized residuals (Ruppert, Matteson, 2015):

$$Q_k = n(n+2)\sum_{j=1}^{k}\frac{r_j^2}{n-j}$$

Where $r_j$ is the sample autocorrelation of the residuals at lag j and n is the number of observations. It is a chi-square test with k degrees of freedom. If the p-value is below 0.05 the null hypothesis can be rejected with 95% certainty.
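In R the test is available out of the box; a sketch on the standardized residuals of a fitted rugarch model (`fit_garch` refers to the illustrative fit sketched earlier):

```r
z <- residuals(fit_garch, standardize = TRUE)   # standardized residuals from the fitted model

Box.test(z,   lag = 10, type = "Ljung-Box")     # autocorrelation in the residuals
Box.test(z^2, lag = 10, type = "Ljung-Box")     # remaining ARCH effects in the squared residuals
```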

ARCH LM Test

The ARCH LM test is a test based on the Lagrange multiplier principle. The model is regressed to attain the residuals, and then $u_t^2$ is regressed on a constant and m lagged values. The formula looks as follows (Ruppert, Matteson, 2015):

$$u_t^2 = \omega + \alpha_1 u_{t-1}^2 + \alpha_2 u_{t-2}^2 + \dots + \alpha_m u_{t-m}^2 + \varepsilon_t$$

The test statistic is compared to a chi-square distribution with m degrees of freedom, and p-values of less than .05 will reject the null-hypothesis that there is no ARCH presence in the lags.
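A base-R sketch of the LM regression itself, following the formula above (u is assumed to be the residual series from the mean equation; the lag order m is illustrative):

```r
arch_lm_test <- function(u, m = 5) {
  u2 <- u^2
  n  <- length(u2)
  X  <- sapply(1:m, function(j) u2[(m - j + 1):(n - j)])  # m lagged values of u^2
  y  <- u2[(m + 1):n]
  r2 <- summary(lm(y ~ X))$r.squared
  stat <- (n - m) * r2                   # LM statistic
  pval <- 1 - pchisq(stat, df = m)       # compared to a chi-square with m degrees of freedom
  c(statistic = stat, p.value = pval)
}
```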

Kupiec Test

The Kupiec test measures the number of VaR violations to see if this number is consistent with the chosen confidence level. As an unconditional coverage test it follows a binomial distribution, where the null hypothesis states that the observed relative frequency of violations is equal to the chosen level. The test itself is a likelihood-ratio function that looks as follows (Ruppert, Matteson, 2015):

$$LR_{uc} = 2\log\!\left(\frac{\hat{p}^{\,x}(1-\hat{p})^{T-x}}{p^{x}(1-p)^{T-x}}\right), \qquad \hat{p} = \frac{x}{T}$$

where x is the number of violations, T the number of observations, p the chosen VaR level and $\hat{p}$ the observed violation frequency.
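A sketch of the unconditional coverage statistic computed directly from the formula above (illustrative numbers), together with the corresponding rugarch helper:

```r
# Kupiec likelihood-ratio statistic: x violations out of T observations at VaR level p
kupiec_lr <- function(x, T, p) {
  p_hat <- x / T
  2 * (log(p_hat^x * (1 - p_hat)^(T - x)) - log(p^x * (1 - p)^(T - x)))
}
1 - pchisq(kupiec_lr(x = 12, T = 1000, p = 0.01), df = 1)   # p-value with illustrative numbers

# Alternatively, rugarch provides the same test (plus a conditional coverage version):
# VaRTest(alpha = 0.01, actual = realized_returns, VaR = var_forecasts)
```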
