
3. Theoretical framework

3.2 Behavioral Finance Theory

3.2.2 Psychology

According to behavioral finance, deviations from rationality tend to arise when people form beliefs and preferences and make decisions. Although the traditional models of decision making under risk have largely been built around the assumption of Bayesian behavior, experimental research has shown that the underlying conditions are systematically violated. The expected utility framework therefore cannot accurately explain how people evaluate risky choices in a complex world where information is never complete or evenly distributed. To this end, a plethora of so-called non-EU theories have been developed with the aim of corresponding with the empirical experimental evidence. The one that has achieved the most widespread acknowledgement, due to its scope of application, is prospect theory.

Prospect Theory

In contrast to the assumptions of traditional financial theory, behavioral finance views the cognitive capacity and computational abilities of humans as limited. Beliefs are not considered to be updated in a perfectly rational manner in which all relevant information is utilized and irrelevant information is disregarded. Rather, decision making is often considered suboptimal and out of line with the preconceived notion of rationality, due to the challenges of processing the information at hand. Experimental decision-making studies have shown that people are particularly vulnerable to the way choices are presented and described to them, i.e. the way options are framed. Even small changes in the framing of choices can cause large differences in human behavior (Shiller, 2015). Prospect theory, originally presented by Kahneman and Tversky in 1979, offers a descriptive explanation of how people deal with probabilities and decision making under uncertainty. As previously mentioned, this theory departs from, and criticizes, expected utility theory, due to empirical evidence that risky choices lead to behavior that is inconsistent with the basic axioms of utility theory (Kahneman and Tversky, 1979). In particular, the theory recognizes that in risky choices people define value in relation to gains and losses rather than total wealth, replace probabilities with subjective decision weights, and exhibit loss aversion rather than risk aversion (Wärneryd, 2001).

Figure 3.1: Value function (Kahneman and Tversky, 1979)

Figure 3.1 displays the value function proposed by Kahneman and Tversky (1979), which aims to describe how people value various risky prospects and outcomes. At the intersection of the x-axis and the y-axis we find the origin, which represents the reference point. This point is essential in the model, because it is from here that people evaluate different prospects. Deviations from the reference point are perceived as either gains or losses relative to people's status quo, and it is therefore the prospective change relative to the reference point that determines whether an outcome is perceived as a gain or a loss.

We can observe from the figure that the value function is concave in the region of gains and convex in the region of losses, displaying the tendency of people to be risk averse when it comes to gains and risk-seeking when they are faced with the probability of losing.

The value function is moreover steeper for losses than for gains, which illustrates that people are highly loss averse. In fact, experimental studies have shown that losses give a much stronger feeling of pain than a gain gives a feeling of joy, which has brought about the much-cited expression that "losses loom larger than gains". Consequently, in situations where people face an outcome with a negative expected value, they tend to become more risk-seeking in order to try to escape the loss and the correspondingly painful feeling.
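To make the shape of the value function concrete, the following sketch uses the parametric form with the median parameter estimates (alpha = beta = 0.88, lambda = 2.25) later reported by Tversky and Kahneman (1992). These specific values are borrowed here purely as an assumption for illustration; they are not part of the original 1979 formulation.

```python
def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Prospect-theory value function: concave over gains,
    convex over losses, and steeper for losses (loss aversion)."""
    if x >= 0:
        return x ** alpha            # gains: concave
    return -lam * ((-x) ** beta)     # losses: convex, scaled up by lambda

gain = value(100)     # roughly 57.5
loss = value(-100)    # roughly -129.5
```

With these illustrative parameters, a loss of 100 is felt more than twice as strongly as an equal-sized gain, which is exactly the asymmetry that "losses loom larger than gains" describes.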

Conversely, the utility function within expected utility theory assumes risk symmetry, namely that risk aversion is a uniform characteristic of the utility function over wealth (Wärneryd, 2001).

Heuristics and biases

An essential part of cognitive psychology deals with how people process information, and tries to explain where this process goes wrong in ways that result in suboptimal and irrational behavior. Because people have cognitive limits and are unable to process all available information at the same time, they employ what are referred to as heuristics to simplify the process of estimating probabilities and to facilitate their decision-making.

Heuristics are simple rules of thumb shaped through a person's various experiences and encounters, and they allow for faster decisions requiring fewer resources. Although heuristics are often helpful tools for making the world less complicated, they are related to various cognitive biases. That is, taking shortcuts in decision-making often leads to deviations from normative rational theory (Gilovich et al., 2002).


The representativeness heuristic has been demonstrated when people deal with probabilities, and refers to the tendency to make decisions based on stereotypes or the familiarity of patterns (Wärneryd, 2001). Contrary to the notions of Bayesian updating, people often neglect prior probabilities (base rates) and rather assume that previous patterns will repeat themselves. That is, people witness situations or events, find that they are representative of some characteristic, and go on to assign probabilities based on this characteristic. For example, experimental studies have shown that people believe the probability that a basketball player will make another shot is higher if the player has previously scored several times. Similarly, when asked which profession a person is likely to have based on a set of characteristics, respondents tend to answer with the profession associated with a stereotype closely linked to the description of the person. They do so without taking into consideration the actual probability assigned to each option (Barberis and Thaler, 2003).
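The force of the neglected base rate can be made concrete with Bayes' rule. The numbers below are purely hypothetical and chosen only for illustration: suppose 1% of people in a sample have the stereotyped profession, and the given description fits 80% of that group but also 20% of everyone else.

```python
def posterior(prior, p_desc_given_h, p_desc_given_not_h):
    """Bayes' rule: P(hypothesis | description)."""
    evidence = prior * p_desc_given_h + (1 - prior) * p_desc_given_not_h
    return prior * p_desc_given_h / evidence

# Hypothetical numbers: base rate 1%; description fits 80% of the
# stereotyped group, but also 20% of the rest of the population.
p = posterior(0.01, 0.80, 0.20)   # just under 4%
```

Even though the description is four times as likely under the stereotype, the low base rate keeps the posterior probability below 4%; judging by representativeness alone ignores exactly this prior.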

The same tendencies have been observed in the stock market, where the representativeness heuristic leads to the expectation that recent experiences and trends will repeat themselves in the future. Moreover, investors often overreact to new information because the information fits into a certain pattern that has been observed in the past. Therefore, equities that have been associated with good news for a long period of time often become overpriced, and yield low average returns afterwards (Wärneryd, 2001).


On the opposite side of the representativeness heuristic there is what is referred to as conservatism (Barberis and Thaler, 2003). Instead of giving too much weight to new information, conservatism may lead people to react insufficiently due to inertia in changing one's opinion. The result is an underreaction, which in the stock market can be observed as prices failing to update in line with the fundamental values of equities.

Whereas overreactions reflected in stock prices are observed over longer time horizons of 3-5 years, underreactions are seen over horizons of 1-12 months. News is therefore incorporated into prices only slowly, and prices tend to show positive autocorrelation over this time period (Wärneryd, 2001).
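These horizon patterns can be illustrated numerically. The sketch below is a toy example, not an empirical result: a stylized price path that first drifts away from fundamental value and then corrects, so that its returns show positive autocorrelation at a short lag (trend continuation) and negative autocorrelation at a longer lag (eventual reversal).

```python
import numpy as np

def autocorr(returns, lag):
    # Sample autocorrelation of a return series at the given lag
    r = np.asarray(returns, dtype=float) - np.mean(returns)
    return float(np.dot(r[:-lag], r[lag:]) / np.dot(r, r))

# Stylized price path: drift upward (underreaction phase), then revert
prices = [0.0, 1.0, 2.0, 3.0, 2.5, 2.0, 1.5, 1.0]
returns = np.diff(prices)

short_lag = autocorr(returns, 1)   # positive: momentum
long_lag = autocorr(returns, 3)    # negative: reversal
```

The same mechanics are what empirical studies measure on monthly versus multi-year horizons, only on actual return series rather than a constructed path.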

Overconfidence and self-attribution bias

In an effort to integrate both over- and underreaction in the context of securities markets, Daniel et al. (1998) bring up two central biases, namely overconfidence and self-attribution. Overconfidence refers to the tendency to believe too much in one's own abilities. The bias often involves thinking that one's abilities and estimates are better and more precise than those of others, and it moreover leads people to refrain from searching for evidence that could reject their beliefs. The related self-attribution bias refers to people's tendency to attribute any success they achieve to their own talents and abilities, while blaming all failures on external factors or bad luck (Barberis and Thaler, 2003). The theory of Daniel et al. implies that these biases cause investors to overreact to private information they obtain themselves, that is, information gathered for example from their own environment and not publicly known and available to all. At the same time, investors underreact to public information signals. The initial overreaction to private information, caused by overconfidence, may continue if the investment outcomes are favorable and the investor attributes the success to his own abilities. Consequently, the self-attribution bias causes a positive short-lag autocorrelation in the price of the security.

However, because beliefs are corrected in the long term when the investor is made aware of the mispricing, the overconfidence ultimately leads to a negative long-lag autocorrelation (Daniel et al., 1998).