

5 Experimental Methods

5.1 What is an experiment?

The development of the behavioral approach in economics is very much intertwined with the development of the experimental method in economics. Experiments in economics (as well as in other sciences) are used to uncover the causal relation between variables. For example, policy makers might be interested in understanding whether reducing unemployment benefits (variable 1) lowers the reservation wage of benefit claimants (variable 2). As the answer to such a question can rarely be found by merely looking at reality, since counterfactuals cannot be observed, experiments represent a very useful tool to investigate such an issue. There are many different types of experiments, but they all share some common features:

(i) people are randomly allocated (either by an experimenter or by nature) to a so-called control and a treatment group

(ii) the situations that the control and treatment groups face are identical apart from the treatment (often called the factor of interest)

(iii) the outcome of interest should be observable

Randomizing people into either a control or a treatment group should ensure that there is no selection effect and that the two groups are identical in terms of individual characteristics or experiences that could potentially confound the treatment results. Selection effects might occur, for example, if people can choose themselves whether they want to experience the treatment or not. This could lead to a situation in which only certain ’types’ of people select into the treatment group, which jeopardizes the possibility of analyzing the impact of the treatment itself.

In order to identify the causal relation between a treatment variable and an outcome variable it is important that the two groups are identical and that they are confronted with exactly the same situation, with the only difference being the treatment. In this way, one can be sure that any effect of the treatment on the outcome of interest is causal rather than merely correlational.
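The logic of randomization can be illustrated with a small simulation. The sketch below is our own toy example (all variable names, group sizes and magnitudes are invented): it builds in an unobserved confounder and shows that, under random assignment, a simple difference in group means recovers the true treatment effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000  # hypothetical population of benefit claimants

# Unobserved confounder: e.g. intrinsic motivation, which we pretend
# affects reservation wages but is invisible to the researcher.
motivation = rng.normal(0.0, 1.0, n)

# Random assignment: treatment status is independent of motivation.
treated = rng.random(n) < 0.5

# Suppose the treatment (a benefit reduction) truly lowers the
# reservation wage by 2.0 units.
true_effect = -2.0
reservation_wage = (20.0 + 1.5 * motivation
                    + true_effect * treated
                    + rng.normal(0.0, 1.0, n))

# Because assignment is random, the difference in group means is an
# unbiased estimate of the causal effect.
estimate = reservation_wage[treated].mean() - reservation_wage[~treated].mean()
print(round(estimate, 2))  # close to -2.0
```

If instead highly motivated people could opt into the treatment, the same difference in means would mix the treatment effect with the motivation gap between the groups, which is exactly the selection problem described above.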

As said above, the role of experiments in the social sciences is to identify causal links between factors of interest (for example, unemployment benefits) and outcomes of interest (for example, unemployment duration). More broadly, experiments have a threefold purpose (Ross (2014)):

(i) speaking to theorists: experiments are used to test theories that try to explain and describe certain social phenomena.

(ii) searching for facts: many experimental studies are explorative. They try to analyze how people behave in certain (choice) situations so as to uncover phenomena that are usually hidden from our eyes when merely looking at happenstance data collected from economic activity in the real world.

(iii) whispering in the ears of princes: one clear goal of economic experimentation is the preparation of and assistance in policy-making decisions.

With these three purposes in mind, what kind of experiments exist? We usually differentiate between three different kinds of experiments:

(i) Natural experiments

(ii) Laboratory and Internet experiments

(iii) Field experiments and randomized control trials (RCTs)

Natural Experiments. These are observational studies that take advantage of naturally occurring events or situations – happenstance data. Happenstance data is the by-product of uncontrolled, naturally occurring (economic) activity (Falk and Fehr (2003)).

To exemplify, consider Angrist and Evans (1998). Their study explores the causal relation between family size and the labor market outcomes of mothers. Note that a simple correlation between family size and labor market outcomes of mothers does not tell us anything about the causal relation between family size and the mothers’ labor market participation, as both might be affected by an unobserved third factor such as the mothers’ preferences. The basis for the natural experiment used in their study is the observation that two-child families with either two boys or two girls are substantially more likely to have a third child than two-child families with one boy and one girl. As the gender of the first two children is ’random’, and hence there are no systematic differences between families with two same-sex children and families with one boy and one girl, the sex of the first two children forms a natural experiment. That is, families are randomly allocated into families whose first two children have the same sex and families whose first two children have different sexes. Or, even more generally, the situation is as if an experimenter has randomly assigned some families to have two children and others to have three or more. Given this, one is able to establish and quantify the causal effect of having a third child on the labor market outcomes of mothers. This study constitutes a natural experiment because the researchers could exploit naturally occurring randomness to investigate the causal relation between family size and the labor market outcomes of mothers.
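The same-sex instrument can be mimicked with synthetic data. The following sketch uses entirely made-up numbers and only illustrates the idea behind Angrist and Evans (1998): it computes the naive comparison and the instrumental-variables (Wald) estimate, i.e. the reduced-form effect of the instrument on the outcome divided by its first-stage effect on family size.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500_000

# Unobserved confounder: career preference raises labor supply and
# lowers fertility, so the naive comparison is biased.
career_pref = rng.normal(0.0, 1.0, n)

# Instrument: the first two children share the same sex (a coin flip,
# independent of preferences).
same_sex = rng.random(n) < 0.5

# First stage: same-sex families are more likely to have a third child.
p_third = np.clip(0.30 + 0.06 * same_sex - 0.05 * (career_pref > 0), 0, 1)
third_child = rng.random(n) < p_third

# Outcome: mother works; a third child truly lowers the probability
# by 12 percentage points.
p_work = np.clip(0.60 + 0.10 * career_pref - 0.12 * third_child, 0, 1)
works = rng.random(n) < p_work

# Naive comparison: contaminated by career preferences.
naive = works[third_child].mean() - works[~third_child].mean()

# Wald / IV estimate: reduced form divided by first stage.
reduced_form = works[same_sex].mean() - works[~same_sex].mean()
first_stage = third_child[same_sex].mean() - third_child[~same_sex].mean()
iv_estimate = reduced_form / first_stage  # should be near the true -0.12
```

Dividing by the first stage scales the small reduced-form difference between same-sex and mixed-sex families up to the effect of actually having a third child.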

Laboratory and Internet Experiments. Different to natural experiments, laboratory experiments are designed by experimenters who try to investigate the link between two variables of interest. Laboratories are usually computer facilities, and a typical laboratory experiment looks as follows. Participants get instructions explaining to them the situation, the choices they can make and their consequences. Two important rules in laboratory studies are (i) no deception and (ii) incentives. It is made clear to participants that the experimenters do not have a hidden agenda and that everything in the instructions is true. This rule is important to prevent participants from trying to outguess the goal of the experiment, which could affect their behavior. In fact, it is extremely important that participants fully understand the situation they are confronted with as well as the consequences of their decisions. It is people’s preferred choices that should be the basis for the analysis of any treatment effect, rather than their degree of confusion. For this reason participants are usually asked questions concerning the content of the instructions. Only upon answering these questions correctly does the actual experiment start. That is, participants are confronted with a (strategic) decision and real incentives.

The participants’ decisions in experiments are connected to real (often monetary) incentives in order to ensure that they do not take random decisions, or decisions that they think the experimenters want to see, but decisions that reflect their true preferences.

Take as an example Boone et al. (2009), who experimentally investigate the impact of unemployment benefit sanctions on job search behavior. In their experiment they compare the behavior of people who participate in different so-called sanction treatments with the behavior of people in a no-sanction treatment. Specifically, upon arrival at the economic laboratory, each participant was seated in a cubicle and received written instructions.

Subsequently each participant played a single 100-period job search game. In the job search game participants were offered jobs with varying wage levels in every period and had to decide whether or not to accept a job in a given period. In case they did not have a job in a particular period and refused to accept a job offer at a particular wage, they faced a potential sanction. The experiment consisted of three random benefit sanction treatments (differing in the size of the sanction) and a sure benefit control treatment.

Using this set-up, Boone et al. (2009) experimentally analyzed key predictions of job search models regarding the causal relation between benefit sanctions and job search behavior.
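The qualitative prediction such models make (that the threat of a sanction lowers the reservation wage) can be reproduced by backward induction in a stylized finite-horizon search model. The sketch below is our own illustration with invented parameters, not Boone et al.'s (2009) actual design.

```python
import numpy as np

def reservation_wages(T=100, b=0.3, sanction_prob=0.0, sanctioned_b=0.0):
    """Finite-horizon job search solved by backward induction.

    Each period an unemployed agent draws a wage offer w ~ U[0, 1];
    accepting pays w in every remaining period, rejecting pays the
    current benefit and, with probability sanction_prob, permanently
    cuts the benefit to sanctioned_b. Returns the reservation wage in
    each period for an agent whose benefit has not (yet) been cut.
    """
    wage_grid = np.linspace(0.0, 1.0, 101)  # discretized offer distribution
    U_full, U_sanc = 0.0, 0.0               # continuation values after the horizon
    res = np.zeros(T)
    for t in range(T, 0, -1):
        remaining = T - t + 1               # periods of wage income if accepting now
        accept = wage_grid * remaining
        reject_full = b + sanction_prob * U_sanc + (1 - sanction_prob) * U_full
        reject_sanc = sanctioned_b + U_sanc
        res[t - 1] = reject_full / remaining  # accept iff w * remaining >= reject_full
        U_full = np.mean(np.maximum(accept, reject_full))
        U_sanc = np.mean(np.maximum(accept, reject_sanc))
    return res

no_sanction = reservation_wages(sanction_prob=0.0)
with_sanction = reservation_wages(sanction_prob=0.5)
# The sanction threat weakly lowers the reservation wage in every period,
# and reservation wages fall as the horizon shortens.
```

Averaging over 101 equally spaced offers approximates the uniform offer distribution; comparing the two calls isolates the pure incentive effect of the sanction threat on acceptance behavior.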

Internet experiments are very similar to laboratory experiments with the difference that participants do not come to the laboratory but take part in the experiment online.

The advantages of online experiments are of course larger sample sizes, greater subject diversity and – in the case of Denmark – the possibility to link the behavioral data from Internet experiments with background characteristics of participants. Online experiments can be administered locally by researchers who program the experiment themselves (see, for example, the iLee program at the Center of Experimental Economics / Economics Department at Copenhagen University) or can be conducted via online labor markets such as Upwork, Guru and Amazon Mechanical Turk.

An example of a self-administered online experiment is Bellemare, Sebald and Walzl (2015) (in preparation), in which we test to what extent people’s reaction to subjective performance evaluations is influenced by their own perception of their performance.

To be more specific, we invited a representative sample of the Danish labor market (in total around 20,000 people) to participate in an online experiment in which some had to work on a task (the workers) and others had to evaluate their work (the employers).

Evaluations were transmitted to the workers and workers were given the opportunity to react to the subjective performance evaluations they had received from their employer.

One of the major findings of this large-scale experiment is that, in particular, workers who are overconfident about their performance react negatively to performance feedback that falls below their self-evaluation – they try to protect their self-image.

Clearly, although Internet experiments enable researchers to engage with larger and more heterogeneous subject pools, they have their drawbacks as well. One major drawback is the loss of control. Control over the experimental environment (that is, every single aspect of the experiment) is a very important concept in experimental studies, as control ensures that it is really only the treatment that differs between treatment and control group and not other aspects of the decision environment. Laboratory experiments are thus considered to ensure higher levels of control than Internet experiments, allowing researchers to be more confident about causal inferences. These two approaches – laboratory and Internet experiments – should thus be seen as complementary approaches through which knowledge concerning the true nature of causal relations between variables can be identified.

Field Experiments and Randomized Control Trials. Field experiments are experiments that do not happen in the laboratory but are organized out in the field. Importantly, participants in field experiments are usually unaware that they are part of an experiment.

Beside this, field experiments have very much the same characteristics as laboratory experiments: people are randomized into treatment and control groups and real incentives are given. To exemplify, consider Gneezy and List (2006), who invited people to two different field experiments involving a real effort task: computerizing the holdings of a library at a large US university. Specifically, they recruited people via posters that promised participants one-time work that would last six hours and pay $12 per hour. The field experiment consisted of two treatments: one in which the preannounced $12 was paid and another in which people were informed upon arriving at the library that the hourly wage had been raised from $12 to $20. Hence, participants in this second treatment got an unexpected hourly wage increase of $8 compared to the $12 they had been promised before. Gneezy and List (2006) found that subjects initially repaid the kind surprise (the wage increase from $12 to $20) by providing higher effort than in the treatment in which people were not surprised (the control group). This is what we call a gift exchange. However, after working for 90 minutes on the job, effort levels were indistinguishable across the two treatments. Their field experiment was important because it showed that some experimental findings that are significant and robust over a shorter duration (usual lab experiments last between 1 and 1.5 hours) might not hold in more realistic field environments with longer durations.

Similar to field experiments, randomized controlled trials (RCTs) have become a very prominent methodological tool to investigate the effects of policy interventions.

Different to field experiments, people participating in RCTs are usually aware of the fact that they are part of an experimental study. RCTs have been used, for example, to study the effects of vouchers for private schooling on school completion rates in Colombia (e.g. Angrist et al. (2002), Angrist et al. (2006)) and the effects of income subsidies on work incentives in Canada (e.g. Michalopoulos et al. (2005), Card and Robins (2005)).

Typically, RCTs are used in ex-ante small-scale evaluations of potential policies that could then be rolled out on a larger scale. In other words, they are used to evaluate ex ante the effect of a general introduction of a policy on some social or economic outcome. Also in RCTs, researchers assign individuals (or schools or villages) to treatment and control groups. As in lab or Internet experiments, individuals in the treatment group receive the policy treatment. Thereafter the behavior of the participants in the treatment group is compared to that of the individuals in the control group. The observed difference between the outcomes in the treatment and the control group is usually used as a predictor of the effect of a general introduction of the program.

A good example of an RCT is the Baltimore Options Program (Friedlander (1985)), which was designed to increase the human capital and, hence, the employment possibilities of unemployed young welfare recipients in Baltimore County. Half of the potential recipients were randomly assigned to the treatment group and half to the control group.

The treatment group individuals in this RCT received tutoring and job search training for one year whereas the members of the control group received the normal treatment.

The results from this study suggest that the earnings of the treatment group increased by 16 percent in response to the additional training they received, indicating a potentially very effective social policy intervention.

6 Cases

In the following we describe and discuss different examples of studies that have been undertaken in the area of behavioral economics and the labor market. The descriptions concentrate on the essentials of the studies; for more details on the methodological approaches taken we refer to the original studies themselves. Over and above the cases discussed here, there are of course also many experimental studies that test the predictions of standard search models in economics; for an overview of those see e.g. Plott and Smith (2008). The studies that we discuss below all have a ‘behavioral’ perspective and might thus be seen as an inspiration for people interested in ‘behavioral economics and the labor market’.

The first set of studies that we summarize are experimental investigations concentrating on low-cost interventions that should help benefit claimants overcome certain informational and motivational challenges they face when searching for a new job. First, Altmann et al. (2015) conduct a large-scale field experiment with newly unemployed workers in Germany. The starting point of their study is the observation that the job search process is hampered by informational and motivational challenges. People might have inadequate information regarding the value of their skills to firms or which kinds of jobs to look for.

Furthermore, people might be frustrated and discouraged by the recent events in their life and the personal setbacks they have experienced. To overcome these two challenges

Altmann et al. (2015) conducted a field experiment in which a large randomly chosen group of unemployed individuals received an information brochure informing them about (i) facts about the economy and the current state of the labor market, (ii) the importance and effectiveness of job search activities, (iii) the non-pecuniary benefits of finding a new job (e.g. in terms of life satisfaction), and (iv) the importance of different search channels and strategies. Their empirical analysis is based on 53,753 observations, of which 13,471 randomly chosen individuals received the brochure (treatment group) and 40,282 did not (control group). The results of the study can be summarized as follows. On aggregate, individuals in the treatment group are on average employed approximately 1.3 days more than members of the control group that did not receive the brochure. The associated increase in total earnings was around EUR 150 for the treated. Although these treatment effects are positive, they are statistically insignificant, implying that there is no significant difference between treatment and control group in the measured outcome variables.
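Why a positive point estimate can still be statistically insignificant becomes clear once sampling noise is taken into account. The sketch below is a toy Welch-type test with invented group sizes, means and a large standard deviation, only loosely inspired by the scale of such studies; it computes the t-statistic and a normal-approximation p-value for the difference in mean days employed.

```python
import math
import numpy as np

rng = np.random.default_rng(2)

# Invented data: days employed in the year after the intervention.
control = rng.normal(200.0, 120.0, 4_000)    # no brochure
treatment = rng.normal(201.3, 120.0, 1_300)  # brochure, true effect +1.3 days

diff = treatment.mean() - control.mean()

# Welch standard error of the difference in means.
se = math.sqrt(treatment.var(ddof=1) / len(treatment)
               + control.var(ddof=1) / len(control))
t_stat = diff / se

# Two-sided p-value via the normal approximation (fine for large samples).
p_value = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(t_stat) / math.sqrt(2.0))))
# With a true effect of only 1.3 days against a standard deviation of 120,
# the test will typically fail to reject the null of no effect.
```

The standard error here is several days wide, so an effect of 1.3 days simply cannot be distinguished from zero at these sample sizes, even though the point estimate is positive.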

The picture is different for a subpopulation though: people at ‘high risk of long-term unemployment’. Using information on people’s background characteristics, Altmann et al. (2015) identify people at high risk of long-term unemployment and find a strong and significant treatment effect for this group. Specifically, the treatment increased the total number of days worked and the cumulative earnings in the year following the intervention by 4.7 days and EUR 450, respectively. Hence, the low-cost intervention (the brochure costs less than EUR 1 per person) had a great impact on the people at risk.

‘The employment effects are concentrated in jobs with monthly earnings of more than EUR 1000. This indicates that the increase in employment does not come at the cost of lower wages and suggests that the brochure improves the employment prospects of individuals at risk of long-term unemployment without having detrimental consequences for the quality of resulting matches’ (Altmann et al., 2015, p. 4).

Second, Belot et al. (2015) conduct a laboratory experiment with job seekers to investigate the effect of providing individual job seekers with occupational advice on outcomes such as the number of interviews and job offers received. The starting point for their analysis is the fact that unemployment insurance systems usually require people to also search beyond their narrowly defined occupational background. The question is: how do job seekers obtain information regarding occupations different from their own that could nevertheless fit their skills? This is the central issue in Belot et al.'s (2015) experimental study. The study uses a very interesting lab-in-the-field approach. Specifically, the authors recruited unemployed people in Edinburgh from local Job Centres and invited them into their lab, which they had transformed into an online job search facility. Participants were asked to search for jobs once a week for a duration of 12 months using the search platform provided to them by the authors. About 300 people participated. All of them searched for jobs using a standard interface for the first three weeks. ‘In each of these weeks participants on average list nearly 500 vacancies on their screen, they apply to 3 of them, obtain 0.1 interviews through search in our facility and 0.5 interviews through other channels, and the ratio of job offers to job interviews is only 1/25’ (Belot et al., 2015, p. 2). After this first phase, 50% of the participants were randomly assigned to a treatment group. This treatment group was given information on a broader set of occupations, while the control group simply continued to operate under the same conditions as in the first phase. In a nutshell, the results are the following: on average, the number of job interviews that people in the treatment group received increased by 30%. The authors argue that this is mostly driven by job seekers who initially search narrowly and broaden their search radius upon receiving the treatment.

The two studies mentioned above are examples of experiments that test low-cost interventions to increase people’s employment prospects. In a similar direction, the UK Behavioural Insights Team (BIT) tested in a field experiment the effectiveness of different SMS invitations at encouraging unemployment benefit claimants to attend job fairs. The aim of the job fairs is to bring benefit claimants into direct contact with firms offering vacant positions. They designed different text messages (from plain information to kind and personalized invitation messages) and found that text messages that created a reciprocal link worked best at encouraging people to attend the job fairs (see BIT (2015), pp. 9–11).

The focus of the aforementioned studies were the unemployed, that is the benefit claimants. Another project by the UK Behavioural Insight Team concentrated instead on the job advisors and the processes newly unemployed go through when visiting job

