school, individuals were living through changes, barriers, and uncertainties. Because I gained access through a gatekeeper, youth were “out” about their ULS in some regard. I did, however, learn that many youth had not yet participated in any organization or social movement for immigration reform, and that most had not shared their ULS with anyone beyond their everyday peers.

However, some interviewees were currently in university and had been out about their ULS in their high schools, which illuminated an interesting dynamic about the contexts and complexities of coming out that I examine in further detail in my findings. Due to my research methods and sample size, I cannot draw firm conclusions about particular institutional or geographic contexts, but I acknowledge that these contexts may matter and represent areas for future investigation and comparison.

4.3.4 Snowball Sampling

As fieldwork progressed and I made connections, I used “snowball sampling” to recruit youth. With snowball sampling, researchers tap into the social networks of interviewees to gain access, trust, rapport, and legitimacy, all of which are particularly useful when dealing with sensitive topics or hard-to-reach populations; trust can be gained through trusted peer referrals (e.g. Atkinson & Flint, 2001; Bergeron & Senn, 1998; Brackertz, 2007; Browne, 2005; Cebulko, 2014; Gonzales, 2011; Magnani, Sabin, Saidel, & Heckathorn, 2005). Browne (2005) wrote: “networks in these instances included word of mouth assurances which are significant when the research is of a sensitive nature” (p. 50). However, Brunovskis and Bjerkan (2008) acknowledged both ambivalence and difficulty in accessing additional interviewees through key interviewees in their study on undocumented immigrants in Norway. In my case, by contrast, snowball sampling was critical for gaining access to less open and otherwise out-of-reach individuals who might never have participated in a research project on their own.

Many undocumented immigrants avoid contact with researchers and authorities due to fear of detection and deportation, and Gonzales (2011a) acknowledged that this daily fear poses various and significant challenges to random sampling. My sample is neither random nor representative. As my recruitment depended upon access, I have not reached the most vulnerable individuals. Further, thirty-three 1.5GUY do not represent 11.7 million undocumented immigrants.

Magnani et al. (2005) wrote that “individuals who have the wherewithal to obtain services, particularly in societies in which their behaviors are stigmatized, will be different from group members who do not seek and obtain these services” (p. 69), and Kvale (1996) emphasized that findings from self-selected samples cannot be generalized to the greater population. Because access required youth to be open about their ULS or attached to an organization, and furthermore willing to participate, they likely constitute a more open sub-group. That youth met with me means that they do not purposely avoid all exposure to and interaction with authorities or researchers. Thus, these 1.5GUY do not constitute the most vulnerable group within this hard-to-reach population. Further, going through gatekeepers meant that while I stressed the basic criteria (current age, age at arrival, and ULS), part of the access process was negotiated by teachers or counselors who suggested interviewees. Again, my aim was to explore, rather than to explain or to create generalizable findings. Forman and Damschroder (2008) have written that “the goal of all qualitative inquiry is to understand a phenomenon, rather than to make generalizations from the study sample to the population based on statistical inference” (p. 41). I thus maintain that my recruitment methods are in accordance with the research purpose of exploring how 1.5GUY experience and cope with SofB in everyday life.

Due to my recruitment methods, I was concerned that interviewees would constitute only fearless activists and that there would be little diversity amongst interviewees in terms of daily life navigation, openness about ULS, and reflexivity about how ULS shapes life; these worries soon dissipated. Firstly, several youth told me that fewer than five individuals knew their ULS. Secondly, though some clubs are organized specifically around undocumented rights, not all members divulge their ULS. Thirdly, some “members” did not participate; when asked about their involvement in the club, some youth indicated they were only on the email list, and a few even added “no one there is undocumented.” While this last statement is not accurate, it speaks to the nature of secrecy, trust, and non-disclosure even amongst pro-immigrant individuals in pro-immigrant organizations. Fourthly, snowball sampling allowed me to target newcomers who were not as open or experienced in sharing their immigration stories, and fifthly, my network allowed me to reach youth with no attachment to organizations or reform activities; both approaches increased interviewee diversity.

4.4.2 Quantitative Quality: Sample Size & Saturation

In terms of interviewee sample size, “individuals designing research—lay and experts alike—need to know how many interviews they should budget for and write into their protocol, before they enter the field” (Guest, Bunce, & Johnson, 2006, p. 60). However, what this means in qualitative research praxis is less defined, if established at all (Guest et al., 2006). Bernard (2000) suggested that most ethnographic research has thirty-six interviewees; Bertaux (1981) stipulated a minimum of fifteen; Morse (1995) specified that phenomenological research should have at least six interviewees, but grounded theory studies and ethnographies thirty-five; Creswell (2007) suggested a range of five to twenty-five for phenomenology; and Stebbins (2001) asserted that exploratory research requires a minimum of thirty. From the outset, and in accordance with exploratory research, I estimated thirty to forty interviews “to allow for the emergence of important categories and subcategories that will inevitably occur during the study” (Stebbins, 2001, p. 14).

Due to my two phases of fieldwork, I entered the field, gathered data, exited, began initial analysis, explored for categories, and repeated the process. After the first round, I was better positioned to assess saturation, which Glaser and Strauss (1967) defined as the point when “no additional data are being found whereby the sociologist can develop properties of the category. As he sees similar instances over and over again, the researcher becomes empirically confident that a category is saturated” (p. 61). The authors continued that though “the researcher’s judgment about saturation is never precise,” decisions about sample size become possible when “the researcher’s judgment becomes confidently clear only toward the close of his joint collection and analysis, when considerable saturation of categories in many groups to the limits of his data has occurred” (p. 64). Unlike statistical sampling, theoretical sampling is conducted with the purpose of discovering categories (Glaser & Strauss, 1967); as I became able to systematically categorize data into themes, experiences, or theories, I grew confident that I had reached saturation. However, Guest et al. (2006) have argued:

Without a doubt, anyone can find, literally, an infinite number of ways to parse up and interpret even the smallest of qualitative data sets. At the other extreme, an analyst could gloss over a large data set and find nothing of interest. In this respect, saturation is reliant on researcher qualities and has no boundaries (p. 77).

Thus, different researchers presented with precisely the same topic or even the same qualitative material may employ different epistemological, empirical, or theoretical approaches, and may therefore arrive at different interpretations.

4.4.3 Reliability & Validity

Validity and reliability are important to the research process. For data to be considered reliable, future researchers studying the same phenomenon should acquire the same or similar data using the same methods (e.g. Hammersley, 1987; Kvale, 1996; Kvale & Brinkmann, 2009). Validity concerns whether an accurate impression of a process, phenomenon, or group has been obtained (Stebbins, 2013), which further requires a researcher to question results to be sure that they measure what they purport to study (Hammersley, 1987; Kvale, 1996; Kvale & Brinkmann, 2009). While this seems straightforward, it is difficult to assess in praxis. Kvale (1996) explained that “although a single interview can hardly be replicated, different interviews may, when following similar procedures in a common interview guide, come up with closely similar interviews from their subjects” (p. 65). However, Bush (2002) suggested that the flexibility of semi-structured interviews, which treat each individual as unique, makes it more difficult to ensure reliability as compared to surveys or quantitative methods. Further, Madill, Jordan, and Shirley (2000) wrote that “qualitative approaches can be criticized for the space they afford the subjectivity of the researcher” (p. 1).

Thus, due to researcher subjectivity, interest, and foci, findings will likely vary. Kvale (1996) linked objectivity with validity: “objectivity as freedom from bias refers to reliable knowledge, checked and controlled, undistorted by personal bias and prejudice” (p. 64). However, Larkin, Watts, and Clifton (2006) wrote that “the analytic process cannot ever achieve a genuinely first-person account—the account is always constructed by the participant and researcher” (p. 104). Finally, Dahlberg and Dahlberg (2010) explained that while the phenomenological approach is a rich way to study lived experiences, we all experience the same world very differently: “all of a sudden, for example, it becomes obvious that two persons listening to the same words one says, understand the said completely differently” (p. 35). I aim to produce a detailed, informed, and exploratory rather than explanatory account that is as close to the interviewees’ perspectives as possible. I acknowledge that, as the author of this dissertation, I am responsible for selecting the narratives that best demonstrate particular points.

Wherever possible, I include direct quotations alongside interpretations of experiences to allow youth’s own words to remain in focus. In short, qualitative researchers have many responsibilities to balance: “a responsibility to hear what informants are saying about their lives and the meaning of their experiences,” “a responsibility to construct interpretations that may or may not conform to what informants have told us,” and “an obligation to surround their words with analyses for which we are the authors” (Tappan, 1997, p. 651).

Kvale and Brinkmann (2009) have written that “if subjects frequently change their statements about their attitudes…this is not necessarily due to an unreliable or invalid interview technique, but may in contrast testify to the sensitivity of the interview technique in capturing multiple nuances and the fluidity of social sciences” (p. 252). During my interviews, individuals sometimes appeared to contradict themselves, and I always asked follow-up questions. One area where this frequently occurred was when I inquired about how they define citizenship, how they define American, and whether they consider themselves American and/or citizens. Other times, when I asked individuals if they were fearful, many replied “no,” only to later recount a story in which fear was either implicit or explicit. Because I took notes during interviews, I was able to ask about these nuances and remind youth of what they had replied earlier; sometimes they were surprised to hear what they had said, but they took the opportunity to reflect. Some maintained that both answers were correct and provided a more nuanced account, all of which validates Kvale and Brinkmann’s statement. I consider youth’s statements to be their “truths.” As Kvale (1996) has argued, “reality” is their perception. Each individual experience in the various narratives illustrates the ways, temporalities, and contexts in which SofB is created, challenged, and coped with, and is therefore valid.