
2.4 Social engineering techniques

2.4.2 The psychological tricks of a social engineer

In this section, we describe specific “tricks” employed by a social engineer in common attack scenarios. The topic lies somewhat outside the scope of this thesis (and the study line), so the description is kept at a general level, just enough to give a basic idea of the techniques. Concrete examples of their use in cyber attack scenarios are presented in Section 2.5.

57 See https://en.wikipedia.org/wiki/Pretext for a definition.

The definition of social engineering is given in [37] as a technique for “[. . . ] fooling people into breaking normal security procedures.”, carried out by anyone. In [27] it is stated that the term “[. . . ] refers to the scams used by criminals to trick, deceive and manipulate their victims into giving out confidential information and funds.”.

In “The Social Engineering Cycle” above, we saw how the social engineer can acquire information using tricks to develop and exploit the trust of a victim. The social engineer needs to be aware of how he can influence the victim into willingly performing whatever act he wants him to.

The human mind constantly perceives and processes a lot of information unconsciously. Understanding these mechanisms enables a social engineer to exploit his target without the target knowing or noticing it.

Much research has gone into this. Of the more dubious kind is the research on neurolinguistic programming58 (NLP), which claims that it is possible to “re-program” the brain to achieve some goal (both personally, to imitate successful people or treat problems, and to manipulate others; the latter is called dark NLP). The Wikipedia page has numerous references to studies and sources discrediting this.

Daniel Kahneman is a renowned psychologist and professor, knowledgeable especially in decision making and behavioral economics, for which he received the Nobel Prize in 2002. In [28] he has collected his work (performed together with the late Amos Tversky); of interest here is the work on decision making and cognitive biases, which a social engineer can exploit to trick his target’s perception of some event.

Kahneman describes the brain’s thinking in two systems59: System 1 acts “Fast, automatic, frequent, emotional, stereotypic, subconscious.”60 “[. . . ] with little or no effort and no sense of voluntary control.” [28], while System 2 is “Slow, effortful, infrequent, logical, calculating, conscious.”61 and “allocates attention to the effortful mental activities that demand it [. . . ]. The operations of System 2 are often associated with the subjective experience of agency, choice, and concentration.” [28]. Kahneman notes how “[. . . ] System 2 believes itself to be where the action is [. . . ]”, but actually System 1 is the origin of the “[. . . ] impressions and feelings that are the main sources of the explicit beliefs and deliberate choices of System 2.”.

The engagement of System 2 requires attention; when attention is drawn away, System 2 is disrupted.

A famous example hereof is the experiment “The invisible gorilla” by Simons and Chabris62, in which two teams pass balls between each other. The viewer is tasked with counting the number of passes between the players of team 1 and ignoring team 2; a task requiring attention to be put specifically on team 1. At some point during the video, a gorilla enters the stage for 9 seconds. Only about 50 % of viewers notice the gorilla.

[28] explains how this is an example of attention being allocated to System 2, but only towards one of the teams. In turn, System 2 is not available to process the basic input of the automatic functions (seeing and orienting) of System 1. In the experiment, viewers would even deny that a gorilla had entered the stage, which illustrates how we can both “[. . . ] be blind to the obvious, and [. . . ] also blind to our blindness.” [28].

58 https://en.wikipedia.org/wiki/Neuro-linguistic_programming

59 Further descriptions and examples can be found both on the referenced Wikipedia page and in [28].

60 From https://en.wikipedia.org/wiki/Thinking,_Fast_and_Slow#Two_systems

61 From https://en.wikipedia.org/wiki/Thinking,_Fast_and_Slow#Two_systems

62 Two different versions of the experiment video are available on their homepage: http://www.theinvisiblegorilla.com/videos.html

In our everyday life, people often try to save energy, both physically and psychologically. This also goes for the two Systems, where we rely on System 1 (acting unconsciously and automatically) to give input to System 2 as described, and often even to process the input to save attention. It is this “function” a social engineer might exploit in a number of ways.

There are many concepts in [28]; they are grouped into three sections: “Heuristics and biases”, “Overconfidence” and “Choices”. All the concepts are closely related, and there are far too many to describe here. Some of those that could be exploited are summarized below:

Science of availability Judgment by how easily examples are brought to mind. Recent plane crashes can make people afraid of flying, and the fewer examples one has to think of when judging one’s own personality traits, the higher one ranks oneself (because recall feels easy). Awareness campaigns actually target the same mechanism: by presenting employees with a personal example of social engineering, the campaign makes such an attack readily available to their minds, so they recognize it as a possibility if they experience it in person.

Bad events/loss aversion The brain is wired to perceive a threatening face in a crowd of happy people, but not the other way around. An experiment showed that the brain subconsciously went into “alert” after having been shown a threatening set of eyes for 2/100 of a second, without System 2 being aware of it. Emotionally threatening words attract far more attention than “happy” words, i.e. a threat or a sense of urgency will weigh heavily when deciding on a course of action.

We are also much more prone to avoid bad self-definitions than to pursue good ones, because System 1 processes them on behalf of System 2 (a formalization of loss aversion is sketched after this list). This leads to the observation that “Bad impressions and bad stereotypes are quicker to form and more resistant to disconfirmation than good ones.” [28].

WYSIATI The concept “What You See Is All There Is” (WYSIATI) covers a large range of biases originating from the Systems seeking coherence of information, but not completeness. It comprises a variety of judgment biases, including:

Confirmation bias The tendency to search for information that confirms pre-existing hypotheses.

Overconfidence Used to make sense of a complex world, the mind puts too much trust in the information at hand and suppresses doubt about missing vital information and about ambiguity.

Framing effects E.g. how a 90 % survival rate sounds more promising than a 10 % mortality rate (although both describe exactly the same outcome) when standing next to your kin in a hospital bed.

Base rate neglect An example given is a description of a quiet and tidy male; when asked whether he is more likely to be a farmer or a librarian, people answer librarian due to the description of the person – even though there are 20 times more male farmers than librarians (a worked calculation of this example is sketched after this list).

Later, WYSIATI is also exemplified as “[. . . ] constructing the best possible story from the information available.” [28].

Intuitive predictions A person is presented with some evidence and proposed a “target of prediction”. He will seek to create a link between the two, using the concepts of WYSIATI and his norm/perception of the subject. Kahneman notes how, surprisingly, almost everyone has a perception of even the most obscure subjects, e.g. how a professional sports team manager will think during the game or when selecting players. People will not be able to point out how this norm was created, but it is there.

System 1 uses the associative memory here; it can reject false information, but it cannot distinguish smaller inconsistencies – “as a result, intuitive predictions are almost completely insensitive to the actual predictive quality of the evidence.” [28].
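As a side note to the loss aversion concept above: the popular exposition in [28] gives no formulas, but Kahneman and Tversky formalized loss aversion in prospect theory through a value function that is steeper for losses than for gains. The parameter estimates below are from their 1992 follow-up work, not from [28]:

\[
v(x) =
\begin{cases}
x^{\alpha} & \text{if } x \geq 0 \\
-\lambda\,(-x)^{\beta} & \text{if } x < 0
\end{cases}
\qquad \text{with } \alpha \approx \beta \approx 0.88 \text{ and } \lambda \approx 2.25
\]

With $\lambda \approx 2.25$, a prospective loss weighs roughly twice as heavily as an equally large gain, which is consistent with threats and urgency dominating the target’s choice of action.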
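The farmer/librarian example can likewise be made concrete with a small Bayesian calculation; the conditional probabilities below are illustrative assumptions, not figures from [28]. Suppose the description fits 90 % of librarians but only 5 % of farmers, and apply the 20:1 base rate of male farmers to male librarians:

\[
P(\text{librarian} \mid \text{description})
= \frac{0.9 \cdot \frac{1}{21}}{0.9 \cdot \frac{1}{21} + 0.05 \cdot \frac{20}{21}}
\approx 0.47
\]

Even under a description that strongly favors librarians, the base rate alone keeps “farmer” the slightly more probable answer – exactly the information System 1 discards.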

From the examples above, we can set actual social engineering tricks into context and get an understanding of why they work despite awareness efforts.

Intuitive predictions and WYSIATI are strong drivers in this. One can imagine how a social engineer might place a call to a target, claiming to be some specific person and presenting evidence closely resembling what the target needs to hear/know to intuitively fit the social engineer with his claim. The target might be overconfident in connecting the evidence into a complex statement or call for action, or, through confirmation bias – expecting to be the receiver of such a call, or having heard a plausible claim of identity – believe he understands the situation and the evidence given (e.g. a story or a fraudulent e-mail); intuitively he connects the two.

A list of “common social engineering methods” from [35] p. 332 is shown in Section 2.4.1. We can see how some of the methods work in the context of Kahneman’s theories; e.g. using insider lingo will lead the target to create a false picture of the social engineer as being “one of us”.

The attacker can also pretext his target, pretending to call to offer help or a systems update.

Following Kahneman’s theory, the reason this method works is that the target intuitively creates a concept of reality, because of “WYSIATI”: he will not consider the information that is not there (perhaps encouraged by the social engineer using other tricks). The target is not only blind to what might appear obvious to others; he is also blind to the fact that he is blind in the first place – but it is convenient to the target: he has reduced the question from “Who is he, and is he allowed access?” to “Could he be a peer?”. In turn, this also helps strengthen the social engineer’s proposed story, as the target has now inferred the situation and the identity of the social engineer himself instead of being told by him. This creates a stronger case in the target’s mind.

Similarly, the list of “warning signs of an attack” also contains methods which we can relate to the concepts of [28]: stressing urgency, threatening negative consequences, claiming authority and name dropping all contribute to establishing a context of high pressure/importance, which will lead the target to comply in order to avoid bad events.

Another theory can also explain the attack method of cheating the target into thinking he is interacting with a peer: the attacker can aim to create an aura of “belongingness”63.

On the Wikipedia page on the topic, several studies are referenced on how people like to feel related to and understood by a counterpart. The drive to form and maintain social bonds is very strong, and hence the feeling of “social relatedness” is associated with positivity and may also enhance feelings of self-worth [60]. On a similar note, conformity to a group is also important to an individual, so his actions can be influenced if it is possible to trick the target into thinking that he is acting outside the group norm.

The methods of the social engineer listed in Section 2.4.1 contain examples exploiting the above traits. By posing and acting as a fellow employee (using insider lingo) or by name dropping, the attacker seeks exactly to create this feeling of “belongingness” with his target and subsequently gain his trust.

Finally, a theory named the “foot-in-the-door technique”64 can be used. It explains how a person is more inclined to agree to a large request if he has been posed with a “lesser” request first (smaller in e.g. workload, cost or intrusion of privacy). [35] has a couple of examples of this, where e.g. the social engineer asks an employee for a bit of time to answer a survey and afterwards calls back to ask the employee to extract data for him (e.g. to print and mail an internal e-mail or read out a guard duty schedule).