
2.6.1 Informal Reasoning

A protocol is proposed by its designer, who claims that it achieves the expected security properties. Nevertheless, the protocol may still contain subtle flaws. The proposed protocol is therefore analyzed by others to check whether it is really as secure as claimed. In the early days, cryptographic protocols were analyzed and tested informally. An ideal analysis would be a thorough and complete test that checks every possible path and state of the protocol without omission; in theory, it would eventually find every potential flaw. However, this is impractical in most cases. Instead, people check only a limited number of test cases.

In this way, a potential flaw can be found only if a suitable test case happens to be included. Otherwise the flaw goes undetected, even though it exists.

As cryptographic protocols came into wider use, the demand to discover such potential flaws grew stronger. Although informal analysis became more and more conscientious, it could still miss subtle yet significant flaws. The literature reports such findings in several well-known protocols. Most of these flaws and weaknesses were discovered only years after the protocol was proposed.

• The three-message protocol in the CCITT X.509 standard was proposed in 1987 [10]. A parallel session attack was presented 2 years later by Burrows, Abadi and Needham in 1989 [9].


• The Needham-Schroeder shared-key protocol was proposed in 1978 [23]. A possible replay attack on this protocol was reported by Denning and Sacco 3 years later, in 1981 [11].

• The Denning-Sacco public-key protocol was proposed in 1981 [11]. Abadi and Needham detected a possible masquerade attack 13 years later, in 1994 [3].

• The Needham-Schroeder public-key protocol was proposed in 1978 [23]. It took 17 years before a man-in-the-middle attack on this protocol was finally discovered by Lowe in 1995 [17]; the attack is sketched just after this list.
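For concreteness, Lowe's attack has the following well-known shape, written here in conventional protocol notation: agent A runs the protocol with the intruder I, who reuses A's messages towards B while masquerading as A (written I(A)); Na and Nb are nonces and pk(X) denotes X's public key.

1. A → I    : {Na, A}pk(I)
2. I(A) → B : {Na, A}pk(B)
3. B → I(A) : {Na, Nb}pk(A)
4. I → A    : {Na, Nb}pk(A)
5. A → I    : {Nb}pk(I)
6. I(A) → B : {Nb}pk(B)

At the end of the run, B believes he has completed a session with A, yet the nonce Nb is known to the intruder.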

These are only some of the examples. Why did it take years to find these flaws? The main reason is that the security goals were informally stated and poorly understood [14]. Confidentiality is the most straightforward security property, but it is not the only goal; it is also very important to authenticate both sides of the communication.

Another important reason is that the attacker can be active and powerful. Attacks can affect the protocol in various ways, as the attacker may act as a protocol participant, as an external party, or even as a combination of both. This makes the cases too complicated for informal reasoning, so subtle flaws and weaknesses may survive informal analysis.

2.6.2 Formal Approaches

Formal verification can significantly help to detect protocol flaws and can increase our understanding of a protocol by making essential properties explicit [27]. It can also help to yield general principles of secure protocol design [5].

Several formal methods have been proposed and have been used for years in the analysis of cryptographic protocols. Survey papers [19][20] give detailed introductions to these formal approaches and the related research. In this section, we only mention some of the major approaches in this field besides Paulson's inductive approach.

BAN Logic

BAN logic [9], proposed by Burrows, Abadi and Needham, is well known and has been widely used. It provides statements for idealizing a protocol into initial logic formulae. Some examples of BAN logic statements are as follows.

P said X means the principal P sent a message containing X.

P sees X means the principal P receives a message containing X, and X is readable to P.

P believes X means the principal P acts as if X is true.

X is fresh means X has not appeared in any message at any time before the current session of the protocol.

BAN logic also provides several inference rules that are applied to the logic formulae to reason about beliefs in a protocol. A set of beliefs is then derived from the initial logic formulae. The protocol is considered correct if the derived set of beliefs is adequate according to some predefined notions.
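Two of the inference rules give the flavour of how beliefs are derived (paraphrased from [9] in the statement notation above, not in their original symbolic form):

If P believes that K is a key shared only between P and Q, and P sees {X}K (a message X encrypted under K), then P believes Q said X (the message-meaning rule).

If P believes X is fresh, and P believes Q said X, then P believes Q believes X (the nonce-verification rule).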

BAN logic has been used for years as a popular formal approach to verifying cryptographic protocols. It successfully detected potential flaws in several protocols years after they had been proposed. However, some protocols that passed verification by BAN logic were later shown to contain flaws.

One example is a faulty variant of the Otway-Rees protocol in which the responder's nonce is not encrypted, as exemplified in [26] and [27]; BAN logic is not adequate to detect this fault [27]. Another example is the Needham-Schroeder public-key protocol: it was proved correct using BAN logic [9], but a possible attack against it was later reported by Lowe in [17].

BAN logic is designed for reasoning about the evolution of the beliefs and trust of the participants in a cryptographic protocol [32]. As BAN logic models beliefs rather than knowledge, it can deal with authentication goals but is weak at reasoning about confidentiality. It also does not clearly formulate the possible actions of an attacker. In spite of these limitations, BAN logic is still a simple and usable approach for detecting some flaws: the logic is straightforward, and the proofs are normally short and carried out by hand, so machine support is not required.

Model Checking

Model checking approaches model a cryptographic protocol as a finite state machine and verify the protocol by checking whether certain properties hold in all reachable states. The checking process can be performed automatically by tools known as model checkers. On the other hand, model checking is restricted by the size of the protocol: as the technique is based on exploring a state machine, it can only deal with simple protocols of small size.
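Stated abstractly, if $s_0$ is the initial state, $\mathit{Reach}(s_0)$ the set of states reachable from it under the transition relation, and $\varphi$ the desired property, the check a model checker performs amounts to

$$\forall s \in \mathit{Reach}(s_0).\ \varphi(s),$$

and an attack corresponds to a reachable state in which $\varphi$ fails.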


Most versions of the model checking approach are based on the Dolev-Yao model [12]. One successful example is the NRL Protocol Analyzer [21], which specifies an insecure state and tries all possibilities to find a path from the initial state to that insecure state; the protocol is claimed to be insecure if such a path is found. Another well-known model checker is FDR [30]. In this method, the protocol and the property are specified separately using CSP [15], and the two sets of traces are then compared to check whether the traces of the protocol process form a subset of the traces of the property process, thereby verifying the protocol.

The Lysa-Calculus

Lysa [8] is a process calculus designed for the analysis of cryptographic protocols. It is a more recent approach than the ones discussed above. Lysa is patterned after the Spi-calculus [2], which is also used for the analysis of security protocols; the Spi-calculus in turn is based on the π-calculus [22] but adds support for cryptographic operations. The derivation is illustrated in Figure 2.7.

Figure 2.7: Process Calculus [24]

Like most process calculi, the Spi-calculus uses channels to enable communication between processes. The derived Lysa-calculus, however, does not retain the concept of a channel, which can be considered the most significant syntactic difference from the Spi-calculus. Compared with the aforementioned calculi, Lysa is small but still powerful enough for modelling security protocols, and it is amenable to automatic analysis [24].

This approach formalizes the protocol as a Lysa process and then applies static analysis to verify it. The static analysis is kept relatively simple while still aiming to provide as much useful information as possible. Lysa is mainly focused on the verification of authentication. It has been demonstrated that the approach is adequate for detecting several authentication flaws in both shared-key and public-key cryptographic protocols [8]. Development of the Lysa-calculus is still in progress, and several extensions have been proposed for the future.
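As a rough illustration only (the notation here is a simplification of the syntax in [8] and may not match it exactly), a single encrypted message sent over the one global network might be written as two parallel processes:

A:  (ν m) ⟨A, B, {m}K⟩. 0
B:  (A, B; x). decrypt x as {; y}K in 0

Output ⟨…⟩ places a tuple on the global network, input (…; …) pattern-matches the components before the semicolon and binds the rest, and decrypt … as … in … plays the same matching role for ciphertexts; no channel names appear anywhere.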

However, the Lysa-calculus, like the Spi-calculus, is relatively difficult to grasp. In addition, since time is not representable in Lysa, the approach cannot be used to analyze protocols that rely on timestamps.

Chapter 3

The Inductive Approach

This chapter introduces Paulson’s inductive approach [26], which is adopted for protocol verification in this thesis project. The protocol is formally specified using operational semantics, and the protocol model is defined inductively. A trace is a list of events that have taken place on a network system running the protocol, and each possible action by an agent extends the event trace.
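For orientation, the events recorded in a trace are, roughly, those of the Event theory in the Isabelle Auth development (shown here in slightly simplified Isabelle/HOL; the exact formulation may differ between Isabelle versions):

datatype event =
    Says  agent agent msg   (* an agent sends a message to another agent *)
  | Gets  agent msg         (* an agent receives a message *)
  | Notes agent msg         (* an agent stores a message internally *)

A trace is then simply a value of type "event list", with the most recent event first.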

The protocol is then formalized as the set of all its possible traces.

Security properties are also formally specified and proved using the provided induction rules. The proofs are carried out with the help of the interactive proof assistant Isabelle [25].
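To give a flavour of such a model, the following sketch, in the style of the Auth theories, shows the first rules of an inductive protocol definition (the rule names and the exact key and message notation are illustrative and may not match the current Isabelle sources verbatim): the empty trace is admissible, the spy may always send anything he can synthesize from the traffic he has observed, and an honest agent may start a run with a fresh nonce.

inductive_set ns_public :: "event list set" where
  Nil:  "[] ∈ ns_public"
  (* the spy can say anything he can build from observed traffic *)
| Fake: "⟦ evsf ∈ ns_public; X ∈ synth (analz (spies evsf)) ⟧
          ⟹ Says Spy B X # evsf ∈ ns_public"
  (* first protocol message: A sends a fresh nonce, encrypted for B *)
| NS1:  "⟦ evs1 ∈ ns_public; Nonce NA ∉ used evs1 ⟧
          ⟹ Says A B (Crypt (pubK B) {|Nonce NA, Agent A|}) # evs1 ∈ ns_public"

Each rule extends an existing trace by one event, which is exactly how "each possible action by an agent extends the event trace".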

All the Isabelle theory files for verifying cryptographic protocols can be found in the folder Auth [1] at Isabelle/HOL/Auth. The theory dependencies for the main part of this collection are illustrated in Figure 3.1. In general, the basics for verifying a cryptographic protocol are provided by three theories named Message, Event and Public.

In this chapter, we introduce the basic types (§ 3.1) and operators (§ 3.3) of the inductive approach, as well as the definition of an agent's knowledge (§ 3.2).

Figure 3.1: Theory Dependencies of Auth (part)