
The operational semantics we are going to define is based on a labelled transition system (lts for short), which determines a process's ability to develop from one configuration to another. For this purpose we define the set of possible configurations. In the definition a set of atomic complementary action symbols, Act̄, disjoint from but equipotent to Act, is used, together with a bijective map ¯ : Act → Act̄.

Definition 3.2.1 BL is defined to be the least set C which satisfies:

  PL ⊆ C

  ā.p ∈ C if p ∈ C and a ∈ Act

  p₁‖p₂ ∈ C if p₁, p₂ ∈ C

L is extended to BL by: L(ā.q) = {a} ∪ L(q).

The configuration language, CL, is defined to be the subset of BL where every subterm of the form p‖q has L(p) ∩ L(q) = ∅.   □

Notice

i) L "forgets" whether a label belongs to Act or Act̄. So L(p) for p ∈ CL could be defined as taking L on p′ ∈ PL, where p′ is p with all ¯'s stripped off.

ii) PL ⊆ CL and p+q ∈ CL only if p, q ∈ PL.   □

In the sequel we will have the implicit requirement L(p) ∩ L(q) = ∅ whenever writing p‖q.

What remains to be defined for the lts over CL and Act ∪ Act̄ is the action relation.

Definition 3.2.2 Let −→ ⊆ CL × (Act ∪ Act̄) × CL (writing p −y→ q for (p, y, q) ∈ −→) be the least relation over CL which satisfies:

1)  a.p −a→ ā.p

2)  ā.p −ā→ p

3)  p −b→ p′  implies  ā.p −b→ ā.p′

4)  p −a→ p′  implies  p + q −a→ p′  and  q + p −a→ p′

5)  p −y→ p′, y ∈ Act ∪ Act̄  implies  p‖q −y→ p′‖q  and  q‖p −y→ q‖p′

where y ∈ Act ∪ Act̄ and a, b, … range over Act.

Corollary 3.2.3 p −y→ p′ ⇒ L(y) ⊆ L(p), L(p′) ⊆ L(p).

The fact that p −y→ p′ implies L(p′) ⊆ L(p) gives the well-definedness of the relation, since then p, q disjoint implies p′, q disjoint too in 5).

Proposition 3.2.4 ∀a ∈ Act ∀p ∈ CL. |{q | p −ā→ q}| ≤ 1

Proof Induction on the structure of p. In each case the cardinality of {q | p −ā→ q} will be seen to be less than or equal to one.   □

Intuitively one can think of a.p as the process which can be signaled to initiate action a, thereby transforming into ā.p. This term in turn represents a process which contains an action a signaled to initiate and which can signal its completion by transforming by ā into p. The inference rule 3) says that more actions can be signaled to initiate before earlier signaled actions themselves signal their completion. The term p+q represents the process which can act either as p or as q, and p‖q represents the process which can act both as p and as q, so actions of one subprocess can be signaled to initiate or complete independently of the other.
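Rules 1) - 5) lend themselves to a direct executable reading. The following is a minimal Haskell sketch of the configurations and of Definition 3.2.2; all names (Act, Lab, Cfg, step) are hypothetical, complementation is encoded in the constructors rather than by a separate alphabet Act̄, and the requirement L(p) ∩ L(q) = ∅ for p‖q is assumed rather than checked.

```haskell
type Act = String

-- Transition labels: a (signal initiation) or ā (signal completion).
data Lab = Sig Act | Com Act
  deriving (Eq, Show)

-- Configurations of CL:  NIL, a.p, ā.p, p+q, p‖q.
data Cfg = Nil | Pre Act Cfg | Bar Act Cfg | Sum Cfg Cfg | Par Cfg Cfg
  deriving (Eq, Show)

-- All pairs (y, p') with p −y→ p', following rules 1) - 5).
step :: Cfg -> [(Lab, Cfg)]
step Nil       = []
step (Pre a p) = [(Sig a, Bar a p)]                                -- 1)
step (Bar a p) = (Com a, p)                                        -- 2)
               : [ (Sig b, Bar a p') | (Sig b, p') <- step p ]     -- 3)
step (Sum p q) = step p ++ step q                                  -- 4)
step (Par p q) = [ (y, Par p' q) | (y, p') <- step p ]
              ++ [ (y, Par p q') | (y, q') <- step q ]             -- 5)
```

Note how rule 3) is rendered: only initiation labels b ∈ Act pass through a pending ā-prefix, exactly as the side condition "a, b, … range over Act" prescribes, so completions inside a sequential component can only be signaled in the order in which the actions were signaled to initiate.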

We extend the (atomic) action relation to strings over Act ∪ Act̄ by:

  p −z→ p′ for z = y₁⋯yₙ iff there exist p₀, …, pₙ ∈ CL with p = p₀ −y₁→ p₁ −y₂→ ⋯ −yₙ→ pₙ = p′.
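Continuing the sketch above, this extension is just iterated stepping (steps is a hypothetical helper reusing Cfg, Lab and step from the previous sketch):

```haskell
-- All p' with p −z→ p' for a string z of labels.
steps :: Cfg -> [Lab] -> [Cfg]
steps p []      = [p]
steps p (y : z) = [ p'' | (y', p') <- step p, y' == y, p'' <- steps p' z ]
```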

On the basis of this and the notion of experiment we define how two processes are related in the operational semantics.

We consider two statements about a process p and an experiment e:

p may accept e

p may reject e

The relations can then be defined as follows:

Definition 3.2.5 For processes p, q ∈ PL we define

  p ≲ₐ q iff for all experiments e: p may accept e implies q may accept e

  p ≲ᵣ q iff for all experiments e: p may reject e implies q may reject e

  p ≲ q  iff p ≲ₐ q and p ≲ᵣ q   □

The next thing to consider is which experiments we will allow and when a process may accept/reject an experiment.

An experiment, e, will be split into two parts. First a set of actions A is signaled to initiate, and second a test t is performed on these. So

an experiment, e, is a pair: (A, t).

In fact the signaled set of actions can be considered as a multiset over Act, because we want to be able to signal the same action more than once. If A is a finite multiset over Act and a ∈ Act, let |A|ₐ denote the number of a's in A. For a tree-semiword s and such a multiset A we write:

  Aₛ = A iff ∀a ∈ Act. |A|ₐ = ψ(s, a)

For a multiset A and a process p ∈ PL the set of possible configurations we can obtain by signaling A is D(A, p), defined as follows:

Definition 3.2.6 D(A, p) := {p′ ∈ CL | ∃w ∈ W. A_w = A, p −w→ p′}   □

Recall that W and Act are isomorphic, so it makes sense to write p −w→ p′ for a w ∈ W. Notice that nondeterministic choices are made when signaling A to initiate.
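A naive way to compute D(A, p) of Definition 3.2.6 is to try the signals of A in every order. The sketch below does this, reusing Cfg, Lab and step from the earlier sketches; Data.Map is used as a stand-in for finite multisets, dSet and MSet are hypothetical names, and duplicate entries in the result list are not pruned.

```haskell
import qualified Data.Map as M

-- A finite multiset over Act:  |A|_a  is the value stored under key a.
type MSet = M.Map Act Int

-- D(A,p): all configurations obtainable from p by signalling A in some order.
dSet :: MSet -> Cfg -> [Cfg]
dSet a p
  | M.null a  = [p]
  | otherwise = [ q | x <- M.keys a
                    , (Sig b, p') <- step p, b == x
                    , q <- dSet (dec x a) p' ]
  where
    dec x = M.update (\n -> if n > 1 then Just (n - 1) else Nothing) x
```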

The next thing to decide is the test language TL. We will only allow tests on the actions which are signaled to initiate, so the language must be based on Act̄. It shall be possible to test the order in which the process can signal completions of the actions previously signaled to initiate, so if t is a test, ā.t is a test too. If p is a configuration, the test t & t′ denotes the test whether both the test t and the test t′ are possible on p. Similarly, t ∇ t′ is the test whether either t or t′ is possible. A test is ended with ⊤ to notify that the test was possible.

Definition 3.2.7 The test language, TL, is defined by the schema

  t ::= ⊤ | ā.t, a ∈ Act | t & t | t ∇ t   □

Observe that for a test like t & t′ or t ∇ t′ there are no restrictions on the sorts of t and t′.

⊤, the successful test, is one of the two possible outcomes of a test. The other, the unsuccessful test, is denoted ⊥. When having a test like t & t′, one subtest can turn out to be successful and the other unsuccessful, so during the total test subconfigurations like ⊤ & ⊥ are possible.

Definition 3.2.8 The test configurations, TC, are defined by the schema:

  o ::= (t, p), t ∈ TL, p ∈ CL | ⊤ | ⊥ | o & o | o ∇ o   □

A test is finished when it is known to be successful or unsuccessful, i.e., when one of the test configurations ⊤ or ⊥ is reached. The relation between the different test configurations is determined by the test relation → ⊆ TC × TC defined below.

Definition 3.2.9 Let p, p′ ∈ CL, t ∈ TL and o, o′, o″ ∈ TC in the following.

Axioms:

1)□:  (t □ t′, p) → (t, p) □ (t′, p)   for □ ∈ {&, ∇}

2):   (⊤, p) → ⊤

3)&⊤:  o & ⊤ → o          ⊤&:  ⊤ & o → o
  &⊥:  o & ⊥ → ⊥          ⊥&:  ⊥ & o → ⊥
  ∇⊤:  o ∇ ⊤ → ⊤          ⊤∇:  ⊤ ∇ o → ⊤
  ∇⊥:  o ∇ ⊥ → o          ⊥∇:  ⊥ ∇ o → o

Inferences:

4)□:  o → o′  implies  o □ o″ → o′ □ o″  and  o″ □ o → o″ □ o′,   for □ ∈ {&, ∇}

5):   p −ā→ p′, a ∈ Act  implies  (ā.t, p) → (t, p′)

6):   p −̸ā→, a ∈ Act  implies  (ā.t, p) → ⊥

Here p −̸ā→ is just shorthand notation for ¬∃p′ : p −ā→ p′.
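Definitions 3.2.7 - 3.2.9 can be given the same concrete treatment. Since, by Proposition 3.2.12 below, every test configuration reaches exactly one of ⊤ and ⊥, a recursive evaluator computing this unique outcome is a faithful way to sketch the test relation; Test, TConf and run are hypothetical names, and Act, Cfg and step come from the earlier sketches.

```haskell
-- Tests (Definition 3.2.7) and test configurations (Definition 3.2.8).
data Test  = Top | TBar Act Test | TAnd Test Test | TOr Test Test
  deriving (Eq, Show)

data TConf = Run Test Cfg | Succ | Fail
           | CAnd TConf TConf | COr TConf TConf

-- The unique outcome of a test configuration: True ~ ⊤, False ~ ⊥.
run :: TConf -> Bool
run Succ                = True
run Fail                = False
run (CAnd o o')         = run o && run o'                   -- axioms 3) for &, via Lemma 3.2.11
run (COr  o o')         = run o || run o'                   -- axioms 3) for ∇
run (Run Top _)         = True                              -- axiom 2)
run (Run (TAnd t t') p) = run (CAnd (Run t p) (Run t' p))   -- axiom 1)&
run (Run (TOr  t t') p) = run (COr  (Run t p) (Run t' p))   -- axiom 1)∇
run (Run (TBar a t) p)  =                                   -- rules 5) and 6)
  case [ p' | (Com b, p') <- step p, b == a ] of
    (p' : _) -> run (Run t p')                              -- unique by Prop. 3.2.4
    []       -> False
```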

Example:

  (ā.b̄.⊤ & b̄.ā.⊤, ā.NIL ‖ b̄.(c.NIL + d.NIL))
  → (ā.b̄.⊤, ā.NIL ‖ b̄.(c.NIL + d.NIL)) & (b̄.ā.⊤, ā.NIL ‖ b̄.(c.NIL + d.NIL))
  → (b̄.⊤, NIL ‖ b̄.(c.NIL + d.NIL)) & (b̄.ā.⊤, ā.NIL ‖ b̄.(c.NIL + d.NIL))
  → (b̄.⊤, NIL ‖ b̄.(c.NIL + d.NIL)) & (ā.⊤, ā.NIL ‖ (c.NIL + d.NIL))
  → (⊤, NIL ‖ (c.NIL + d.NIL)) & (ā.⊤, ā.NIL ‖ (c.NIL + d.NIL))
  → ⊤ & (ā.⊤, ā.NIL ‖ (c.NIL + d.NIL))
  → (ā.⊤, ā.NIL ‖ (c.NIL + d.NIL))
  → (⊤, NIL ‖ (c.NIL + d.NIL))
  → ⊤

Notice that this is only one of many possible derivations that lead to ⊤.
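Under the assumptions of the earlier sketches, the example can be replayed with the evaluator; exP and exT are hypothetical names for the configuration and the test above.

```haskell
exP :: Cfg
exP = Par (Bar "a" Nil) (Bar "b" (Sum (Pre "c" Nil) (Pre "d" Nil)))   -- ā.NIL ‖ b̄.(c.NIL + d.NIL)

exT :: Test
exT = TAnd (TBar "a" (TBar "b" Top)) (TBar "b" (TBar "a" Top))        -- ā.b̄.⊤ & b̄.ā.⊤

-- run (Run exT exP) == True, matching the derivation above that ends in ⊤.
```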

A test configuration o is called terminal iff o ↛ (i.e., ¬∃o′ ∈ TC. o → o′). In a moment we will show that the only possible terminal test configurations are ⊤ and ⊥, and that from any test configuration exactly one of them can be reached, such that a test is either successful or unsuccessful, and cannot be both. In this sense our notion of test is well-defined.

The fact that the terminal configurations are {⊤, ⊥} and that one and only one of these can be reached from a test configuration o has as a consequence:

  ∀p ∈ CL ∀t ∈ TL. (t, p) →∗ ⊤ ⇔ ¬((t, p) →∗ ⊥)

An experiment e can now be considered as (A, t), where A is the multiset over Act which determines the actions that should be signaled to initiate, such that a test can be run on them, and t is the actual test to run.

Informally a process p may accept the experiment e = (A, t) if

a) it makes sense to run the test, i.e., the actions of A can be signaled, and

b) one of the processes p′ obtainable from p by signaling A to initiate passes the test t successfully.

Similarly p may reject (A, t) if, under the same conditions as above, one of the obtainable processes p′ passes the test t unsuccessfully. Notice that we may have a process p and an experiment e such that p may accept e and p may reject e! Also notice that the two statements are not dual, i.e., we do not have that p may not accept e implies p may reject e (where p may not accept e means it is not the case that p may accept e). This is because the reason why p may not accept e can be that it does not make sense to run the test t, in which case we have p may not reject e too. Formally:

Definition 3.2.10 Denote the set of experiments by E, i.e., e ∈ E iff e = (A, t), where A is a finite multiset over Act and t ∈ TL.

Let p ∈ PL and e = (A, t) be an experiment. Then:

a) p may accept e iff ∃q ∈ D(A, p). (t, q) →∗ ⊤

b) p may reject e iff ∃q ∈ D(A, p). (t, q) →∗ ⊥   □

Example:

  a.b.NIL + b.a.NIL + a.NIL‖b.NIL  may accept  ({a, b}, ā.b̄.⊤ & b̄.ā.⊤)
  a.b.NIL + b.a.NIL  may not accept  ({a, b}, ā.b̄.⊤ & b̄.ā.⊤)

  a.b.NIL + b.a.NIL + a.NIL‖b.NIL  may reject  ({a, b}, ā.b̄.⊤ ∇ (b̄.ā.⊤ & ā.b̄.⊤))
  a.NIL‖b.NIL  may not reject  ({a, b}, ā.b̄.⊤ ∇ (b̄.ā.⊤ & ā.b̄.⊤))
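In terms of the earlier sketches, Definition 3.2.10 becomes a pair of one-liners; mayAccept, mayReject and the use of Data.Map for multisets are assumptions of the sketch.

```haskell
mayAccept, mayReject :: (MSet, Test) -> Cfg -> Bool
mayAccept (a, t) p = any (\q ->      run (Run t q))  (dSet a p)
mayReject (a, t) p = any (\q -> not (run (Run t q))) (dSet a p)

-- For instance, with exT from before:
--   mayAccept (M.fromList [("a",1),("b",1)], exT) (Par (Pre "a" Nil) (Pre "b" Nil))
-- evaluates to True.
```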

So we have now formally defined what was used in the definition of the three testing relations. Notice that none of the equivalences is able to distinguish nondeterminism.

That p may accept e and p may reject e are not dual can now also be seen formally:

  ¬(∃q ∈ D(A, p). (t, q) →∗ ⊤)  iff  ∀q ∈ D(A, p). ¬((t, q) →∗ ⊤)

                                 iff  ∀q ∈ D(A, p). (t, q) →∗ ⊥

and the last statement does not in general imply ∃q ∈ D(A, p). (t, q) →∗ ⊥, i.e., p may reject e: the implication fails exactly when D(A, p) = ∅.

Before we give the promised proof that every test configuration has one and only one terminal configuration, we prove the following lemma, which also clarifies the rôle of & and ∇.

Lemma 3.2.11 For all o₁, o₂ ∈ TC:

a) o₁ & o₂ →∗ ⊤ iff o₁ →∗ ⊤ and o₂ →∗ ⊤

b) o₁ & o₂ →∗ ⊥ iff o₁ →∗ ⊥ or o₂ →∗ ⊥

c) o₁ ∇ o₂ →∗ ⊤ iff o₁ →∗ ⊤ or o₂ →∗ ⊤

d) o₁ ∇ o₂ →∗ ⊥ iff o₁ →∗ ⊥ and o₂ →∗ ⊥

Proof

a) only if: Assume o₁ & o₂ →ⁿ ⊤; we proceed by induction on n, with cases on the rule used in the first step o₁ & o₂ → o (a first step by &⊥ or ⊥& is impossible, since then o = ⊥, which cannot reach ⊤).

3) &⊤, ⊤&: W.l.o.g. assume the step o₁ & o₂ → o is o₁ & ⊤ → o₁. Then clearly o₂ →∗ ⊤, and o₁ = o →ⁿ⁻¹ ⊤, which implies o₁ →∗ ⊤.

4) &: Two cases: o₁ → o₁′ with o = o₁′ & o₂, and o₂ → o₂′ with o = o₁ & o₂′.

In the former, o →ⁿ⁻¹ ⊤ means o₁′ & o₂ →ⁿ⁻¹ ⊤. By the hypothesis of induction o₁′ →∗ ⊤ ∧ o₂ →∗ ⊤. From o₁ → o₁′ and o₁′ →∗ ⊤ we get o₁ →∗ ⊤, so we are done for this case.

The latter is handled symmetrically. This also completes the inductive step.

if: It is enough to prove that o₁ →ⁿ ⊤ ∧ o₂ →ᵐ ⊤ implies o₁ & o₂ →∗ ⊤, by induction on n + m.

n + m = 0: Then o₁ = ⊤ = o₂. No matter whether we use 3) &⊤ or 3) ⊤& we get o₁ & o₂ → ⊤ and thereby o₁ & o₂ →∗ ⊤.

n + m > 0: We split into two subcases:

m > 0: This implies o₂ → o₂′ →ᵐ⁻¹ ⊤. By the hypothesis of induction o₁ & o₂′ →∗ ⊤. Using 4)& we now get o₁ & o₂ → o₁ & o₂′, hence o₁ & o₂ →∗ ⊤.

n > 0: Then o₁ → o₁′ →ⁿ⁻¹ ⊤. Similar to the case m > 0.

b) – d): Similar to a).   □

Now for an o ∈ TC let B(o) denote the set of terminal configurations reachable from o, i.e., B(o) := {o′ ∈ TC | o →∗ o′, o′ ↛}.

So what we shall prove is that {⊤, ⊥} equals the set of terminal configurations and that |B(o)| = 1, which follows from the proposition below.

Proposition 3.2.12 Let o ∈ TC. Then

a) {⊤, ⊥} = the set of terminal configurations

b) o →∗ ⊤ ∨̇ o →∗ ⊥, where ∨̇ means that exactly one of the possibilities is true.

Proof

a) By inspection of definition 3.2.9 we easily see that ⊤ and ⊥ are the only possible terminal configurations.

b) We split the proof in two: i) o →∗ ⊤ ∨ o →∗ ⊥, and ii) o →∗ o′, o′ ∈ {⊤, ⊥} ⇒ B(o) = {o′}, from which b) can be seen.

i) We prove o →∗ ⊤ ∨ o →∗ ⊥ by induction on the structure of o.

o = ⊤ or o = ⊥ (basis): Immediate.

o = o₁ & o₂: By the hypothesis of induction oᵢ →∗ ⊤ ∨ oᵢ →∗ ⊥ for i ∈ 2. Four cases:

o₁ →∗ ⊤, o₂ →∗ ⊤: Using the if part of lemma 3.2.11 a) we get o = o₁ & o₂ →∗ ⊤. In the three other cases we get o →∗ ⊥ using the if part of b) in the lemma.

o = o₁ ∇ o₂: Similar.

o = (t, p), t ∈ TL, p ∈ CL: To prove this part of the inductive step we use induction on the structure of t.

t = ⊤: 2) of definition 3.2.9 gives us o = (t, p) = (⊤, p) → ⊤, hence o →∗ ⊤.

t = ā.t′: Clearly either 5) or 6) can be used. In the latter case we directly get o →∗ ⊥. In the former we have (ā.t′, p) → (t′, p′). By the hypothesis of induction either (t′, p′) →∗ ⊤ or (t′, p′) →∗ ⊥, and the result follows.

t = t₁ & t₂: Using 1)& we get o = (t₁ & t₂, p) → (t₁, p) & (t₂, p). By the hypothesis of induction (tᵢ, p) →∗ ⊤ ∨ (tᵢ, p) →∗ ⊥ for i ∈ 2. As in the case o = o₁ & o₂ we get (t₁, p) & (t₂, p) →∗ ⊤ ∨ (t₁, p) & (t₂, p) →∗ ⊥. Hence also o →∗ ⊤ ∨ o →∗ ⊥.

t = t₁ ∇ t₂: Similar.

ii) Since o′ ∈ {⊤, ⊥} we can split the proof in two:

o′ = ⊤: Then we have to prove o →∗ ⊤ ⇒ B(o) = {⊤}. We will do this by induction on the structure of o. Since ⊤ ↛ and o →∗ ⊤ implies ⊤ ∈ B(o), we only need to prove o →∗ ⊤ ⇒ B(o) ⊆ {⊤}.

o = ⊤ or o = ⊥ (basis): Looking at definition 3.2.9 we see ¬(⊥ →∗ ⊤) and B(⊤) = {⊤}.

o = o₁ & o₂: By the only if part of lemma 3.2.11 a) we have o₁ →∗ ⊤ and o₂ →∗ ⊤. By the hypothesis of induction we then have B(o₁) = {⊤} and B(o₂) = {⊤}. Now assume B(o) ⊈ {⊤}. Since ⊤ and ⊥ are the only terminal configurations (by a)) this implies o = o₁ & o₂ →∗ ⊥, and by the only if part of lemma 3.2.11 b) we have o₁ →∗ ⊥ or o₂ →∗ ⊥, which contradicts B(o₁) = {⊤} and B(o₂) = {⊤}. So B(o) ⊆ {⊤}.

o = o₁ ∇ o₂: Symmetric.

o = (t, p), t ∈ TL, p ∈ CL: This part of the inductive step is also proved by structural induction, but this time on the structure of t.

t = ⊤: Looking at definition 3.2.9 we see that only 2) can be used, hence B(o) = B(⊤) = {⊤}.

t = ā.t′: Clearly only 5) or 6) can come under discussion. Two cases depending on p.

∃q. p −ā→ q: By proposition 3.2.4 there is at most one such q. Hence B(o) = B((t′, q)). By the hypothesis of induction B((t′, q)) ⊆ {⊤}.

¬∃q. p −ā→ q: This case can be excluded since o → ⊥ is the only possibility, ⊥ ↛, and we assume o →∗ ⊤.

t = t₁ & t₂: The only possibility is o = (t₁ & t₂, p) → (t₁, p) & (t₂, p) = o″ →∗ ⊤, and B(o) = B(o″). As in the case o = o₁ & o₂ we get B(o″) ⊆ {⊤}.

t = t₁ ∇ t₂: Symmetric.

o′ = ⊥: Similar to the case o′ = ⊤, but now t = ⊤ can be excluded, and in t = ā.t′ the case ¬∃q. p −ā→ q can no longer be ignored. In this last case we get o = (ā.t′, p) → ⊥ and B(o) = B(⊥) = {⊥}, as wanted.   □

We will now investigate some further properties of our test language which will be useful in the following sections.

The most important property is that every test t has a normal form, which we define in a moment. On the way to showing this we introduce some notation.

Definition 3.2.13 For t, t′ ∈ TL we let t ≅ t′ denote:

  ∀p ∈ CL. B((t, p)) = B((t′, p))

or, equivalently by the last proposition:

  ∀p ∈ CL. (t, p) →∗ ⊤ iff (t′, p) →∗ ⊤   □

Proposition 3.2.14 ≅ is an equivalence relation on TL such that for all t, t′, t″ ∈ TL:

a) t & t′ ≅ t′ & t

b) t ∇ t′ ≅ t′ ∇ t

c) t & (t′ & t″) ≅ (t & t′) & t″

d) t ∇ (t′ ∇ t″) ≅ (t ∇ t′) ∇ t″

e) t ∇ (t′ & t″) ≅ (t ∇ t′) & (t ∇ t″)

f) ā.(t & t′) ≅ ā.t & ā.t′

g) ā.(t ∇ t′) ≅ ā.t ∇ ā.t′

h) t ≅ t′ implies

   ā.)  ā.t ≅ ā.t′

   &)   t & t″ ≅ t′ & t″

   ∇)   t ∇ t″ ≅ t′ ∇ t″

Proof That ≅ is an equivalence relation is immediate from the definition.

a) – e) follow by lemma 3.2.11 from the similar properties of ∧ and ∨. This is not the case with f) – h).

f) We shall prove (ā.(t & t′), p) →∗ ⊤ iff (ā.t & ā.t′, p) →∗ ⊤.

if: By definition 3.2.9, (ā.t & ā.t′, p) →∗ ⊤ implies (ā.t, p) & (ā.t′, p) →∗ ⊤, which by lemma 3.2.11 implies (ā.t, p) →∗ ⊤ and (ā.t′, p) →∗ ⊤. Now (ā.t, p) →∗ ⊤ implies (ā.t, p) → (t, p′) →∗ ⊤ where p −ā→ p′. Similarly (ā.t′, p) → (t′, p″) →∗ ⊤, where p −ā→ p″. By proposition 3.2.4 we must have p′ = p″. Hence (t, p′) →∗ ⊤, (t′, p′) →∗ ⊤ and p −ā→ p′. Using lemma 3.2.11 again we get (t, p′) & (t′, p′) →∗ ⊤. From this and p −ā→ p′, 5), 1)& of definition 3.2.9 we see (ā.(t & t′), p) → (t & t′, p′) → (t, p′) & (t′, p′) →∗ ⊤.

only if: By inspection of definition 3.2.9 we see that (ā.(t & t′), p) →∗ ⊤ implies (ā.(t & t′), p) → (t & t′, p′) → (t, p′) & (t′, p′) →∗ ⊤, where p −ā→ p′. From this, definition 3.2.9 directly gives us: (ā.t & ā.t′, p) → (ā.t, p) & (ā.t′, p) → (t, p′) & (ā.t′, p) → (t, p′) & (t′, p′) →∗ ⊤.

g) Similar, but here proposition 3.2.4 is not necessary!

h) Assume t ≅ t′.

ā.): Let p ∈ CL and a ∈ Act be given. We shall show that (ā.t, p) →∗ ⊤ iff (ā.t′, p) →∗ ⊤. This is evident from proposition 3.2.4.

&): Let p ∈ CL and t″ ∈ TL be given. We shall show (t & t″, p) →∗ ⊤ iff (t′ & t″, p) →∗ ⊤.

if: (t′ & t″, p) →∗ ⊤ implies (t′ & t″, p) → (t′, p) & (t″, p) →∗ ⊤. By lemma 3.2.11 this implies (t′, p) →∗ ⊤ and (t″, p) →∗ ⊤. From t ≅ t′ we then have (t, p) →∗ ⊤. Reversing the arguments we obtain (t & t″, p) →∗ ⊤.

only if: Symmetric.

∇): Similar.   □
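Laws such as f) can at least be sanity-checked on concrete instances with the evaluator sketch; here with exP from before and two small subtests (checkF, t1 and t2 are hypothetical names, and a single instance of course only illustrates the law, it does not prove it).

```haskell
checkF :: Bool
checkF = run (Run (TBar "a" (TAnd t1 t2)) exP)
      == run (Run (TAnd (TBar "a" t1) (TBar "a" t2)) exP)   -- f):  ā.(t & t') ≅ ā.t & ā.t'
  where
    t1 = TBar "b" Top
    t2 = Top
-- checkF evaluates to True.
```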

Definition 3.2.15 t ∈ TL is a test normal form iff t is of the form

  &_{j∈n} ( ∇_{k∈n_j} w̄_{jk}⊤ ),   w_{jk} ∈ W for k ∈ n_j, 1 ≤ n_j, with j ∈ n, 1 ≤ n,

where w̄ simply is the string w with every a in w exchanged with ā.   □

By a) – d) of proposition 3.2.14 it makes sense to use the notational convenience of & and ∇ over index sets in the definition of a normal form.

Proposition 3.2.16 For every t ∈ TL there is a normal form t′ ∈ TL such that t ≅ t′.

Proof We can transform t into a normal form t′ by using a) – h) of proposition 3.2.14. First we use e) and f) to get all &'s of t out to the outermost level, so that we obtain a t″ = &_{j∈n} t_j, where each t_j is built from ∇, prefixes ā (a ∈ Act) and ⊤, for j ∈ n. Then for every j ∈ n we transform t_j by g) into t′_j = ∇_{k∈n_j} w̄_{jk}⊤. By the congruence h) it should be clear that t′ = &_{j∈n} t′_j ≅ t.   □
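The proof of Proposition 3.2.16 is effectively an algorithm, and it can be sketched over the Test type from before. Here a normal form is represented as a conjunction (outer list) of disjunctions (inner lists) of strings over Act, each string w standing for w̄⊤; the representation, the name normalize and the reading of ⊤ as the empty-string test are assumptions of this sketch.

```haskell
type NF = [[[Act]]]   -- read as  &_j ( ∇_k  w̄_jk ⊤ )

normalize :: Test -> NF
normalize Top         = [[[]]]                                   -- ⊤ as the empty string test
normalize (TBar a t)  = map (map (a :)) (normalize t)            -- f), g): push ā inwards
normalize (TAnd t t') = normalize t ++ normalize t'              -- a), c): flatten the conjunction
normalize (TOr  t t') = [ d ++ d' | d  <- normalize t            -- e): distribute ∇ over &
                                  , d' <- normalize t' ]
```

For instance, normalize applied to exT from before yields [[["a","b"]],[["b","a"]]], i.e., ā.b̄.⊤ & b̄.ā.⊤ is already in normal form.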

By inspection of definition 3.2.9 the following corollary is evident.

Corollary 3.2.17 For all p ∈ CL, w ∈ W we have:

  ∃q. p −w̄→ q  iff  w̄⊤ ∈ TL and (w̄⊤, p) →∗ ⊤

Proposition 3.2.18 Let t be on the normal form &_{j∈n} ( ∇_{k∈n_j} w̄_{jk}⊤ ). Then for all p ∈ CL:

  (t, p) →∗ ⊤  iff  ∀j ∈ n ∃k ∈ n_j. (w̄_{jk}⊤, p) →∗ ⊤

Proof Follows immediately from lemma 3.2.11.   □
Proof Follows immediately from lemma 3.2.11. 2