
$(\bar{t}, \bar{t}'), (\bar{r}, \bar{r}')$ in $p$ have $\mathit{tgt}(\bar{t}) = \mathit{tgt}(\bar{r})$ and $\mathit{tgt}(\bar{t}') = \mathit{tgt}(\bar{r}')$. By the pigeonhole principle this modification of $\Gamma(X, X')$ would bound the length of any play by $2 \cdot |S| \cdot |S'| + 1$, where $|S|$ and $|S'|$ are the numbers of states in $X$ and $X'$, respectively. Furthermore, it is quite easy to see that the characterization result also holds for the modified game.

3.3 Allowing Opponent to Backtrack

Throughout this paper we shall take “backtrack” to mean trace backwards within the present observation.

With this interpretation, backtracking in an ordinary transition system means to trace backwards along the transition sequence observed. In terms of games, we can express this by allowing Opponent to do backwards moves like

$(\bar{t}t,\, \bar{t}'t') \;\longrightarrow\; (B,\, \bar{t},\, \bar{t}'t'),$

where the B is a directive to Player to play backwards on the longer of the sequences. Player must match with the move

$(B,\, \bar{t},\, \bar{t}'t') \;\longrightarrow\; (\bar{t},\, \bar{t}').$

It is easy to see that these additional rules do not give Opponent more opportunities to beat Player, nor the other way around.

Proposition 3.3 Two transition systems are bisimilar iff Player has a winning strategy in their associated game with backtracking.

Backtracking in an independence model is much more interesting. Consider a simple transition system with independence $X$ consisting of a single independence square. Since $i \xrightarrow{a} s_1$ and $s_1 \xrightarrow{b} u$ are independent, the sequence $\bar{t} = i \xrightarrow{a} s_1 \xrightarrow{b} u$ in $X$ represents the observation "a and b in parallel". Another representative of this observation is the sequence $i \xrightarrow{b} s_2 \xrightarrow{a} u$, so this gives us two ways to backtrack within $\bar{t}$: either along $s_1 \xrightarrow{b} u$, leaving behind the sequence $i \xrightarrow{a} s_1$, or along $s_2 \xrightarrow{a} u$, leaving behind the sequence $i \xrightarrow{b} s_2$.

In terms of the net representation of $a \parallel b$ from Section 1 this amounts to the following: After firing the a-transition followed by the b-transition you may naturally backtrack on the a-transition, since the firing of the b-transition has in no way affected the post-conditions of the a-transition.

In the above example the event represented by $i \xrightarrow{a} s_1$ has an occurrence, namely $s_2 \xrightarrow{a} u$, ending in $u$. We say that $i \xrightarrow{a} s_1$ is backwards enabled in the sequence $i \xrightarrow{a} s_1 \xrightarrow{b} u$. In general, a transition $t_i$ of a sequence $\bar{t}$ is said to be backwards enabled iff it can, by repeated use of Axiom 2 of Definition 2.2, be "pushed to the last position in $\bar{t}$". By Axiom 3, this is equivalent to requiring $t_i$ to be independent of all transitions $t_j$ in $\bar{t}$ with $j > i$. This leads to the following formal definition.

Definition 3.4 For $\bar{t} = t_0 \cdots t_{n-1}$, a sequence in a transition system with independence $X$, and $i \in \{0, \ldots, n-1\}$, we define

$t_i \in \mathit{BEn}(\bar{t})$ iff $\forall j \in \{i+1, \ldots, n-1\}.\; t_j \mathrel{I} t_i$,

where $I$ is the independence relation in $X$. If $t_i \in \mathit{BEn}(\bar{t})$ we define $\delta(i, \bar{t})$ to be the result of deleting the event corresponding to $t_i$, i.e.

$\delta(i, \bar{t}) = t_0 \cdots t_{i-1} s_{i+1} \cdots s_{n-1},$

where the transitions $s_{i+1}, \ldots, s_{n-1}$ correspond to $t_{i+1}, \ldots, t_{n-1}$ as in the following figure, in which the squares are the unique completions defined in Section 2. 2
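
To make Definition 3.4 concrete, here is a minimal Python sketch of BEn and δ over sequences of transitions. It assumes a transition is a (source, label, target) triple, that the independence relation is supplied as a predicate, and that the unique square completions of Section 2 are supplied by a helper `swap`; all of these names are ours, not the paper's.

```python
from typing import Callable, List, Tuple

Transition = Tuple[str, str, str]   # (source state, label, target state)

def backwards_enabled(i: int, seq: List[Transition],
                      independent: Callable[[Transition, Transition], bool]) -> bool:
    """t_i is backwards enabled in seq iff it is independent of every later transition."""
    return all(independent(seq[j], seq[i]) for j in range(i + 1, len(seq)))

def delta(i: int, seq: List[Transition],
          independent: Callable[[Transition, Transition], bool],
          swap: Callable[[Transition, Transition], Tuple[Transition, Transition]]
          ) -> List[Transition]:
    """Delete the event of t_i by pushing it past the later transitions, one
    independence square at a time.  swap(t, u) must return the opposite sides
    (u', t') of the square on t followed by u; it stands for the unique square
    completion of Section 2 and is assumed to be given."""
    if not backwards_enabled(i, seq, independent):
        raise ValueError("t_i is not backwards enabled")
    prefix, current, rest = seq[:i], seq[i], seq[i + 1:]
    shifted = []
    for t in rest:                       # push the event of t_i one step to the right
        t_shifted, current = swap(current, t)
        shifted.append(t_shifted)
    return prefix + shifted              # the final occurrence of the event is dropped
```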

The backtracking game on transition systems with independence is a simple extension of the previously defined (forward) game. By introducing rules like

$(\bar{t}, \bar{t}') \;\longrightarrow\; (i, \delta(i,\bar{t}), \bar{t}')$ if $t_i \in \mathit{BEn}(\bar{t})$


we allow Opponent to backtrack on transitions which are backwards enabled.

The index i is a request to Player to play backwards on the i’th transition of the longer of the sequences. So the only way Player can respond to such moves is to use the rule

$(i, \delta(i,\bar{t}), \bar{t}') \;\longrightarrow\; (\delta(i,\bar{t}), \delta(i,\bar{t}'))$ if $t'_i \in \mathit{BEn}(\bar{t}'),$

provided, of course, that $t'_i$ is backwards enabled in $\bar{t}'$. Formally, we define the backtracking game $\Gamma(X, X')$ on transition systems with independence $X$ and $X'$ to be the structure $(C, c_0, \longrightarrow, \lambda, W)$:

$C = \omega \times \mathit{Seqs}(X) \times \mathit{Seqs}(X') \;\cup\; \mathit{Seqs}(X) \times \mathit{Seqs}(X')$. Conventionally, writing configurations $(i, \delta(i,\bar{t}), \bar{t}')$ and $(i, \bar{t}, \delta(i,\bar{t}'))$ implicitly means that $|\bar{t}| = |\bar{t}'|$.

$c_0 = (\varepsilon, \varepsilon)$

$\lambda : \mathit{Pos}(\Gamma(X, X')) \rightarrow \{O, P\}$ is defined by taking $\lambda(\bar{t}, \bar{t}') = O$ and $\lambda(\bar{t}t, \bar{t}') = \lambda(\bar{t}, \bar{t}'t') = \lambda(i, \delta(i,\bar{t}), \bar{t}') = \lambda(i, \bar{t}, \delta(i,\bar{t}')) = P$.

$W = \emptyset$.

$\longrightarrow \;\subseteq C^2$ is defined by the following rules:

$(\bar{t}, \bar{t}') \longrightarrow (\bar{t}t, \bar{t}')$ if $\bar{t}t \in \mathit{Seqs}(X)$
$(\bar{t}, \bar{t}') \longrightarrow (\bar{t}, \bar{t}'t')$ if $\bar{t}'t' \in \mathit{Seqs}(X')$

$(\bar{t}t, \bar{t}') \longrightarrow (\bar{t}t, \bar{t}'t')$ if $\bar{t}'t' \in \mathit{Seqs}(X')\ \&\ \ell(t') = \ell(t)$
$(\bar{t}, \bar{t}'t') \longrightarrow (\bar{t}t, \bar{t}'t')$ if $\bar{t}t \in \mathit{Seqs}(X)\ \&\ \ell(t) = \ell(t')$

$(\bar{t}, \bar{t}') \longrightarrow (i, \delta(i,\bar{t}), \bar{t}')$ if $t_i \in \mathit{BEn}(\bar{t})$
$(\bar{t}, \bar{t}') \longrightarrow (i, \bar{t}, \delta(i,\bar{t}'))$ if $t'_i \in \mathit{BEn}(\bar{t}')$
$(i, \delta(i,\bar{t}), \bar{t}') \longrightarrow (\delta(i,\bar{t}), \delta(i,\bar{t}'))$ if $t'_i \in \mathit{BEn}(\bar{t}')$
$(i, \bar{t}, \delta(i,\bar{t}')) \longrightarrow (\delta(i,\bar{t}), \delta(i,\bar{t}'))$ if $t_i \in \mathit{BEn}(\bar{t})$
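
Spelled out operationally, the rules above determine which moves each side has from a configuration. The sketch below (using the same assumed interface as in the earlier sketch: extensions, label, ben, delta on each system; the tagging of pending backtrack requests is ours) enumerates Opponent's moves and Player's admissible answers:

```python
def opponent_moves(X, X2, t, t2):
    """From an O-position (t, t2): extend either sequence forwards, or request a
    backtrack on a backwards enabled index of either sequence."""
    moves = []
    moves += [("L", t + (u,), t2) for u in X.extensions(t)]              # (t,t') -> (tu, t')
    moves += [("R", t, t2 + (u,)) for u in X2.extensions(t2)]            # (t,t') -> (t, t'u)
    moves += [("BL", i, X.delta(i, t), t2)
              for i in range(len(t)) if X.ben(i, t)]                     # backtrack on the left
    moves += [("BR", i, t, X2.delta(i, t2))
              for i in range(len(t2)) if X2.ben(i, t2)]                  # backtrack on the right
    return moves

def player_responses(X, X2, move):
    """Player answers a forward move with an equally labelled transition on the other
    side, and a backtrack request by deleting the same index on the other side."""
    kind = move[0]
    if kind == "L":
        _, t, t2 = move
        return [(t, t2 + (u,)) for u in X2.extensions(t2)
                if X2.label(u) == X.label(t[-1])]
    if kind == "R":
        _, t, t2 = move
        return [(t + (u,), t2) for u in X.extensions(t)
                if X.label(u) == X2.label(t2[-1])]
    if kind == "BL":                     # here t is already delta(i, original left sequence)
        _, i, t, t2 = move
        return [(t, X2.delta(i, t2))] if X2.ben(i, t2) else []
    _, i, t, t2 = move                   # "BR": the symmetric case
    return [(X.delta(i, t), t2)] if X.ben(i, t) else []
```

An empty list of responses means Player is stuck, which is exactly what Opponent exploits in Example 3.6 below.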

Definition 3.5 Two transition systems with independence $X$ and $X'$ are $\Gamma$-equivalent, written $X \sim_\Gamma X'$, iff Player has a winning strategy in $\Gamma(X, X')$.

2

With a simple example we will now illustrate how backtracking distinguishes parallelism from non-deterministic interleaving.

Example 3.6 Consider the transition systems with independence representing the CCS processes $a \parallel b$ and $a.b + b.a$:

These systems are interleaving bisimilar but not Γ-equivalent, as we are able to define a winning counter-strategy τ as follows (here, $p$ ranges over all appropriate positions):

$\tau(\varepsilon, \varepsilon) = (i \xrightarrow{a} s_1,\ \varepsilon)$
$\tau(p \cdot (i \xrightarrow{a} s_1,\ i' \xrightarrow{a} s'_1)) = (i \xrightarrow{a} s_1 \xrightarrow{b} u,\ i' \xrightarrow{a} s'_1)$
$\tau(p \cdot (i \xrightarrow{a} s_1 \xrightarrow{b} u,\ i' \xrightarrow{a} s'_1 \xrightarrow{b} u')) = (0,\ i \xrightarrow{b} s_2,\ i' \xrightarrow{a} s'_1 \xrightarrow{b} u')$

The point is, of course, that Player is unable to backtrack on index 0 in the sequence $i' \xrightarrow{a} s'_1 \xrightarrow{b} u'$, as these transitions are dependent. 2

Distinguishing the transition systems with independence of Example 3.6 is, in fact, a minimum demand on any reasonable generalization of bisimulation to independence models. And following the reasoning of Example 3.6, the reader should not be surprised that backtracking may be used by Opponent to detect the partial order structures of configurations. However, it is more surprising that Γ-equivalence coincides exactly with the abstractly derived PomL-bisimilarity of Section 2. As a more interesting example, the reader may check that Opponent has a winning counter-strategy in the game associated with the systems of Example 2.5.
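
As a concrete check of the distinguishing move in Example 3.6, the tiny self-contained snippet below (identifying transitions with their labels, which is harmless here since neither system is auto-concurrent; all names are ours) tests whether index 0 is backwards enabled after playing a and then b in each system:

```python
a_then_b = ["a", "b"]                                      # the played sequence, as labels
independent_par   = lambda x, y: {x, y} == {"a", "b"}      # a || b : a and b are independent
independent_inter = lambda x, y: False                     # a.b + b.a : no independence

def backwards_enabled(i, seq, independent):
    return all(independent(seq[j], seq[i]) for j in range(i + 1, len(seq)))

print(backwards_enabled(0, a_then_b, independent_par))     # True : Opponent's request is legal
print(backwards_enabled(0, a_then_b, independent_inter))   # False: Player has no answer
```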

Theorem 3.7 Γ-equivalence coincides with PomL-bisimilarity.

Proof See Appendix B. 2

We also have a relational characterization of PomL-bisimilarity.

Definition 3.8 A relation $T \subseteq \mathit{Seqs}(X) \times \mathit{Seqs}(X')$ is a δ-bisimulation between $X$ and $X'$ iff it satisfies the following axioms:


$A_{\mathit{init}}$: $\;\varepsilon \mathrel{T} \varepsilon$

$A_{\mathit{bisim}}$:

1. $\bar{t} \mathrel{T} \bar{t}' \;\&\; \bar{t}t \in \mathit{Seqs}(X) \;\Rightarrow\; \exists t'.\,(\ell'(t') = \ell(t) \;\&\; \bar{t}t \mathrel{T} \bar{t}'t')$
2. $\bar{t} \mathrel{T} \bar{t}' \;\&\; \bar{t}'t' \in \mathit{Seqs}(X') \;\Rightarrow\; \exists t.\,(\ell(t) = \ell'(t') \;\&\; \bar{t}t \mathrel{T} \bar{t}'t')$

$A_{\delta}$: $\bar{t} \mathrel{T} \bar{t}' \;\Rightarrow\; \bigl((t_i \in \mathit{BEn}(\bar{t}) \Leftrightarrow t'_i \in \mathit{BEn}(\bar{t}')) \;\&\; (t_i \in \mathit{BEn}(\bar{t}) \Rightarrow \delta(i,\bar{t}) \mathrel{T} \delta(i,\bar{t}'))\bigr)$ for all $i$

Two transition systems with independence $X$ and $X'$ are δ-bisimilar, written $X \sim_\delta X'$, iff there exists a δ-bisimulation between them. 2

The axiom $A_{\mathit{bisim}}$ is the usual "bisimulation axiom", here formulated on sequences rather than states. So this formulation is also a simple extension of a well-known concept.
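
For finite systems the axioms of Definition 3.8 can be checked directly on a candidate relation. The sketch below is only an illustration of the axioms, not an algorithm from the paper; it represents sequences as tuples of transitions, T as a set of pairs, and assumes the same interface (extensions, label, ben, delta) as in the earlier sketches.

```python
def is_delta_bisimulation(T, X, X2):
    if ((), ()) not in T:                                   # A_init: the empty sequences are related
        return False
    for (t, t2) in T:
        # A_bisim, clause 1: every left extension is matched by an equally labelled right one.
        for u in X.extensions(t):
            if not any(X2.label(u2) == X.label(u) and (t + (u,), t2 + (u2,)) in T
                       for u2 in X2.extensions(t2)):
                return False
        # A_bisim, clause 2: the symmetric matching of right extensions.
        for u2 in X2.extensions(t2):
            if not any(X.label(u) == X2.label(u2) and (t + (u,), t2 + (u2,)) in T
                       for u in X.extensions(t)):
                return False
        # A_delta: backwards enabling agrees index by index, and deletion preserves T
        # (related sequences have equal length, so one range suffices).
        for i in range(len(t)):
            if X.ben(i, t) != X2.ben(i, t2):
                return False
            if X.ben(i, t) and (X.delta(i, t), X2.delta(i, t2)) not in T:
                return False
    return True
```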

Theorem 3.9 δ-equivalence coincides with PomL-bisimilarity.

Proof See Appendix B. 2

4 A Path Logic

Just as interleaving bisimulations can be characterized as a relation over paths, we can interpret the Hennessy-Milner logic over paths rather than states. Following [HS85], we may add a past tense modality °a, where a is a label, and obtain a logic which still characterizes bisimulation for ordinary transition systems. However, interpreted over transition systems with independence, we obtain a logic which is easily shown to be sound for PomL-bisimilarity. Furthermore, the logic is complete if we restrict to systems which do not exhibit auto-concurrency, i.e. systems in which no two consecutive and equally labelled transitions are independent.
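
As a small illustration of this side condition, one way (ours, not the paper's) to test a finite transition system with independence for auto-concurrency, with transitions given as (source, label, target) triples and independence as a predicate:

```python
def auto_concurrent(transitions, independent):
    """True iff some two consecutive, equally labelled transitions are independent."""
    return any(independent(t, u) and t[1] == u[1]           # equal labels
               for t in transitions
               for u in transitions
               if t[2] == u[0])                             # consecutive: u starts where t ends
```

A system is covered by the completeness result below exactly when this returns False.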

Let Assn be the following language of assertions:

$A ::= \neg A \;\mid\; \bigwedge_{j \in J} A_j \;\mid\; \langle a \rangle A \;\mid\; \circ_a A.$

By convention, true is the conjunction over the empty set.

Definition 4.1 Let $X$ be a transition system with independence and suppose $\bar{t}, \bar{r} \in \mathit{Seqs}(X)$. Define

In ordinary transition systems, °a A is interpreted as "it was the case at the last moment, just before a, that A". It seems natural for transition systems with independence to interpret °a A as "a could have been the last action, and at the moment before a it was the case that A".

Formally, let $X$ be a transition system with independence and define the satisfaction relation ${\models_X} \subseteq \mathit{Seqs}(X) \times \mathit{Assn}$ by structural induction on assertions:
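
The clauses themselves are not reproduced above, so the evaluator below is only a plausible reading, pieced together from the prose gloss of °a and from the indexed clause given at the end of this section; the assertion datatype, the TSI interface (extensions, label, ben, delta) and the convention that a system satisfies A when its empty sequence does are all our assumptions.

```python
from dataclasses import dataclass

@dataclass
class Neg:                  # negation
    body: object

@dataclass
class Conj:                 # finite conjunction; the empty tuple plays the role of `true`
    parts: tuple

@dataclass
class Diam:                 # <a>A : the path can be extended by an a-labelled transition
    label: str
    body: object

@dataclass
class Back:                 # backward modality: a could have been the last action
    label: str
    body: object

def sat(X, t, A) -> bool:
    """t |= A for a sequence t of X, under the reading sketched above."""
    if isinstance(A, Neg):
        return not sat(X, t, A.body)
    if isinstance(A, Conj):
        return all(sat(X, t, B) for B in A.parts)
    if isinstance(A, Diam):
        return any(X.label(u) == A.label and sat(X, t + (u,), A.body)
                   for u in X.extensions(t))
    # Back: some backwards enabled transition with the right label can be deleted,
    # and A holds of the shorter sequence that remains.
    return any(X.label(t[i]) == A.label and sat(X, X.delta(i, t), A.body)
               for i in range(len(t)) if X.ben(i, t))

def sat_system(X, A) -> bool:
    """Satisfaction by the system itself, read as satisfaction at the empty sequence."""
    return sat(X, (), A)
```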

Definition 4.2 Two transition systems with independence $X$ and $X'$ are Assn-equivalent iff they satisfy the same assertions, i.e. iff

$\forall A \in \mathit{Assn}.\,({\models_X} A \;\Leftrightarrow\; {\models_{X'}} A).$

2

For ordinary transition systems (without independence) this logic is characteristic for bisimulation.

Theorem 4.3 Two transition systems are bisimilar iff they are Assn-equivalent.

Proof See [HS85]. 2

Example 4.4 To see the logic in action on transition systems with independence, let us return to Example 2.5. An assertion distinguishing the two systems is $\langle a \rangle(\langle c \rangle \mathit{true} \wedge \langle b \rangle \circ_a \langle d \rangle \mathit{true})$. This assertion is satisfied by the right-hand system, but not by the left-hand system. 2


As usual, the soundness proof is straightforward.

Proposition 4.5 If two transition systems with independence are PomL-bisimilar, then they are also Assn-equivalent.

Proof See Appendix C. 2

Restriction to systems without auto-concurrency is essential for completeness.

Example 4.6 Consider two systems $X$ and $X'$ which are identical except that the square in $X$ is an independence square, whereas the square in $X'$ is not. These systems satisfy the same assertions,

but are certainly not PomL-bisimilar. 2

Proposition 4.7 If two non-auto-concurrent transition systems with independence are Assn-equivalent, then they are also PomL-bisimilar.

Proof See Appendix C. 2

As mentioned in the introduction, the restriction to systems without auto-concurrency has to do with the logic being based on labels. Replacing the backwards modalities °a, where a is a label, with modalities °i, where i is an index, and defining

$\bar{t} \models_X \circ_i A$ iff $t_i \in \mathit{BEn}(\bar{t}) \;\&\; \delta(i, \bar{t}) \models_X A,$

we obtain a logic which is complete for PomL-bisimilarity without restrictions.

5 Conclusion

We have given concrete characterizations of PomL-bisimilarity on transition systems with independence. Our characterizations are easy to understand and appear as conservative extensions of the corresponding characterizations of interleaving bisimulation.

The present work leaves open the decidability of PomL-bisimilarity for finite state systems. One approach would be to look for a set W of winning positions, generalizing the notion of duplicates in our argument for decidability of bisimulation for ordinary transition systems. However, it is not quite clear what the appropriate generalization should be in the setting of transition systems with independence.


References

[AJ92] S. Abramsky and R. Jagadeesan. Games and Full Completeness for Multiplicative Linear Logic. Technical Report DoC 92/24, Imperial College of Science, Technology and Medicine, 1992.

[Bed88] M. A. Bednarczyk. Categories of asynchronous systems. PhD thesis, University of Sussex, 1988. Technical report no. 1/88.

[Bed91] M. A. Bednarczyk. Hereditary History Preserving Bisimulations. Unpublished draft, 1991.

[Ben84] J. van Benthem. Correspondence theory. In D. Gabbay and F. Guenthner, editors, Handbook of Philosophical Logic, volume 2. Reidel, 1984.

[GG89] R. van Glabbeek and U. Goltz. Equivalence Notions for Concurrent Systems and Refinement of Actions. In MFCS '89. Springer-Verlag LNCS 379, 1989.

[GKP92] U. Goltz, R. Kuiper, and W. Penczek. Propositional Temporal Logics and Equivalences. In Concur '92. Springer-Verlag LNCS 630, 1992.

[HM85] M. C. Hennessy and A. J. R. G. Milner. Algebraic Laws for Nondeterminism and Concurrency. Journal of the ACM, 32(1), 1985.

[HS85] M. Hennessy and C. P. Stirling. The power of the future perfect in program logics. In A. R. Meyer, editor, Information and Control, volume 67. Academic Press, Inc., 1985.

[JNW93] A. Joyal, M. Nielsen, and G. Winskel. Bisimulation and open maps. In LICS '93, 1993. To appear in Information and Computation.

[Mil89] A. J. R. G. Milner. Communication and Concurrency. Prentice Hall, 1989.

[Old91] E.-R. Olderog. Nets, Terms and Formulas. Cambridge University Press, 1991.

[Par81] D. M. R. Park. Concurrency and Automata on Infinite Sequences. In Theoretical Computer Science, 5th GI-Conference. Springer-Verlag LNCS 104, 1981.

[Pra86] V. R. Pratt. Modelling concurrency with partial orders. International Journal of Parallel Programming, 15(1), 1986.

[RT88] A. Rabinovich and B. Trakhtenbrot. Behaviour structures and nets. Fundamenta Informaticae, 11(4), 1988.

[Shi85] M. W. Shields. Concurrent machines. Computer Journal, 88, 1985.

[Sti93] C. Stirling. Modal and Temporal Logics for Processes. Notes for Summer School in Logic Methods in Concurrency, Department of Computer Science, Aarhus University, 1993.

[WN94] G. Winskel and M. Nielsen. Models for Concurrency. In S. Abramsky and D. Gabbay, editors, Handbook of Logic in Computer Science, volume 3. Oxford University Press, 1994.


A Every Game Has a Winner

Proposition A.1 (Proposition 3.1 restated) For any game, there is a winning strategy iff there is no winning counter-strategy.

Proof Let Γ be a game. Obviously, if there is a winning strategy in Γ, there can be no winning counter-strategy.

For the converse direction, assume there is no winning counter-strategy.

Say τ is a winning counter-strategy from (wcsf) $p \in \mathit{Pos}(\Gamma)$ iff $\lambda(p) = O \;\&\; \forall \sigma.\,(p \in \mathit{Plays}(\sigma, \tau) \Rightarrow \langle\sigma \mid \tau\rangle \text{ is lost})$.

Suppose now that $\lambda(p) = O$. It can then be shown that if Opponent has no winning counter-strategy from $p$, then

$\forall p \cdot c \in \mathit{Pos}(\Gamma).\;\bigl(p \cdot c \in W, \text{ or } \exists p \cdot c \cdot c' \in \mathit{Pos}(\Gamma).\,(\text{O has no wcsf } p \cdot c \cdot c')\bigr).$

Using this property, we define a strategy σ as follows. Whenever Opponent has no winning counter-strategy from $p \in \mathit{Pos}(\Gamma)$, and $p \cdot c \in \mathit{Pos}(\Gamma) \setminus W$, we choose a configuration $c'$ such that $p \cdot c \cdot c' \in \mathit{Pos}(\Gamma)$ and Opponent has no winning counter-strategy from $p \cdot c \cdot c'$. Define $\sigma(p \cdot c) = c'$.

By definition, σ is a strategy. In order to argue that σ is winning, let τ be an arbitrary counter-strategy. Inductively in the length of p, it is easy to show the following property:

(Notice that our assumption that Opponent has no winning counter-strategy amounts to the fact that Opponent has no winning counter-strategy from the initial configuration $c_0$.) But then, since σ is defined whenever it is Player's turn to move, the play $\langle\sigma \mid \tau\rangle$ must be won. This finishes the proof of the proposition. 2
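
The proposition holds for arbitrary games. On a finite game graph the dichotomy can also be computed directly; the following sketch is a standard least-fixed-point computation and not the construction used in the proof above. It assumes, consistently with the remark that a play in which Player is never stuck is won, that with W = ∅ a play is lost for Player exactly when it ends with Player unable to move; the graph encoding (owner and succ dictionaries) is ours.

```python
def opponent_winning_region(positions, succ, owner):
    """Positions from which Opponent can force a finite play that strands Player.
    owner[p] is 'O' or 'P'; succ[p] lists the successor positions of p."""
    win = set()
    changed = True
    while changed:
        changed = False
        for p in positions:
            if p in win:
                continue
            if owner[p] == 'O':
                good = any(c in win for c in succ[p])    # Opponent can pick a winning move
            else:
                good = all(c in win for c in succ[p])    # Player is stuck (empty succ) or
                                                         # every answer stays in the region
            if good:
                win.add(p)
                changed = True
    return win

# Player has a winning strategy from the initial configuration c0
# iff c0 does not lie in opponent_winning_region(...).
```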

B Proofs From Section 3

As an intermediate step we introduce yet another equivalence, denoted iso-δ-bisimilarity. The main theorems 3.7 and 3.9 will be established by completing the following picture, in which $\hookrightarrow$ denotes set inclusion:

The inclusions are labelled with references to corresponding propositions.

B.1 From δ-bisimilarity to Γ-equivalence

Proposition B.1 $\sim_\delta \;\subseteq\; \sim_\Gamma$.

Proof Let $T \subseteq \mathit{Seqs}(X) \times \mathit{Seqs}(X')$ be a δ-bisimulation between $X$ and $X'$, and define a partial function $\sigma : \mathit{Pos}(\Gamma(X, X')) \rightarrow C$ in the following way:

Whenever $\bar{t} \mathrel{T} \bar{t}'$ and $\bar{t}t \in \mathit{Seqs}(X)$, choose $t'$ such that $\bar{t}'t' \in \mathit{Seqs}(X')$ and $\bar{t}t \mathrel{T} \bar{t}'t'$. Then, for all $p \cdot (\bar{t}t, \bar{t}') \in \mathit{Pos}(\Gamma(X, X'))$, define

$\sigma(p \cdot (\bar{t}t, \bar{t}')) = (\bar{t}t, \bar{t}'t').$

In a similar way, σ is defined to respond to moves on the right-hand side.

Whenever $\bar{t} \mathrel{T} \bar{t}'$ and $t_i \in \mathit{BEn}(\bar{t})$, we know, since $T$ satisfies $A_\delta$, that $t'_i \in \mathit{BEn}(\bar{t}')$ and $\delta(i,\bar{t}) \mathrel{T} \delta(i,\bar{t}')$. For all $p \cdot (i, \delta(i,\bar{t}), \bar{t}') \in \mathit{Pos}(\Gamma(X, X'))$ we define

$\sigma(p \cdot (i, \delta(i,\bar{t}), \bar{t}')) = (\delta(i,\bar{t}), \delta(i,\bar{t}')).$

Here, too, there is a symmetric definition.

By construction, σ is a strategy. We now argue that σ is winning: For all counter-strategies τ it follows by induction in the length of p that

$p \cdot (\bar{t}, \bar{t}') \in \mathit{Plays}(\sigma, \tau) \;\&\; \lambda(p \cdot (\bar{t}, \bar{t}')) = O \;\Rightarrow\; \bar{t} \mathrel{T} \bar{t}'.$

Hence, from the construction it follows that $\langle\sigma \mid \tau\rangle$ is won. 2
