CounterExample-Guided Abstraction Refinement

(CEGAR)

Martin Fränzle^a

(with many slides © S. Ratschan^b)

^a Carl von Ossietzky Universität, Oldenburg, Germany

^b Czech Academy of Sciences, Prague, Czech Republic


The problem

Abstraction is a powerful method for verifying systems

maps complex system (e.g., infinite state) to simpler system (e.g., finite Kripke structure)

simpler model may be amenable to automatic state-exploratory verification

but finding the right abstraction is hard

may be too coarse ⇒ verification fails

may be too fine ⇒ state-space exploration impossible

may even be too fine in some places and too coarse in others


The idea

In manual verification, we often add information on demand:

Upon a failing proof, we analyze the reasons and add preconditions as necessary.

Can we do the same within abstraction-based model checking?

Upon a failing proof, let the model-checker analyze the reasons and refine the abstraction as necessary.


Abstraction Refinement

Idea:

conservatively approximate the hybrid system by a finite Kripke structure (the abstraction)

if the abstraction is safe, done

while the abstraction is not safe, refine it

counterexample-based: refine so as to remove a given spurious counterexample (Clarke et al. 03, Alur et al. 03)


Basic CEGAR


Spurious counterexample

Def: Let A ≻ C be a homomorphic abstraction w.r.t. abstraction function h. Let φ be an ∀CTL formula and π = (c₁, c₂, …) be an anchored path of C witnessing violation of φ on C. Then π is called a counterexample for φ on C.

Furthermore, h(π) = (h(c₁), h(c₂), …) then is an anchored path of A which violates φ, i.e., a counterexample on A. We then call h(π) the abstract counterexample corresponding to π, and we call π the concrete counterexample corresponding to h(π).

Def: If π_A is a counterexample on the abstraction A ≻ C which has no corresponding concrete counterexample on C, then we call π_A a spurious counterexample.


Abstraction Refinement

Def: If C ≺ A′ ≺ A, then A and A′ are called abstractions of C, and A′ is called an abstraction refinement of A.

Idea: Whenever there is a spurious counterexample in A, identify an abstraction refinement A′ that lacks that particular spurious counterexample.


CEGAR algorithm (simple version: invariants)

To verify C ⊨ AG p, do

1. build a finite Kripke structure A ≻ C,
2. model-check A ⊨ AG p,

3. if this holds then report C ⊨ AG p and stop,

4. otherwise validate the counterexample on C, i.e., try to find a corresponding concrete counterexample,

5. if a corresponding concrete counterexample exists then report C ⊭ AG p and stop,

6. otherwise use the spurious counterexample to refine A and restart from 2.
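As a minimal sketch of this loop (assuming hypothetical helpers build_abstraction, model_check, concretize, and refine that are not part of the slides), the structure might look like:

```python
# Hedged sketch of the basic CEGAR loop for invariants (C |= AG p).
# build_abstraction, model_check, concretize and refine are assumed helpers,
# not an actual API from the slides.

def cegar(concrete_system, p):
    abstraction = build_abstraction(concrete_system)        # step 1: A ≻ C
    while True:
        cex = model_check(abstraction, p)                   # step 2: A |= AG p?
        if cex is None:
            return "C |= AG p"                              # step 3: abstraction safe
        if concretize(concrete_system, cex) is not None:    # step 4: validate counterexample
            return "C does not satisfy AG p"                # step 5: real counterexample
        abstraction = refine(abstraction, cex)              # step 6: refine, goto 2
```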


The crucial ingredients of CEGAR

Model checking,

validation/concretization of counterexample,

guided refinement of abstraction.


Validation of counterexample

Given: A ≻ C and an abstract counterexample π_A = (a₁, a₂, …, a_n) on A.

Alg: Provided we can effectively manipulate pre-images of the abstraction morphism h, proceed as follows:

1. Compute S₁ := h⁻¹(a₁) ∩ I_C, where I_C is the set of initial states of C.

2. For i = 2 to n, compute S_i := h⁻¹(a_i) ∩ Post(S_{i−1}). Abort as soon as some S_i becomes empty.

In this case, the counterexample has been shown to be spurious.

3. In case of proper termination of the loop, the counterexample is real.

N.B.: This assumes that h⁻¹(a_i), Post(S_i), and their intersections are computable (in the sense of an effective emptiness test)!
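A hedged sketch of this validation loop, with set-valued helpers h_inv(a) (pre-image of an abstract state), post(S) (concrete successors) and initial_states assumed to have a finite, set-like representation (none of these names come from the slides):

```python
# Sketch of counterexample validation: S_1 = h^{-1}(a_1) ∩ I_C,
# S_i = h^{-1}(a_i) ∩ Post(S_{i-1}); spurious iff some S_i is empty.

def validate(abstract_cex, h_inv, post, initial_states):
    s = h_inv(abstract_cex[0]) & initial_states
    for a in abstract_cex[1:]:
        if not s:
            return None                      # spurious counterexample
        s = h_inv(a) & post(s)
    return s if s else None                  # non-empty: the counterexample is real
```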


State splitting

Idea: For the set C_i = h⁻¹(a_i) of concrete states represented by an abstract state a_i occurring in the spurious counterexample, split it into

C_i ∩ Post(h⁻¹(a_{i−1})) and C_i \ Post(h⁻¹(a_{i−1})),

provided both are non-empty (or into C₁ ∩ I_C and C₁ \ I_C in case i = 1).

Approach: Replace a_i by two states a_i⁺ and a_i⁻ representing C_i ∩ Post(h⁻¹(a_{i−1})) and C_i \ Post(h⁻¹(a_{i−1})), respectively.

Technique: Replace the Kripke structure A = (V, E, L, I) by A′ = (V′, E′, L′, I′) with

V′ = (V \ {a_i}) ∪ {a_i⁺, a_i⁻}, where the latter are ∉ V,

E′ = (E ∩ (V′ × V′)) ∪ {(a_i⁺, a_i⁻), (a_i⁻, a_i⁺)} ∪ {(a, a_i⁺) | (a, a_i) ∈ E} ∪ {(a, a_i⁻) | (a, a_i) ∈ E, a ≠ a_{i−1}} ∪ {(a_i⁺, a), (a_i⁻, a) | (a_i, a) ∈ E},

L′(v) = L(v) if v ∈ V, L(a_i) if v ∈ {a_i⁺, a_i⁻},

I′ = I if C_i ∩ I_C = ∅, (I \ {a_i}) ∪ {a_i⁺} otherwise.
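A hedged sketch of this splitting step on an explicit Kripke structure, with hypothetical inputs a_prev (the predecessor a_{i−1} in the spurious counterexample) and ai_contains_initial (whether C_i ∩ I_C ≠ ∅); self-loops on a_i are ignored for brevity:

```python
# Sketch of state splitting: a_i is replaced by a_i^+ and a_i^-,
# following the V', E', L', I' construction above.

def split_state(V, E, L, I, a_i, a_prev, ai_contains_initial):
    plus, minus = (a_i, "+"), (a_i, "-")                    # fresh states a_i^+, a_i^-
    V2 = (V - {a_i}) | {plus, minus}
    E2 = {(s, t) for (s, t) in E if a_i not in (s, t)}       # E ∩ (V' × V')
    E2 |= {(plus, minus), (minus, plus)}
    E2 |= {(s, plus) for (s, t) in E if t == a_i and s != a_i}                   # predecessors -> a_i^+
    E2 |= {(s, minus) for (s, t) in E if t == a_i and s not in (a_i, a_prev)}    # all but a_{i-1} -> a_i^-
    E2 |= {(x, t) for (s, t) in E if s == a_i and t != a_i for x in (plus, minus)}  # outgoing edges kept
    L2 = {v: L[v] for v in V if v != a_i}
    L2[plus] = L2[minus] = L[a_i]                            # both inherit a_i's labelling
    I2 = I if not ai_contains_initial else (I - {a_i}) | {plus}
    return V2, E2, L2, I2
```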


Resulting morphism

h′(c) = a_i⁺ if c ∈ C_i ∩ Post(h⁻¹(a_{i−1})),
        a_i⁻ if c ∈ C_i \ Post(h⁻¹(a_{i−1})),
        h(c) otherwise.


Refining E′: transition pruning

Observation: Pre- and post-images of h′⁻¹(a_i⁺) or h′⁻¹(a_i⁻) may well have empty intersections with sets that the pre- or post-set of h⁻¹(a_i) did intersect with.

In such cases, E′ contains spurious edges.

Solution: Remove such edges by pruning E′ to

E″ = {(s, t) ∈ E′ | Post(h′⁻¹(s)) ∩ h′⁻¹(t) ≠ ∅}
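A one-line sketch of this pruning, assuming a decidable emptiness test may_reach(s, t) that stands for Post(h′⁻¹(s)) ∩ h′⁻¹(t) ≠ ∅ (a hypothetical helper):

```python
# Sketch of transition pruning: keep only edges whose concrete counterpart
# is not provably empty.

def prune_transitions(E_prime, may_reach):
    return {(s, t) for (s, t) in E_prime if may_reach(s, t)}     # E''
```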


CEGAR algorithm (simple version: invariants)

To verify C ⊨ AG p, do

1. build a finite Kripke structure A ≻ C,
2. model-check A ⊨ AG p,

3. if this holds then report C ⊨ AG p and stop,

4. otherwise validate the counterexample on C, i.e., try to find a corresponding concrete counterexample,

5. if a corresponding concrete counterexample exists then report C ⊭ AG p and stop,

6. otherwise use the spurious counterexample to split states in A,
7. perform transition pruning on the resulting refinement A′,

8. goto 2.

The concrete version is just an example; variants of the split/prune rules abound.


Application to hybrid systems

The above procedure is effective if h⁻¹(a_i), Post(S_i), and their intersections are computable (in the sense of an effective emptiness test).

This is in general not true for hybrid systems.

⇒ We need to embed an appropriate form of approximation of the above sets into CEGAR.


CEGAR on hybrid states

Conservative approximation of state sets


Application to hybrid systems

The “naive” CEGAR procedure is effective if h⁻¹(a_i), Post(S_i), and their intersections are computable (in the sense of an effective emptiness test).

This is in general not true for hybrid systems; thus we embed an appropriate form of approximation of the above sets into CEGAR.

The main difficulty is the computation of successor states: explicit transitions (jumps) and implicit transitions (flows, defined by ODEs).

Multiple shapes of overapproximation can be used:

various effective representations of subsets of ℝⁿ: rectangular boxes, zonotopes, polyhedra, ellipsoids, …,

multiple techniques for conservatively approximating hybrid transitions (jumps & flows).


Computing successors

The CEGAR algorithm applies different approximations of successor computation in sequence:

it proceeds from coarse to fine, investing more computational effort to increase precision only when necessary;

the hope is that the crucial deductions (absence of counterexamples, non-concretizability of a certain counterexample) can often be obtained on coarse abstractions.

CEGAR needs to compute various relative successors Succ(X, Y) = Post(X) ∩ Y, where X, Y ∈ P(ℝⁿ).

These can be approximated by any operation SUCC : P(ℝⁿ) × P(ℝⁿ) → P(ℝⁿ) with

1. Overapproximation: SUCC(X, Y) ⊇ Post(X) ∩ Y,
2. Reasonability: SUCC(X, Y) ⊆ Y.


Validation of counterexample

Given: A ≻ C and an abstract counterexample π_A = (a₁, a₂, …, a_n) on A.

Alg: For a sequence of successively tighter overapproximations (SUCC_i)_{i=1,…,k}, proceed as follows:

1. Start with i = 1, i.e., the coarsest approximation.

2. Compute S^i_1 := overapprox_i(h⁻¹(a₁) ∩ I_C), where I_C is the set of initial states of C.

3. For j = 2 to n, compute S^i_j := SUCC_i(S^i_{j−1}, h⁻¹(a_j)). Abort as soon as some S^i_j becomes empty.

In this case, the counterexample is spurious.

4. In case of proper termination of the inner loop, restart at 2. with i := i + 1, i.e., the next finer approximation, if i < k.
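A hedged sketch of this tiered validation, with a list succs of relative-successor overapproximations ordered from coarse to fine and hypothetical helpers overapprox(i, S), h_inv and initial_states (none of these names are from the slides):

```python
# Sketch: try to refute the abstract counterexample with increasingly precise SUCC_i.
# If even the finest overapproximation does not yield an empty set, the counterexample
# is not refuted (it may still be spurious, since each SUCC_i only overapproximates).

def validate_tiered(abstract_cex, succs, overapprox, h_inv, initial_states):
    for i, succ in enumerate(succs):                       # coarse to fine
        s = overapprox(i, h_inv(abstract_cex[0]) & initial_states)
        for a in abstract_cex[1:]:
            if not s:
                return "spurious"
            s = succ(s, h_inv(a))                          # S_j = SUCC_i(S_{j-1}, h^{-1}(a_j))
        if not s:
            return "spurious"
    return "not refuted"
```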


HSolver

Overapproximation via Constraint-based Reasoning

Stefan Ratschan, Czech Academy of Sciences
Zhikun She, MPII, Saarbrücken


Starting Point: Interval Grid Method

Stursberg/Kowalewski et al., one-mode case:

put transitions between all neighboring hyperrectangles (boxes), mark all of them as initial/unsafe

remove impossible transitions/marks (interval arithmetic check on boundaries/boxes)

Result: finite abstraction
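A hedged one-dimensional sketch of this construction (box grid, edges only between neighbors that the interval check cannot exclude); deriv_enclosure is an assumed helper returning an interval enclosure of ẋ at a boundary point, e.g., obtained by interval arithmetic on the flow term:

```python
# Sketch of the interval-grid abstraction for a single mode in one dimension.
# Transitions between neighboring boxes survive only if the derivative enclosure
# on the shared boundary does not exclude crossing in that direction.

def grid_abstraction(lo, hi, n, deriv_enclosure):
    bounds = [lo + i * (hi - lo) / n for i in range(n + 1)]
    boxes = list(zip(bounds[:-1], bounds[1:]))
    edges = set()
    for i in range(n - 1):
        d_lo, d_hi = deriv_enclosure(bounds[i + 1])   # boundary between box i and box i+1
        if d_hi >= 0:
            edges.add((i, i + 1))                     # flow may cross to the right
        if d_lo <= 0:
            edges.add((i + 1, i))                     # flow may cross to the left
    return boxes, edges

# With the example dynamics x_dot in [-5, -1] from the slides, only left-moving
# transitions remain:
boxes, edges = grid_abstraction(0.0, 4.0, 4, lambda x: (-5.0, -1.0))
print(sorted(edges))                                  # [(1, 0), (2, 1), (3, 2)]
```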


[Figure slides: the grid abstraction for a one-mode example with ẋ ∈ [−5, −1] (resp. ẋ ∈ [−5, 1]); the interval check on box boundaries removes the impossible transitions and marks.]


Interval arithmetic

It is a method for calculating an interval covering the possible values of a real operator when its arguments range over intervals:

[a, A] + [b, B] = [a + b, A + B]

[a, A] · [b, B] = [min{ab, aB, Ab, AB}, max{ab, aB, Ab, AB}]

min([a, A], [b, B]) = [min{a, b}, min{A, B}]

sin([a, A]) = [min{sin x | x ∈ [a, A]}, max{sin x | x ∈ [a, A]}]

f([a, A], [b, B], …) = [min{f(~x) | ~x ∈ [a, A] × [b, B] × …}, max{f(~x) | ~x ∈ [a, A] × [b, B] × …}]

Theorem: For each term t with free variables ~v:

{t(~v ↦ ~x) | ~x ∈ [a, A] × [b, B] × …} ⊆ t(v₁ ↦ [a, A], v₂ ↦ [b, B], …)
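A small Python sketch mirroring these rules (+, ·, min) and the natural interval extension; outward rounding is ignored here, so this is an illustration rather than a verified interval library:

```python
# Naive interval arithmetic: each operation returns an interval enclosing
# all possible results when the arguments range over the given intervals.

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

def interval_min(a, b):
    return Interval(min(a.lo, b.lo), min(a.hi, b.hi))

# Natural interval extension of the term x*y + x: the result encloses the exact
# range {x*y + x | x in [1,2], y in [-1,3]} = [0, 8], as the theorem above states.
x, y = Interval(1, 2), Interval(-1, 3)
print(x * y + x)    # [-1, 8]
```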


Is the approximation tight?

1. In the limit: yes!

t(~v ↦ ~x) = t(v₁ ↦ [x₁, x₁], v₂ ↦ [x₂, x₂], …)

t(~v ↦ ~x) = lim_{ε→0} t(v₁ ↦ [x₁ − ε, x₁ + ε], v₂ ↦ [x₂ − ε, x₂ + ε], …), provided t is uniformly continuous.

2. In general: no! If a < A, then

(x − x)(x ↦ [a, A]) = [a, A] − [a, A] = [a − A, A − a] ≠ [0, 0]

Dependency problem of interval arithmetic:

Tight bounds only if each variable occurs at most once!
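A two-line numeric illustration of this dependency problem (plain interval subtraction of [1, 3] from itself):

```python
# x - x evaluated with interval rules on [1, 3]: the result is [-2, 2],
# although x - x is exactly 0 for every x in [1, 3].
a, A = 1.0, 3.0
print((a - A, A - a))    # (-2.0, 2.0)
```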


Interval Grid Method II

Check safety on the resulting finite abstraction; if safe: finished, otherwise: refine the grid; continue until success.

More modes: separate grid for each mode.

Jumps: also check using interval arithmetic.


Discussion

Advantages:

can deal with constants that are only known up to intervals

interval tests are cheap (compare, e.g., to explicit computation of continuous reach sets, or to full decision procedures)

Disadvantages:

may require a very fine grid to provide an affirmative answer (curse of dimensionality)

ignores the continuous behavior within the grid elements

Let's remove them!


Removing Disadvantages

Reflect more information in the abstraction without creating more boxes by splitting.

Observation: we do not need to include information on unreachable state space; remove such parts from the boxes.

Method: form constraints that hold on the reachable parts of the state space and remove non-solutions with a constraint solver.


Reach Set Pruning

A point in a box B can be reachable

from the initial set via a flow in B,

from a jump via a flow in B,

from a neighboring box via a flow in B.

[Figure: box B with incoming behaviors from the initial set, from a jump, and from a neighboring box.]

Formulate the corresponding constraints, and remove all points from the box that do not fulfill one of these constraints.


Constraints in Specification

We specify the system using constraints:

Flow(s, ~x, ~ẋ)   (e.g., s = off → ẋ = x·sin(x) + 1, …)

purely syntactic!

may even be implicit and algebraic!

Jump(s, ~x, s′, ~x′)   (e.g., (s = off ∧ x ≥ 10) → (s′ = on ∧ x′ = 0))

Init(s, ~x)


Reachability Constraints

Lemma (k-dimensional mean value theorem): For a box B and mode s, if a point (y₁, …, y_k) ∈ B is reachable from a point (x₁, …, x_k) ∈ B via a flow in B, then

∃t ∈ ℝ≥0  ⋀_{1 ≤ i ≤ k}  ∃a₁, …, a_k, ȧ₁, …, ȧ_k [ (a₁, …, a_k) ∈ B ∧ Flow(s, (a₁, …, a_k), (ȧ₁, …, ȧ_k)) ∧ y_i = x_i + ȧ_i · t ]

[Figure: the i-th component moving from x_i at time 0 to y_i at time t; the mean value is attained at some intermediate time t_a.]

Denote this constraint by flow_B(s, ~x, ~y).
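As a small hedged illustration (the numbers and the flow are made up, not from the slides): for a one-dimensional box B = [0, 1] and a mode s with flow constraint ẋ = 2, the constraint flow_B(s, x, y) specializes to

∃t ≥ 0 ∃a, ȧ [ a ∈ [0, 1] ∧ ȧ = 2 ∧ y = x + ȧ · t ],

which simplifies to ∃t ≥ 0 [ y = x + 2t ], i.e., y ≥ x: inside B this flow can only move to the right, so pruning with the constraint discards exactly the pairs with y < x.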


Reachability Constraints

Lemma: For a box B ⊆ ℝ^k and mode s, if ~y ∈ B is reachable from the initial set via a flow in B, then

∃~x ∈ B [ Init(s, ~x) ∧ flow_B(s, ~x, ~y) ]

Lemma: For a box B ⊆ ℝ^k, mode s and ~y ∈ B, if (s, ~y) is reachable via a jump from a box B′ and mode s′, followed by a flow in B, then

∃~x′ ∈ B′ ∃~x ∈ B [ Jump(s′, ~x′, s, ~x) ∧ flow_B(s, ~x, ~y) ]


Reachability Constraints

Lemma: For a box B ⊆ ℝ^k and mode s, if ~y ∈ B is reachable from a neighboring box over a face F of B and a flow in B, then

∃~x ∈ F [ incoming_F(s, ~x) ∧ flow_B(s, ~x, ~y) ],

where incoming_F(s, ~x) is of the form

∃ẋ₁, …, ẋ_k [ Flow(s, ~x, (ẋ₁, …, ẋ_k)) ∧ ẋ_j r 0 ],

where r ∈ {≤, ≥} and j ∈ {1, …, k} depend on the face F.

[Figure: box B with entry face F and a point ~y reached by a flow entering through F.]


Using Constraints

After substituting the definitions and getting rid of the quantifiers, interval constraint propagation algorithms can remove parts of boxes that do not fulfill such constraints:

correct handling of rounding errors,

almost negligible time,

result not necessarily tight (but tight for flow_B(s, ~x, ~y) in the linear case).

http://rsolver.sourceforge.net
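A hedged toy sketch of the underlying idea of removing non-solutions with interval tests (this is plain bisection plus naive interval evaluation, not the actual RSolver/HSolver propagation algorithm; the constraint x·x + x ≥ 2 and all names are made up for illustration):

```python
# Prune a one-dimensional box against the constraint x*x + x >= 2 by discarding
# sub-intervals on which a naive interval evaluation proves the constraint false.

def lhs_upper(lo, hi):
    sq_hi = max(lo * lo, lo * hi, hi * hi)   # interval upper bound of x*x on [lo, hi]
    return sq_hi + hi                        # plus the upper bound of x

def prune(lo, hi, eps=0.05):
    if lhs_upper(lo, hi) < 2:                # constraint provably false on the whole sub-box
        return []
    if hi - lo < eps:                        # small enough: keep (may contain solutions)
        return [(lo, hi)]
    mid = (lo + hi) / 2
    return prune(lo, mid, eps) + prune(mid, hi, eps)

kept = prune(-3.0, 3.0)
print(kept[0], kept[-1])   # kept sub-intervals cover the solution set x <= -2 or x >= 1
```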
