
10.4 From a reduction semantics with two layers of contexts to an abstract machine

10.4.1 A reduction semantics for Curien’s calculus of closures

There is a reduction semantics that embodies the reduction strategy of Curien’s normalizer.

It is a calculus equipped with a deterministic reduction strategy that leads to a β-normal form when one exists. Though the underlying calculus will not necessarily be confluent, we can still use the reduction strategy to reduce terms to their β-normal forms.

The term language, contraction rules, and reduction contexts are mutually dependent.

Together, they are chosen to ensure unique decomposition, i.e., that a non-normal-form term can be decomposed into a reduction context and a potential redex (either an actual redex or a stuck term) in exactly one way. The syntax of terms is chosen so that they are closed under the contraction rules. We present first the term language, then the contraction rules, and then the reduction contexts, but they do depend on each other.

Term language The configurations of the abstract machine from Section 10.3 should correspond to terms in context in the reduction semantics. The data-structure continuations of the machine are isomorphic to the reduction contexts, and thus the other components of the configurations must correspond to the terms of the reduction semantics.

The de Bruijn-indexed λ-term and substitution in the eval and lookup configurations represent closures, so the terms of the reduction semantics certainly include the closures, c.

The right-hand side of the transition

  eval(t t′, σ, C, M) = eval(t, σ, t′[σ]·C, M)

represents the application (in context) of closures, t[σ] t′[σ], so the terms include the application c c′ of a pair of closures. The applyC configurations can contain atomic forms (in context), a, so these are included in the set of terms.² The right-hand side of the transition

  applyC(•, (λt)[ρ^m], M) = eval(t, (1[id^0]·ρ^(m+1))^0, •, Λ·M)

represents an abstraction (in context) of a closure, λt[(1[id^0]·ρ^(m+1))^0], so the syntax of terms includes abstractions of closures, λc. These considerations suggest that the terms of the reduction semantics are given by an extended grammar of closures: c ::= t[σ] | λc | c c | a. The applyM configurations contain β-normal forms (in context), n, but as can easily be checked, the β-normal forms are already given by this grammar.

² An atomic form, a, is a β-normal form that is not an abstraction (page 17).

This grammar of terms (together with the contraction rules and contexts) creates an overlap that causes unique decomposition to fail. Consider the term in context C[(λt)[σ] c]. This could be a Beta-contraction in the context C, or it could be a contraction that moves the substitution σ inside the abstraction in the context C[[ ] c], i.e., an Abs-contraction. The same idea used in defining the grammar of normal forms (page 17) resolves this overlap. We use a stratified syntax of terms that prevents abstractions from occurring as operators in applications. The term language of the reduction semantics is as follows:

  Atom     d ::= t[σ] | d c | a
  ClosCM   c ::= d | λc
  SubstCM  σ ::= ρ^m
  EnvCM    ρ ::= id | c·σ

Substitutions and environments are defined as for the CM-machine, but with the above, more general notion of closures. The set of values (i.e., terms without redexes) is exactly that of the β-normal forms.
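
To make the stratification concrete, here is a minimal OCaml sketch of this term language. The type and constructor names are ours, and the grammars of β-normal forms and atomic forms from page 17 are assumed to be the usual ones (n ::= a | λn and a ::= i | a n):

  (* de Bruijn-indexed lambda-terms *)
  type term = Var of int | Lam of term | App of term * term

  (* beta-normal forms n and atomic forms a
     (assumed grammar from page 17: n ::= a | lambda n, a ::= i | a n) *)
  type nf = NfAtom of atom_form | NfLam of nf
  and atom_form = AtVar of int | AtApp of atom_form * nf

  (* the stratified term language of the reduction semantics *)
  type atom =                        (* d ::= t[sigma] | d c | a *)
    | TermClo of term * subst        (*   t[sigma]               *)
    | DApp of atom * clos            (*   d c                    *)
    | AtomForm of atom_form          (*   a                      *)
  and clos =                         (* c ::= d | lambda c       *)
    | Atom of atom
    | CLam of clos
  and subst =                        (* sigma ::= rho^m          *)
    | Lift of env * int
  and env =                          (* rho ::= id | c . sigma   *)
    | Id
    | Cons of clos * subst

The stratification is enforced by the types: the operator position of DApp has type atom, so an abstraction (CLam) cannot occur there.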

Notion of reduction If the configurations of the machine utilizing two stacks (Section 10.3) represent terms in context, then the contraction rules must be contained in the transitions of the machine. Specifically, when the left-hand and right-hand sides of a transition do not represent the same term when plugged into their contexts, then that transition implements one of the contraction rules. The other transitions, where the left-hand and right-hand sides represent the same term when plugged into their contexts, implement the compatibility rules of the reduction semantics.

The following six rules constitute the possible ways to perform reductions in the term language.

  Reindex:  i[id^m]            →  i+m
  Subst:    1[(t[ρ^m]·σ)^m′]   →  t[ρ^(m+m′)]
  Subst′:   (i+1)[(c·ρ^m)^m′]  →  i[ρ^(m+m′)]
  App:      (t t′)[σ]          →  t[σ] t′[σ]
  Beta:     (λt)[σ] c          →  t[(c·σ)^0]
  Abs:      (λt)[ρ^m]          →  λt[(1[id^0]·ρ^(m+1))^0]

We define Weak = Reindex ∪ Subst ∪ Subst′ ∪ App ∪ Beta, and Strong = Weak ∪ Abs.
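
Continuing the OCaml sketch, the six rules can be read as a partial contraction function over the datatypes above (the function and its shape are ours; it implements Strong, and the question of where a rule may be applied is left to the reduction strategy below):

  (* One contraction step at the root of a potential redex,
     if it is an actual redex. *)
  let contract (c : clos) : clos option =
    match c with
    (* Reindex: i[id^m] -> i+m *)
    | Atom (TermClo (Var i, Lift (Id, m))) ->
        Some (Atom (AtomForm (AtVar (i + m))))
    (* Subst: 1[(t[rho^m] . sigma)^m'] -> t[rho^(m+m')] *)
    | Atom (TermClo (Var 1,
             Lift (Cons (Atom (TermClo (t, Lift (rho, m))), _), m'))) ->
        Some (Atom (TermClo (t, Lift (rho, m + m'))))
    (* Subst': (i+1)[(c . rho^m)^m'] -> i[rho^(m+m')] *)
    | Atom (TermClo (Var i, Lift (Cons (_, Lift (rho, m)), m'))) when i > 1 ->
        Some (Atom (TermClo (Var (i - 1), Lift (rho, m + m'))))
    (* App: (t t')[sigma] -> t[sigma] t'[sigma] *)
    | Atom (TermClo (App (t, t'), sigma)) ->
        Some (Atom (DApp (TermClo (t, sigma), Atom (TermClo (t', sigma)))))
    (* Beta: (lambda t)[sigma] c -> t[(c . sigma)^0] *)
    | Atom (DApp (TermClo (Lam t, sigma), c')) ->
        Some (Atom (TermClo (t, Lift (Cons (c', sigma), 0))))
    (* Abs: (lambda t)[rho^m] -> lambda t[(1[id^0] . rho^(m+1))^0] *)
    | Atom (TermClo (Lam t, Lift (rho, m))) ->
        Some (CLam (Atom (TermClo (t,
                Lift (Cons (Atom (TermClo (Var 1, Lift (Id, 0))),
                            Lift (rho, m + 1)),
                      0)))))
    (* anything else is either a value or a stuck potential redex,
       e.g. 1[(c . sigma)^m] where c is not of the form t[sigma'] *)
    | _ -> None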

Without the rule Abs, all the m's would carry no information and they could be discarded.

The rules Subst, Subst′, App, and Beta would then constitute the contraction rules of Biernacka and Danvy's λρ̂-calculus (Section 5.2.1).³ This calculus, paired with standard reduction strategies, e.g., normal order and (left-to-right) applicative order, syntactically corresponds to well-known machines: the Krivine machine [13, 15] and the CEK machine [32], respectively. In Section 5.2 we showed how to derive the Krivine machine extended to cope with basic constants. In Section 6.4 we outlined how to derive the CEK machine extended to cope with state variables.

³ Rules Subst and Subst′ together constitute rule ι of the λρ̂-calculus.

In the next section, we will show that with the rules Reindex and Abs, this calculus (considering a specific strategy) syntactically corresponds to the abstract machine in Section 10.3.

Reduction contexts The reduction contexts are (when represented inside-out) isomorphic to the stacks of the abstract machine in Section 10.3:

  Cont      C ::= [ ] | C[[ ] c]
  MetaCont  M ::= [ ] | M[C[a [ ]]] | M[λ[ ]]

The contexts, C, are the reduction contexts of normal-order reduction to weak-head normal form. These contexts can only be plugged with an atom d ∈ Atom (i.e., not with an abstraction) according to the term syntax. The contraction rule Abs is the only rule that produces non-atom terms, so it cannot be applied in such a context. The meta-contexts, M, are exactly the contexts where the rule Abs can be applied: at the top of a term, or else in the operand position of an application of an atomic form that is in a normal-order weak-head reduction context in a meta-context, or else in the body of an abstraction in a meta-context.
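
In the OCaml sketch, the two layers of contexts become two datatypes, represented inside-out so that the head constructor is the innermost frame, mirroring the two stacks of the machine (the constructor names are ours):

  type cont =                                      (* C ::= [ ] | C[[ ] c] *)
    | Hole
    | AppCont of cont * clos                       (*   C[[ ] c]           *)

  type metacont =                        (* M ::= [ ] | M[C[a [ ]]] | M[lambda [ ]] *)
    | MetaHole
    | OperandCont of metacont * cont * atom_form   (*   M[C[a [ ]]]        *)
    | LamCont of metacont                          (*   M[lambda [ ]]      *)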

A one-step reduction function The one-step reduction relation is defined according to the reduction contexts and contraction rules. We let M#C denote M[C], where the hole of the meta-context M has been filled with the context C; the result is a term with a hole. Plugging this hole with the closure c is denoted by M#C[c].
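
Plugging can be sketched as one recursive function per layer of contexts (again our own sketch over the datatypes above; note that the hole of a C-context only accepts an atom, as observed earlier):

  (* C[d] : the hole of a C-context receives an atom d *)
  let rec plug_cont (cC : cont) (d : atom) : atom =
    match cC with
    | Hole -> d
    | AppCont (cC', c) -> plug_cont cC' (DApp (d, c))

  (* M[c] : the hole of a meta-context receives an arbitrary closure *)
  let rec plug_meta (mM : metacont) (c : clos) : clos =
    match mM with
    | MetaHole -> c
    | OperandCont (mM', cC, a) ->
        plug_meta mM' (Atom (plug_cont cC (DApp (AtomForm a, c))))
    | LamCont mM' -> plug_meta mM' (CLam c)

  (* M#C[d] for an atom d; when C = Hole, M#[c] is simply plug_meta mM c *)
  let plug (mM : metacont) (cC : cont) (d : atom) : clos =
    plug_meta mM (Atom (plug_cont cC d))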

We are now in a position to define one-step reduction as the following relation on non-value terms:

  M#C[c] ↦ M#C[c′],  if (c, c′) ∈ Weak
  M#[c]  ↦ M#[c′],   if (c, c′) ∈ Abs

We will see that the relation ↦ is a function. The following property is needed to prove that it is well-defined:

Proposition 5

A closure c is either a β-normal form or it can be uniquely decomposed into a context M#C and a potential redex pa such that c = M#C[pa], or a context M#[ ] and a potential redex pc such that c = M#[pc], where potential redexes are defined as follows:

  PotRedexa   pa ::= i[σ] | (t t′)[σ] | (λt)[σ] c
  PotRedexc   pc ::= (λt)[σ]

  PotRedex = PotRedexa ∪ PotRedexc

The unique decomposition of a non-value closure into a context and a potential redex is ensured by Proposition 5. As a function, ↦ maps any closure to at most one other closure and is hence well-defined. It is partial because it is not defined on β-normal forms, since they contain no potential redexes, and also because the potential redex 1[(c·σ)^m] is only an actual redex when the closure c is of the form t[σ′].
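
To see what such a decomposition looks like operationally, here is an OCaml sketch of a decomposition function in the spirit of Proposition 5, over the datatypes above (the function and the decomposition type are ours; its traversal is deliberately written in the style of the machine of Section 10.3):

  type decomposition =
    | Value of nf                        (* the closure is a beta-normal form      *)
    | RedexA of metacont * cont * atom   (* M#C[pa]                                *)
    | RedexC of metacont * clos          (* M#[pc] with pc = (lambda t)[sigma]     *)

  let rec decompose_clos (c : clos) (cC : cont) (mM : metacont) : decomposition =
    match c with
    | Atom d -> decompose_atom d cC mM
    | CLam c' ->
        (match cC with
         | Hole -> decompose_clos c' Hole (LamCont mM)      (* go under the lambda *)
         | AppCont _ -> invalid_arg "abstraction in operator position")
  and decompose_atom (d : atom) (cC : cont) (mM : metacont) : decomposition =
    match d with
    | TermClo (Var _, _) -> RedexA (mM, cC, d)              (* i[sigma]            *)
    | TermClo (App _, _) -> RedexA (mM, cC, d)              (* (t t')[sigma]       *)
    | TermClo (Lam _, _) ->
        (match cC with
         | Hole -> RedexC (mM, Atom d)                      (* (lambda t)[sigma] at an Abs position *)
         | AppCont (cC', c') -> RedexA (mM, cC', DApp (d, c')))  (* a Beta potential redex *)
    | DApp (d', c') -> decompose_atom d' (AppCont (cC, c')) mM   (* descend into the operator *)
    | AtomForm a -> decompose_cont a cC mM                  (* weak-head normal: pop C *)
  and decompose_cont (a : atom_form) (cC : cont) (mM : metacont) : decomposition =
    match cC with
    | Hole -> decompose_meta (NfAtom a) mM
    | AppCont (cC', c') ->
        decompose_clos c' Hole (OperandCont (mM, cC', a))   (* normalize the operand next *)
  and decompose_meta (n : nf) (mM : metacont) : decomposition =
    match mM with
    | MetaHole -> Value n
    | LamCont mM' -> decompose_meta (NfLam n) mM'
    | OperandCont (mM', cC', a) -> decompose_cont (AtApp (a, n)) cC' mM'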

Normalization of λ-terms The normalization of a de Bruijn-indexed λ-term t is defined by iterating the one-step reduction function until (possibly) a normal form is obtained, starting with the closure composed of t paired with the unadjusted identity environment id^0. In other words, the normalization of t is defined if the reflexive transitive closure of the one-step reduction function ↦ relates this initial closure to a β-normal form:

  normalize t = n   iff   t[id^0] ↦* n.

The function normalize is well-defined because ↦ is a partial function that is not defined on β-normal forms.
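
A reduction-based normalization function can then be sketched by composing the pieces above: decompose, contract the potential redex, plug the contractum back into its context, and iterate. This naive iteration re-decomposes the whole term at every step; its relation to the abstract machine of Section 10.3 is the subject of the next section.

  exception Stuck

  let rec normalize_clos (c : clos) : nf =
    match decompose_clos c Hole MetaHole with
    | Value n -> n
    | RedexA (mM, cC, pa) ->                    (* a Weak contraction in M#C *)
        (match contract (Atom pa) with
         | Some (Atom d) -> normalize_clos (plug mM cC d)
         | _ -> raise Stuck)                    (* e.g. 1[(c.sigma)^m] with c not of the form t[sigma'] *)
    | RedexC (mM, pc) ->                        (* an Abs contraction in M#[ ] *)
        (match contract pc with
         | Some c' -> normalize_clos (plug_meta mM c')
         | None -> raise Stuck)

  (* normalize t = n  iff  t[id^0] |->* n *)
  let normalize (t : term) : nf =
    normalize_clos (Atom (TermClo (t, Lift (Id, 0))))

  (* Example: normalize (Lam (Var 1)) evaluates to NfLam (NfAtom (AtVar 1)),
     i.e., the beta-normal form of the identity. *)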
