Primitive Inductive Theorems Bridge
Implicit Induction Methods and Inductive Theorems
in Higher-Order Rewriting

KUSAKARI Keiichirou, SAKAI Masahiko, SAKABE Toshiki
Graduate School of Information Science, Nagoya University
{kusakari,sakai,sakabe}@is.nagoya-u.ac.jp

Abstract. Automated reasoning about inductive theorems is considered important in program verification. To verify inductive theorems automatically, several implicit induction methods, such as the inductionless induction and rewriting induction methods, have been proposed. In studying inductive theorems on higher-order rewriting, we found that the class of theorems provable by the known implicit induction methods does not coincide with the class of inductive theorems, and the gap between them is a barrier to developing mechanized methods for disproving inductive theorems. This paper fills this gap by introducing the notion of primitive inductive theorems and clarifying the relation between inductive theorems and primitive inductive theorems. Based on this relation, we obtain mechanized methods for proving and disproving inductive theorems.

Keywords. Algebraic Specification, Higher-Order Rewriting, Simply-Typed Term Rewriting System, Primitive Inductive Theorem, Inductive Theorem, Implicit Induction Method.

1 Introduction

Term rewriting systems (TRSs) provide an operational model of functional programming languages. TRSs also give a theoretical foundation for algebraic specification languages [8,9]. In algebraic specification, many interesting properties of programs can be formally treated as inductive theorems, characterized by the initial algebra semantics [10,12,13,14,24]. The concept of inductive theorems is extremely important in practical applications. In fact, most data structures used in functional programming are inductive structures, such as lists and trees. As a result, most properties that a program must guarantee are formalized as inductive theorems.

In order to verify programs, automated reasoning for proving and disproving inductive theorems is very important. Hence many inductive reasoning methods for TRSs, called implicit induction methods (the inductionless induction and rewriting induction methods), have been studied [6,7,11,16,21,22,23].

Higher-order functions, which can treat functions as values, provide high-level abstraction and greater expressive power. Unfortunately, TRSs cannot express higher-order functions directly. For this reason, the first author proposed simply-typed term rewriting systems (STRSs) [17]. In STRSs, the Map function, one of the most typical higher-order functions, is represented as follows:

  Map(f, Nil) → Nil
  Map(f, x::xs) → f(x) :: Map(f, xs)

In this paper, we study inductive theorems on STRSs for automated reasoning, and their characterization. We found that the class of theorems provable by the known implicit induction methods does not coincide with that of inductive theorems, and the gap between them is a barrier to developing mechanized methods for disproving inductive theorems. We fill this gap by introducing the notion of primitive inductive theorems and clarifying the relation between inductive theorems and primitive inductive theorems. Based on this relation, we obtain mechanized methods for proving and disproving inductive theorems.

The main contributions of the paper are:

(1) We give the notion of primitive inductive theorems (Definition 4.1), and characterize inductive theorems by primitive ones (Theorem 4.6).

(2) We show that the existing implicit induction methods (the inductionless induction [23] and rewriting induction [16] methods) can be naturally extended to STRSs for proving/disproving primitive inductive theorems (Theorems 5.2, 5.7 and 6.2).

(3) We show that the implicit induction methods are also applicable for proving inductive theorems (Theorems 5.2 and 5.7), because every primitive inductive theorem is an inductive theorem (Theorem 4.2).

(4) For disproving inductive theorems, implicit induction methods do not work well, because some inductive theorems are not primitive inductive theorems. To overcome this difficulty, we present a sufficient condition for inductive theorems to coincide with primitive inductive theorems (Theorem 4.17). Under this sufficient condition, the implicit induction method is applicable for disproving inductive theorems (Theorem 6.3).

(5) To justify our definition of inductive theorems, we design a higher-order equational logic and show that the notion of inductive theorems is characterized by the initial extensional semantics (Theorem 7.10).

The remainder of this paper is organized as follows. The next section gives the preliminaries needed later on. In Section 3, we give the definition of inductive theorems. In Section 4, we introduce the notion of primitive inductive theorems and characterize inductive theorems by primitive inductive theorems; we also present a sufficient condition for inductive theorems to coincide with primitive inductive theorems. In Section 5, we study automated reasoning for proving inductive theorems. In Section 6, we study automated reasoning for disproving inductive theorems. In Section 7, we study higher-order equational logic and show that the notion of inductive theorems is characterized by the initial extensional semantics.

2 Preliminaries

We assume that the reader is familiar with the notions of term rewriting systems [4].

2.1 Abstract Reduction System

An abstract reduction system (ARS) is a pair ⟨A, →⟩ where A is a set and → is a binary relation on A. The transitive-reflexive closure of a binary relation → is denoted by →*, the transitive closure by →⁺, and the transitive-reflexive-symmetric closure by ↔*.

Let R = ⟨A, →⟩ be an ARS. An element a ∈ A is said to be a normal form if there exists no b such that a → b. We denote the set of all normal forms by NF(R). An ARS R is said to be weakly normalizing, denoted by WN(R), if ∀a ∈ A. ∃b ∈ NF(R). a →* b; to be strongly normalizing (terminating), denoted by SN(R), if there exists no infinite sequence a_0 → a_1 → ···; and to be confluent, denoted by CR(R), if a_1 *← a →* a_2 implies ∃b. a_1 →* b *← a_2 for all a, a_1, a_2 ∈ A. For ARSs R_1 and R_2, we say that R_2 retrogresses to R_1, denoted by RET(R_2, R_1), if a_1 →_2 a_2 implies a_1 →_1 b_1 →*_2 b_2 *_2← a_2 for some b_1 and b_2. Note that SN(R) implies WN(R), and CR(R) implies that the normal form of an element is unique if it exists.

2.2 Untyped Term Rewriting System

Untyped term rewriting systems (UTRSs), introduced in [17]¹, which can express higher-order functions directly, are term rewriting systems without arity-type constraints. In this subsection, we introduce some notions of UTRSs needed later on.

Let Σ be a signature, that is, a finite set of function symbols, which are denoted by F, G, .... Let V be an enumerable set of variables with Σ ∩ V = ∅. Variables are denoted by x, y, z, f, .... An atom is a function symbol or a variable, denoted by a, a′, .... The set T(Σ, V) of (untyped) terms constructed from Σ and V is the smallest set such that a(t_1, ..., t_n) ∈ T(Σ, V) whenever a ∈ Σ ∪ V and t_1, ..., t_n ∈ T(Σ, V). If n = 0, we write a instead of a(). The identity of terms is denoted by ≡. We often write s(t_1, ..., t_m) for a(s_1, ..., s_n, t_1, ..., t_m), where s ≡ a(s_1, ..., s_n). We define the root symbol of a term a(t_1, ..., t_n) by root(a(t_1, ..., t_n)) = a. Var(t) is the set of variables in t. A term is said to be closed if no variable occurs in the term. The set of closed terms is denoted by T(Σ). The size |t| of t is the number of function symbols and variables in t.

A substitution θ is a mapping from variables to terms. We may write θ as {x_1 := θ(x_1), ..., x_n := θ(x_n)} if {x_1, ..., x_n} = {x ∈ V | θ(x) ≢ x}. Each substitution θ is naturally extended over terms, denoted by θ̂, as θ̂(F(t_1, ..., t_n)) = F(θ̂(t_1), ..., θ̂(t_n)) if F ∈ Σ, and θ̂(z(t_1, ..., t_n)) = a′(t′_1, ..., t′_m, θ̂(t_1), ..., θ̂(t_n)) if z ∈ V with θ(z) = a′(t′_1, ..., t′_m). For simplicity, we identify θ and θ̂. We write tθ instead of θ(t).

A context is a term which has a special symbol □, called a hole, at a leaf position. We can also define contexts inductively as follows: □ is a context, and a(..., t_{i−1}, C_i[ ], t_{i+1}, ...) is a context if a is an atom, C_i[ ] is a context and t_j ∈ T(Σ, V) for every j (≠ i). A suffix context is a term which has the symbol □ at the root position. We can also define suffix contexts inductively as follows: □ is a suffix context, and (S′[ ])(u) is a suffix context if S′[ ] is a suffix context and u ∈ T(Σ, V). For a context C[ ] (a suffix context S[ ]), C[t] (S[t]) denotes the result of placing t in the hole of C[ ] (S[ ]). A term t′ is said to be a subterm of a term t if there exists a context C[ ] such that t ≡ C[t′]. We denote by Sub(t) the set of all subterms of t, and define s ▷_sub t by t ∈ Sub(s) and s ≢ t.

¹ In [17], UTRSs were called term rewriting systems with high-order variables (TRS-HVs). Since there exists no "higher-order variable" in untyped systems, we use UTRS in this paper.

Example 2.1 For the term t ≡ f(A, x) and the substitution θ = {f := G(0), x := B}, we have tθ ≡ G(0, A, B). We can see that F(0, □) and F(□, xs) are contexts, □(0) and □(0, Nil) are suffix contexts, and □ is both a context and a suffix context. We can also see that C[t] ≡ a(t, t′) for C[ ] ≡ a(□, t′), and S[a(t)] ≡ a(t, t′) for S[ ] ≡ □(t′).

An equivalence relation ≃ on terms is called a congruence relation if it is closed under contexts and suffix contexts, that is, s ≃ t implies C[s] ≃ C[t] and S[s] ≃ S[t] for any context C[ ] and suffix context S[ ].

A rewrite rule is a pair (l, r) of terms such that root(l) ∉ V and Var(l) ⊇ Var(r). We write l → r for (l, r). An untyped term rewriting system (UTRS) is a set of rewrite rules. The reduction relation →_R of a UTRS R is defined by s →_R t iff s ≡ C[S[lθ]] and t ≡ C[S[rθ]] for some l → r ∈ R, C[ ], S[ ] and θ. We often omit the subscript R whenever no confusion arises.

Example 2.2 The Map function is represented by the following UTRS R:

  Map(f, Nil) → Nil
  Map(f, x::xs) → f(x) :: Map(f, xs)

Note that we use the standard representation of list structures by the symbols :: and Nil. We often write lists in infix form; for example, ::(x, ::(y, Nil)) is written as x::y::Nil. Then we have the following reduction sequence:

  Map(F, F(0)::0::Nil)
    →_R F(F(0)) :: Map(F, 0::Nil)
    →_R F(F(0)) :: F(0) :: Map(F, Nil)
    →_R F(F(0)) :: F(0) :: Nil
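The reduction behaviour of the Map rules above can be sketched as a small interpreter. This is an illustrative encoding, not from the paper: terms are nested tuples, and the higher-order variable f is modelled directly as a Python function on encoded terms (here the uninterpreted unary symbol F).

```python
# Sketch of the two Map rewrite rules of Example 2.2 applied to normal form:
#   Map(f, Nil)   -> Nil
#   Map(f, x::xs) -> f(x) :: Map(f, xs)
# Lists are encoded as ('Nil',) and ('::', head, tail).

def map_rw(f, lst):
    """Normalize Map(f, lst) by repeatedly applying the two rules."""
    if lst == ('Nil',):
        return ('Nil',)
    _, x, xs = lst                      # lst matches x::xs
    return ('::', f(x), map_rw(f, xs))

# The reduction sequence of Example 2.2, with F an uninterpreted unary symbol:
F = lambda u: ('F', u)
t = ('::', F(('0',)), ('::', ('0',), ('Nil',)))   # F(0) :: 0 :: Nil
print(map_rw(F, t))                               # F(F(0)) :: F(0) :: Nil
```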

2.3 Simply-Typed Term Rewriting System

A simply-typed version of a UTRS is called a simply-typed term rewriting system (STRS) [17]. A set of basic types (sorts) is denoted by B. The set T of simple types is generated from B by the constructor →, that is, T ::= B | (T → T). A type attachment τ is a function from Σ ∪ V to T. A term a(t_1, ..., t_n) has type β if τ(a) = (α_1 → (··· → (α_n → β) ···)) and each t_i has type α_i. A term t ∈ T(Σ, V) is said to be simply-typed if it has a simple type. We denote the set of all simply-typed terms by T_τ(Σ, V). We define the order ord(α) of a type α as follows: ord(α) = 1 if α ∈ B, and ord(α) = max(1 + ord(α_1), ord(α_2)) if α = α_1 → α_2. A variable is said to be a higher-order variable if it has a simple type α with ord(α) > 1. We use V_h to stand for the set of higher-order variables. A term t is said to be ground² if t is closed and of basic type. We denote the set of all closed simply-typed terms by T_τ(Σ), the set of all closed terms having a simple type α by T_α(Σ), the set of all basic-typed terms by T_B(Σ, V), and the set of all ground terms by T_B(Σ). We also denote the set of all basic-typed subterms of t by Sub_B(t).

To keep type consistency, we assume that τ(x) = τ(θ(x)) for all x ∈ V and substitutions θ. We also prepare a hole □_α for each simple type α, and for each context C[ ] (suffix context S[ ]) with a hole □_α we assume that τ(t) = α whenever we write C[t] (S[t]).

A simply-typed rewrite rule is a rewrite rule l → r such that l and r have the same type. A simply-typed term rewriting system (STRS) is a set of simply-typed rewrite rules.

Example 2.3 The UTRS representation of the Map function in Example 2.2 is regarded as an STRS with the type attachment τ(Nil) = L, τ(::) = N → L → L and τ(Map) = (N → N) → L → L.
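The order function on simple types can be sketched directly from its definition; the encoding below is illustrative and not part of the paper. Basic types are strings, and an arrow type α_1 → α_2 is a tuple ('->', α_1, α_2).

```python
# Sketch of ord on simple types (Section 2.3):
#   ord(a) = 1 for a basic type a,
#   ord(a1 -> a2) = max(1 + ord(a1), ord(a2)).

def order(ty):
    if isinstance(ty, str):              # basic type such as 'N' or 'L'
        return 1
    _, a1, a2 = ty                       # arrow type a1 -> a2
    return max(1 + order(a1), order(a2))

# The types of Example 2.3, written right-associatively:
CONS = ('->', 'N', ('->', 'L', 'L'))               # tau(::)  = N -> L -> L
MAP = ('->', ('->', 'N', 'N'), ('->', 'L', 'L'))   # tau(Map) = (N -> N) -> L -> L
print(order(CONS), order(MAP))
```

With these types, ord(τ(::)) = 2 while ord(τ(Map)) = 3, because Map takes a second-order argument; accordingly, a variable f with τ(f) = N → N has order 2 > 1 and is a higher-order variable.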

For an STRS R, GNF(R), GWN(R), GSN(R) and GCR(R) are defined as NF(R′), WN(R′), SN(R′) and CR(R′) in the ARS R′ = ⟨T_B(Σ), →_R⟩, respectively. For STRSs R_1 and R_2, GRET(R_2, R_1) is defined as RET of the ARSs ⟨T_B(Σ), →_{R_1}⟩ and ⟨T_B(Σ), →_{R_2}⟩.

2.4 First-Order Term Rewriting System and Equational Logic

The first-order term rewriting system (TRS) is the usual term rewriting system. In this subsection, we introduce some notions needed later on. All results stated in this subsection can be found in [4].

We suppose that each symbol F ∈ Σ is associated with a natural number n, denoted by ar(F) = n. We also suppose that ar(x) = 0 for any variable x. A term a(t_1, ..., t_n) ∈ T(Σ, V) is said to be a first-order term if ar(a) = n and each t_i is also a first-order term. We denote the set of first-order terms by T_ar(Σ, V), and the set of closed first-order terms by T_ar(Σ). In the first-order setting, closedness coincides with groundness. Since □ is the only suffix context, an equivalence relation on first-order terms is a congruence relation whenever it is closed under contexts.

² It is a natural extension of groundness in the first-order framework, because all first-order closed terms are basic. It is also useful for our purpose, because our inductive reasoning methods on STRSs are based on properties of closed basic-typed terms (cf. Theorems 5.2, 5.7 and 6.3).

A first-order equation is a pair (s, t) of first-order terms, written as s = t. We define Var(s = t) by Var(s) ∪ Var(t). Let E be a set of first-order equations. We denote by ↔*_E the congruence relation generated from E. A first-order equation s = t is said to be a theorem of E, denoted by E ⊢ s = t, if it is deducible by the inference rules in Fig. 1 except for the (Functionality) rule. A first-order equation s = t is said to be an inductive theorem of E, denoted by E ⊢_ind s = t, if it is deducible by the inference rules in Fig. 1. Note that the (Functionality) rule is a kind of meta-rule, and proof trees may be infinitely branching.

  (Assumption)
     -------  (if s = t ∈ E)
      s = t

  (Reflexivity)
     -------
      t = t

  (Symmetry)
      t = s
     -------
      s = t

  (Transitivity)
      s = u    u = t
     ----------------
           s = t

  (Substitutivity)
       s = t
     ---------
      sθ = tθ

  (Monotonicity)
      s_1 = t_1  ···  s_n = t_n
     -------------------------------  (if ar(F) = n)
      F(s_1,...,s_n) = F(t_1,...,t_n)

  (Functionality)
      s{z := u} = t{z := u}  for all u ∈ T_ar(Σ)
     --------------------------------------------
                      s = t

Figure 1: Inference rules for the first-order equational logic

A Σ-algebra A is a pair ⟨A, Σ_A⟩, where A is a carrier and Σ_A is a mapping which maps each F ∈ Σ of arity n to an n-ary function F_A on A. The term algebra T_ar(Σ, V) is the Σ-algebra ⟨T_ar(Σ, V), Σ_{T_ar(Σ,V)}⟩, where each F_{T_ar(Σ,V)} is defined as F_{T_ar(Σ,V)}(t_1, ..., t_n) := F(t_1, ..., t_n). In the case V = ∅, the term algebra is said to be the ground term algebra, denoted by T_ar(Σ).

Let ⟨A, Σ_A⟩ be a Σ-algebra. An equivalence relation ∼ on A is said to be a congruence relation if it is monotonic, that is, a_i ∼ a′_i (i = 1, ..., n) implies F_A(a_1, ..., a_n) ∼ F_A(a′_1, ..., a′_n) for all F ∈ Σ with ar(F) = n. Suppose that ⟨A, Σ_A⟩ is a Σ-algebra and ∼ is a congruence relation on A. We define [a]_∼ = {a′ ∈ A | a′ ∼ a} and A/∼ = {[a]_∼ | a ∈ A}. We also define the quotient algebra A/∼ = ⟨A/∼, Σ_{A/∼}⟩ of A modulo ∼, where each F_{A/∼} is defined as F_{A/∼}([a_1]_∼, ..., [a_n]_∼) = [F_A(a_1, ..., a_n)]_∼. It is known that the quotient algebra A/∼ is well-defined as a Σ-algebra. Clearly, ↔*_E is a congruence relation on T_ar(Σ, V), and thus T_ar(Σ, V)/↔*_E is a Σ-algebra, called the quotient term algebra. T_ar(Σ)/↔*_E is also a Σ-algebra, called the quotient ground term algebra.

Let A = ⟨A, Σ_A⟩ be a Σ-algebra. An assignment into A is a mapping σ from V to A. The interpretation of a first-order term t under an assignment σ into A, denoted by [[t]]_σ, is defined as [[x]]_σ = σ(x) for any x ∈ V, and [[F(t_1, ..., t_n)]]_σ = F_A([[t_1]]_σ, ..., [[t_n]]_σ) for any F ∈ Σ.

Let s = t be a first-order equation, A a Σ-algebra and σ an assignment into A. We write A, σ ⊨ s = t if [[s]]_σ = [[t]]_σ. We also write A ⊨ s = t if [[s]]_σ = [[t]]_σ for every assignment σ into A.

Let A be a Σ-algebra and E a set of first-order equations. We say that A is a model of E, denoted by A ⊨ E, if A ⊨ s = t for all s = t ∈ E. We say that s = t is a semantic consequence of E, denoted by E ⊨ s = t, if A ⊨ s = t for every model A of E. We denote by Alg_Σ(E) the class of all models of E.

Proposition 2.4 ([4]) Let E be a set of first-order equations and s = t a first-order equation. Then the following properties are equivalent:

(1) E ⊨ s = t
(2) s ↔*_E t
(3) E ⊢ s = t

Let A = ⟨A, Σ_A⟩ and B = ⟨B, Σ_B⟩ be Σ-algebras. A mapping φ from A to B is said to be a homomorphism if φ(F_A(a_1, ..., a_n)) = F_B(φ(a_1), ..., φ(a_n)) for any F ∈ Σ and a_i ∈ A. We denote by Hom(A, B) the class of all homomorphisms from A to B. Let K be a class of Σ-algebras. An algebra I ∈ K is said to be an initial algebra in K if for any A ∈ K there exists a unique homomorphism from I to A, that is, |Hom(I, A)| = 1.

Proposition 2.5 ([4]) Let E be a set of first-order equations and s = t a first-order equation. The quotient ground term algebra T_ar(Σ)/↔*_E is an initial algebra in the class Alg_Σ(E). Moreover, the following properties are equivalent:

(1) T_ar(Σ)/↔*_E ⊨ s = t
(2) sθ_g ↔*_E tθ_g for every ground substitution θ_g
(3) E ⊢_ind s = t

We often use property (2) to define inductive theorems.

3 Inductive Theorems

Basic ingredients of the theory of algebraic specification are equational logic and its semantics based on Σ-algebras [10,12,13,14,24]. Algebraic specifications for higher-order languages were studied in [19,20]. In this section, based on Meinke's formulation [19], we give a syntax of a higher-order equational logic on STRSs, and define inductive theorems in STRSs.

Definition 3.1 A simply-typed equation, written as s = t, is a pair of simply-typed terms with the same type (τ(s) = τ(t)). We also denote Var(s) ∪ Var(t) by Var(s = t).

Note that STRSs are often regarded as sets of simply-typed equations.

Next we define theorems and inductive theorems in the higher-order equational logic by using the inference rules displayed in Fig. 2.

Definition 3.2 Let E be a set of simply-typed equations. A simply-typed equation s = t is said to be a theorem of E, denoted by E ⊢ s = t, if it is deducible by the inference rules in Fig. 2 except for the (Functionality) and (Extensionality) rules. A simply-typed equation s = t is said to be an inductive theorem of E, denoted by E ⊢_ind s = t, if it is deducible by the inference rules in Fig. 2.

The differences between the inference rules of the first-order and higher-order settings are as follows: the (Monotonicity) and (Functionality) rules are modified, and the (Extensionality) rule is added.

Example 3.3 Consider the following STRS R with τ(Rev) = τ(Frev) = L → L, τ(Ap) = τ(F) = L → L → L, τ(::) = N → L → L and τ(Nil) = L (this system can also be considered as a TRS):

  Ap(Nil, xs) → xs
  Ap(x::xs, ys) → x :: Ap(xs, ys)
  Rev(Nil) → Nil
  Rev(x::xs) → Ap(Rev(xs), x::Nil)
  F(Nil, xs) → xs
  F(x::xs, ys) → F(xs, x::ys)
  Frev(xs) → F(xs, Nil)

Note that the transformation from Rev to Frev is a typical example of program optimization. Both Rev and Frev in R represent the same list-reverse function, and we have

  R ⊢_ind Rev(xs) = Frev(xs),
  R ⊢_ind Rev = Frev.

However, the notion of theorems does not equate Rev and Frev, that is,

  R ⊬ Rev(xs) = Frev(xs),
  R ⊬ Rev = Frev.
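The two reverse programs of Example 3.3 can be sketched as ordinary recursive functions over Python lists; this encoding is illustrative, not part of the paper. Rev rebuilds the result with an append (the role of Ap), while Frev threads an accumulator (the role of F), which is the classic optimized version.

```python
# Sketch of Rev and Frev from Example 3.3 on Python lists.

def rev(xs):
    """Naive reverse: Rev(x::xs) -> Ap(Rev(xs), x::Nil)."""
    if not xs:
        return []                   # Rev(Nil) -> Nil
    x, *rest = xs
    return rev(rest) + [x]

def frev(xs, acc=None):
    """Accumulator reverse: Frev(xs) -> F(xs, Nil), F(x::xs, ys) -> F(xs, x::ys)."""
    acc = [] if acc is None else acc
    if not xs:
        return acc                  # F(Nil, ys) -> ys
    x, *rest = xs
    return frev(rest, [x] + acc)

print(rev([1, 2, 3]), frev([1, 2, 3]))
```

Testing rev and frev on concrete lists illustrates why Rev(xs) = Frev(xs) is an inductive theorem: the two functions agree on every ground instance, even though no finite equational derivation without induction equates them.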

  (Assumption)
     -------  (if s = t ∈ E)
      s = t

  (Reflexivity)
     -------
      t = t

  (Symmetry)
      t = s
     -------
      s = t

  (Transitivity)
      s = u    u = t
     ----------------
           s = t

  (Substitutivity)
       s = t
     ---------
      sθ = tθ

  (Monotonicity)
      s_0 = t_0  ···  s_n = t_n
     ---------------------------------  (if s_0(s_1,...,s_n) ∈ T_τ(Σ, V))
      s_0(s_1,...,s_n) = t_0(t_1,...,t_n)

  (Functionality)
      s{z := u} = t{z := u}  for all u ∈ T_{τ(z)}(Σ)
     ------------------------------------------------
                      s = t

  (Extensionality)
      s(z) = t(z)
     -------------  (if z ∉ Var(s = t))
         s = t

Figure 2: Inference rules for the higher-order equational logic

4 Primitive Inductive Theorems

In this section, we give the notion of primitive inductive theorems, and study the relation between primitive inductive theorems and inductive theorems.

4.1 Characterizing Inductive Theorems by Primitive Inductive Theorems

The notion of primitive inductive theorems is a natural extension of property (2) in Proposition 2.5.

Definition 4.1 Let R be a set of simply-typed equations. A simply-typed equation s = t is said to be a primitive inductive theorem of R, denoted by R ⊢_pind s = t, if S_g[sθ_c] ↔*_R S_g[tθ_c] for every closed substitution θ_c (i.e., ∀x ∈ Var(s = t). θ_c(x) ∈ T_τ(Σ)) and every ground suffix context S_g[ ] ∈ T_B(Σ ∪ {□}). We also write R ⊢_pind E if R ⊢_pind s = t for all s = t ∈ E.

If an equation s = t has a basic type, then R ⊢_pind s = t ⟺ sθ_c ↔*_R tθ_c for every closed substitution θ_c, since □ is the only suffix context having a basic-typed hole.

Theorem 4.2 Let R be a set of simply-typed equations and s = t a simply-typed equation. Then

  R ⊢_pind s = t ⟹ R ⊢_ind s = t.

Proof. Suppose that S_g[sθ_c] ↔*_R S_g[tθ_c] for every closed substitution θ_c and ground suffix context S_g[ ]. Let X = {u = v | R ⊢_ind u = v}. From the definition of →_R, we have →_R ⊆ X by the rules (Assumption), (Substitutivity) and (Monotonicity). Thus ↔*_R ⊆ X follows from the rules (Reflexivity), (Symmetry) and (Transitivity). Hence S_g[sθ_c] = S_g[tθ_c] ∈ X. Let S_g[ ] ≡ □(u_1, ..., u_m) and S[ ] ≡ □(z_1, ..., z_m), where z_1, ..., z_m are fresh variables. We define θ′_c by θ′_c(z_i) = u_i (i = 1, ..., m) and θ′_c(z) = θ_c(z) for the other variables. Then S_g[sθ_c] ≡ S[s]θ′_c and S_g[tθ_c] ≡ S[t]θ′_c. From the (Functionality) rule, we have S[s] = S[t] ∈ X. From the (Extensionality) rule, we have s = t ∈ X. □

Since the notion of primitive inductive theorems is a natural extension of property (2) in Proposition 2.5, one might think that a simply-typed version of Proposition 2.5 would hold. However, this is not true: the converse of Theorem 4.2 does not hold.

Example 4.3 Consider again the STRS R shown in Example 3.3. Suppose that Apply ∈ Σ with τ(Apply) = (L → L) → L → L. Then we have

  R ⊢_ind Apply(Rev, xs) = Apply(Frev, xs),
  R ⊬_pind Apply(Rev, xs) = Apply(Frev, xs).

From this example, some readers might guess that inductive theorems coincide with the monotonic closure of primitive inductive theorems. However, this is also not true.

Example 4.4 We consider the following STRS R:

  I(x) → x
  I′(x) → x
  Apply(I, x) → I(x)

where τ(I) = τ(I′) = N → N and τ(Apply) = (N → N) → N → N. Suppose that X is the monotonic closure of the primitive inductive theorems of R, that is, X is the smallest set such that R ⊢_pind s = t implies s = t ∈ X, and s_i = t_i ∈ X (i = 0, ..., n) implies s_0(s_1, ..., s_n) = t_0(t_1, ..., t_n) ∈ X. Then Apply(I′, x) = Apply(I, x) ∈ X follows from R ⊢_pind I′ = I, and Apply(I, x) = x ∈ X holds. However, X is not transitive, because Apply(I′, x) = x ∉ X. Since R ⊢_ind Apply(I′, x) = x, the monotonic closure X of primitive inductive theorems cannot characterize inductive theorems.

Definition 4.5 Let R be a set of simply-typed equations and s = t a simply-typed equation. We define R ⊢^1_pind s = t by R ⊢_pind s = t, and R ⊢^{n+1}_pind s = t by R′ ⊢_pind s = t, where R′ = {u = v | R ⊢^n_pind u = v}.

Theorem 4.6 Let R be a set of simply-typed equations and s = t a simply-typed equation. Then R ⊢_ind s = t iff R ⊢^n_pind s = t for some n.

Proof. We prove that R ⊢^n_pind s = t implies R ⊢_ind s = t by induction on n. The case n = 1 is Theorem 4.2. Suppose that n > 1. Let R′ = {u = v | R ⊢^{n−1}_pind u = v}. Then R′ ⊢_ind s = t follows from Theorem 4.2. From the induction hypothesis, R ⊢_ind R′. Thus we have R ⊢_ind s = t.

We prove that R ⊢_ind s = t implies R ⊢^n_pind s = t for some n by induction on the depth of the proof tree of R ⊢_ind s = t. Suppose that s = t is deduced by an inference rule from a subproof P′, and E (possibly empty) is the set of consequences of P′. In the case that the inference rule is either (Assumption) or (Reflexivity), R ⊢_pind s = t. In the other cases, R ⊢^n_pind E for some n follows from the induction hypothesis, hence R ⊢^{n+1}_pind s = t. □

4.2 Sufficient Condition

We now explore a sufficient condition for the converse of Theorem 4.2. This condition plays an important role in disproving inductive theorems (see Section 6).

Before we present the condition, we prepare several notions and lemmas. In the following, we assume that Σ is partitioned into D and C, that is, Σ = D ∪ C and D ∩ C = ∅. Elements of D are called defined symbols, and those of C are called constructors.

Definition 4.7 A term t ∈ T(Σ) is said to be a value if root(t′) ∈ C for every t′ ∈ Sub_B(t). We denote the set of all values by Val(C, D), and the set of all basic-typed values by Val_B(C, D). An STRS R is said to be quasi-reducible, denoted by QR(R), if no basic-typed term F(t_1, ..., t_n) with t_1, ..., t_n ∈ Val(C, D) and F ∈ D is a normal form.

Definition 4.8 A term t ∈ T(C, V_h) is said to be a pseudo-value if every variable occurrence is at a leaf position. We denote the set of all pseudo-values by PVal(C, V_h). An STRS R is said to be strongly quasi-reducible, denoted by SQR(R), if every basic-typed term F(t_1, ..., t_n) with t_1, ..., t_n ∈ PVal(C, V_h) and F ∈ D is reducible.

Example 4.9 Let C = {0, P}, D = {Add} and f, g ∈ V such that τ(0) = N, τ(P) = (N → N) → N → N, τ(Add) = τ(g) = N → N → N and τ(f) = N → N. Then 0, Add and P(Add(0), 0) are values; 0, g and P(f, 0) are pseudo-values; and Add(0, 0) is neither a value nor a pseudo-value.
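The value predicate of Definition 4.7 can be sketched on the signature of Example 4.9. The encoding below is illustrative and not from the paper: closed terms are triples (atom, ty, args), where ty is 'basic' for a basic-typed term and 'fun' otherwise, standing in for a real type attachment.

```python
# Sketch of the value check of Definition 4.7 for C = {0, P}, D = {Add}:
# t is a value iff root(t') is in C for every basic-typed subterm t' of t.

C = {'0', 'P'}
D = {'Add'}

def is_value(t):
    atom, ty, args = t
    if ty == 'basic' and atom not in C:
        return False                       # a basic-typed subterm rooted in D
    return all(is_value(u) for u in args)

# Example 4.9: 0 and P(Add(0), 0) are values, Add(0, 0) is not.
zero = ('0', 'basic', [])
print(is_value(('P', 'basic', [('Add', 'fun', [zero]), zero])))
print(is_value(('Add', 'basic', [zero, zero])))
```

Note that the subterm Add(0) inside P(Add(0), 0) does not violate the condition: it has the higher-order type N → N, so it is not a basic-typed subterm and its root is never tested.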

Lemma 4.10 If SQR(R) then GNF(R) ⊆ Val_B(C, D).

Proof. From the definitions of GNF and Val_B, we have GNF(R) ⊆ T_B(Σ) and Val_B(C, D) ⊆ T_B(Σ). Hence it suffices to show that t ∉ Val_B(C, D) implies t ∉ GNF(R) for all t ∈ T_B(Σ).

Suppose that t ∈ T_B(Σ) and t ∉ Val_B(C, D). Let u be a term of minimal size in Sub_B(t) such that root(u) = F ∈ D. Let u_1, ..., u_n be terms such that u ≡ C[u_1, ..., u_n], each root(u_i) is a defined symbol, and C[ ] is a constructor context except for the root position. By the minimality of u, no u_i is of basic type. Hence there exist v_1, ..., v_m ∈ PVal(C, V_h) such that F(v_1, ..., v_m) ≡ C[z_1, ..., z_n], where the z_j are distinct fresh variables. From SQR(R), C[z_1, ..., z_n] is reducible. Hence C[u_1, ..., u_n] is reducible, which implies the reducibility of t. Thus t ∉ GNF(R). □

Definition 4.11 An STRS R is said to be sufficiently complete, denoted by SC(R), if ∀t ∈ T_B(Σ). ∃v ∈ Val_B(C, D). t →*_R v.

Lemma 4.12 If GWN(R) and SQR(R) then SC(R) holds.

Proof. Let t ∈ T_B(Σ). By GWN(R), t →*_R u for some u ∈ GNF(R). By Lemma 4.10, u ∈ Val_B(C, D). □

Definition 4.13 If ord(τ(c)) ≤ 2 for every c ∈ C, then we say that the set C of constructors has a first-order structure.

Note that if C has a first-order structure, then for any simply-typed term c(t_1, ..., t_n) with c ∈ C, each t_i is of basic type.

Lemma 4.14 If C has a first-order structure then Val_B(C, D) = T_B(C).

Proof. Val_B(C, D) ⊇ T_B(C) is trivial. Assume that Val_B(C, D) \ T_B(C) ≠ ∅. Let t be a term of minimal size in Val_B(C, D) \ T_B(C), and let t ≡ a(t_1, ..., t_n). We have a ∈ C because t ∈ Val_B(C, D). Since C has a first-order structure, each t_i is of basic type, and hence t_i ∈ Val_B(C, D) for all i. From the minimality of t, each t_i is in T_B(C). Hence a(t_1, ..., t_n) ∈ T_B(C), a contradiction. □

Lemma 4.15 Let R be an STRS. If the following conditions hold:

(a) GSN(R),
(b) GCR(R),
(c) SQR(R),
(d) C has a first-order structure, and
(e) l ∉ T_τ(C, V) for any l → r ∈ R,

then R ⊢_pind tθ_c = tθ′_c for any t ∈ T_B(Σ, V) and any closed substitutions θ_c and θ′_c such that R ⊢_pind zθ_c = zθ′_c for all z ∈ Var(t).

Proof. We prove the claim by induction on tθ_c with respect to (→_R ∪ ▷_sub)⁺. Note that the well-foundedness of (→_R ∪ ▷_sub)⁺ on T_B(Σ) is guaranteed by condition (a). Let t ≡ a(t_1, ..., t_n). There are three cases.

• Suppose that a ∈ C. Since each t_i is of basic type by condition (d), we have the induction hypothesis R ⊢_pind t_iθ_c = t_iθ′_c for each i. Hence t_iθ_c ↔*_R t_iθ′_c, which implies tθ_c ↔*_R tθ′_c. Thus R ⊢_pind tθ_c = tθ′_c.

• Suppose that a ∈ D. Consider the t_i of basic type. From the induction hypothesis, R ⊢_pind t_iθ_c = t_iθ′_c, that is, t_iθ_c ↔*_R t_iθ′_c. Since SC(R) holds by conditions (a) and (c) and Lemma 4.12, there exist ground constructor terms u_i and v_i such that t_iθ_c →*_R u_i and t_iθ′_c →*_R v_i, by condition (d) and Lemma 4.14. Here, we have u_i, v_i ∈ GNF(R) by condition (e). Hence u_i ≡ v_i ∈ T_B(C) follows from condition (b).

For each t_i of higher-order type, let u_i ≡ v_i ≡ f_i, where f_i is a fresh variable. Let σ′ = {f_i := t_i | t_i is of higher-order type}. Now we have

  tθ_c ≡ a(t_1θ_c, ..., t_nθ_c) →*_R a(u_1σ′θ_c, ..., u_nσ′θ_c) ≡ a(u_1, ..., u_n)σ′θ_c.

By condition (c), there exist l → r ∈ R and σ such that lσ ≡ a(u_1, ..., u_n). Hence we have

  a(u_1, ..., u_n)σ′θ_c ≡ lσσ′θ_c →_R rσσ′θ_c.

Similarly, we have tθ′_c →⁺_R rσσ′θ′_c. From the induction hypothesis, R ⊢_pind rσσ′θ_c = rσσ′θ′_c, which implies R ⊢_pind tθ_c = tθ′_c.

• Suppose that a ∈ V. Let t′ ≡ (aθ_c)(t_1, ..., t_n). We consider the equation t′θ_c = t′θ′_c. Since tθ_c ≡ t′θ_c and root(t′) ∈ C ∪ D, R ⊢_pind t′θ_c = t′θ′_c follows from the previous cases. Hence tθ_c ≡ t′θ_c ↔*_R t′θ′_c ≡ (aθ_c)(t_1θ′_c, ..., t_nθ′_c). Since R ⊢_pind aθ_c = aθ′_c, by taking the ground suffix context □(t_1θ′_c, ..., t_nθ′_c), we have (aθ_c)(t_1θ′_c, ..., t_nθ′_c) ↔*_R (aθ′_c)(t_1θ′_c, ..., t_nθ′_c) ≡ tθ′_c. Hence R ⊢_pind tθ_c = tθ′_c. □

Lemma 4.16 Let R be an STRS. Suppose that conditions (a)–(e) in Lemma 4.15 hold. If R ⊢_pind s_i = t_i (i = 0, 1, ..., n) then R ⊢_pind s_0(s_1, ..., s_n) = t_0(t_1, ..., t_n).

Proof. We show that S_g[s_0(s_1, ..., s_n)θ_c] ↔*_R S_g[t_0(t_1, ..., t_n)θ_c] for every closed substitution θ_c and ground suffix context S_g[ ]. Let s′_i ≡ s_iθ_c and t′_i ≡ t_iθ_c (i = 0, 1, ..., n), and let S_g[ ] ≡ □(s′_{n+1}, ..., s′_m) ≡ □(t′_{n+1}, ..., t′_m). From the assumption, R ⊢_pind s′_i = t′_i for i = 0, 1, ..., m. Let θ and θ′ be substitutions such that θ(z_i) = s′_i and θ′(z_i) = t′_i for each i, where the z_i are fresh variables. Clearly, θ and θ′ are closed, and R ⊢_pind z_iθ = z_iθ′ for all i = 0, ..., m. From Lemma 4.15, R ⊢_pind z_0(z_1, ..., z_m)θ = z_0(z_1, ..., z_m)θ′, that is, R ⊢_pind S_g[s_0(s_1, ..., s_n)θ_c] = S_g[t_0(t_1, ..., t_n)θ_c]. Hence S_g[s_0(s_1, ..., s_n)θ_c] ↔*_R S_g[t_0(t_1, ..., t_n)θ_c]. □

Theorem 4.17 Let R be an STRS. Suppose that conditions (a)–(e) in Lemma 4.15 hold. Then

  R ⊢_pind s = t ⟺ R ⊢_ind s = t.

Proof. The (⇒)-part is Theorem 4.2. Suppose R ⊢_ind s = t. We prove R ⊢_pind s = t by induction on the depth of the proof tree of R ⊢_ind s = t. In the case that s = t is deduced by the (Monotonicity)-rule, R ⊢_pind s = t follows from the induction hypothesis and Lemma 4.16. The other cases are routine. □

This theorem plays an important role in the implicit induction methods for disproving inductive theorems (Theorem 6.3).

5 Proving Inductive Theorems

Implicit induction methods are intended to prove inductive theorems. In the first-order setting, two kinds of implicit induction methods are known: inductionless induction and rewriting induction [6,7,11,16,21,22,23]. In this section, we formulate some methods for proving primitive inductive theorems using the results in [16,23]. These methods can also be successfully applied to prove inductive theorems via Theorem 4.2.

5.1 Inductionless Induction

In this subsection, we show how to apply the inductionless induction method of [23] to STRSs.

Proposition 5.1 ([23]) Let R_1 = ⟨A, →_1⟩ and R_2 = ⟨A, →_2⟩ be ARSs. Suppose that all of the following conditions hold:

(i) →_1 ⊆ ↔*_2
(ii) WN(R_1)
(iii) CR(R_2)
(iv) NF(R_1) ⊆ NF(R_2)

Then we have ↔*_1 = ↔*_2.

Based on this proposition, we show an abstract theorem for proving inductive theorems.

Theorem 5.2 Let R and R' be STRSs, and E be a set of equations. Suppose that all of the following conditions hold:

(i) ↔*_{R∪E} ⊆ ↔*_{R'} in T_B(Σ)
(ii) GWN(R)
(iii) GCR(R')
(iv) GNF(R) ⊆ GNF(R')

Then R ⊢_pind E and hence R ⊢_ind E.

Proof. From the condition (i), we have →_R ⊆ ↔*_{R'} in T_B(Σ). By applying Proposition 5.1 with T_B(Σ) as A, we have ↔*_R = ↔*_{R'} in T_B(Σ). Hence ↔*_E ⊆ ↔*_R in T_B(Σ), which implies R ⊢_pind E. We have R ⊢_ind E by Theorem 4.2. □

We prepare a lemma for checking GNF(R) ⊆ GNF(R').

Lemma 5.3 Let R and R' be STRSs. Suppose that for any l → r ∈ R' there exists l' ∈ Sub_B(l) such that root(l') ∈ D. Then Val_B(C, D) ⊆ GNF(R'). Moreover, if SQR(R) additionally holds then GNF(R) ⊆ GNF(R') holds.

Proof. Assume that t ∈ Val_B(C, D) and t ∉ GNF(R'). Then t ≡ C[S[lθ]] for some C[ ], S[ ], θ and l → r ∈ R'. By assumption, there exists l' ∈ Sub_B(l) such that root(l') ∈ D. Then l'θ ∈ Sub_B(t) and root(l'θ) ∈ D. This contradicts t ∈ Val_B(C, D).

Suppose that SQR(R) additionally holds. From Lemma 4.10, GNF(R) ⊆ Val_B(C, D). Hence GNF(R) ⊆ GNF(R'). □

Example 5.4 We consider the following STRS R:

  Map(f, Nil) → Nil
  Map(f, x::xs) → f(x)::Map(f, xs)
  Ap(Nil, xs) → xs
  Ap(x::xs, ys) → x::Ap(xs, ys)

Suppose that C = {0, S, Nil, ::}, D = {Ap, Map} and τ(0) = N, τ(S) = N→N, τ(Nil) = L, τ(::) = N→L→L, τ(Ap) = L→L→L and τ(Map) = (N→N)→L→L. Based on Theorem 5.2, we prove that the following equation is an inductive theorem in R:

  Map(f, Ap(xs, ys)) = Ap(Map(f, xs), Map(f, ys))

Let R' be the union of R and the above equation. We can prove SN(R) by the recursive path order in [18], and CR(R') by the critical pair criterion. Hence GWN(R) and GCR(R') hold. Since SQR(R) holds, and for any l → r ∈ R' there exists l' ∈ Sub_B(l) such that root(l') ∈ D, the inclusion GNF(R) ⊆ GNF(R') follows from Lemma 5.3. Therefore conditions (i)–(iv) of Theorem 5.2 hold, and thus the equation above is an inductive theorem in R.

Example 5.5 We consider the following STRS R:

  I(x) → x
  ∧(T, y) → y
  ∧(F, y) → F
  Ands(Nil) → T
  Ands(T::xs) → Ands(xs)
  Ands(F::xs) → F
  ∀(p, Nil) → T
  ∀(p, x::xs) → ∧(p(x), ∀(p, xs))
  Map(f, Nil) → Nil
  Map(f, x::xs) → f(x)::Map(f, xs)

Suppose that τ(T) = τ(F) = B, τ(Nil) = L, τ(::) = B→L→L, τ(I) = B→B, τ(∧) = B→B→B, τ(Ands) = L→B, τ(∀) = (B→B)→L→B, and τ(Map) = (B→B)→L→L. In a way similar to Example 5.4, we can prove:

  R ⊢_ind ∀(I, xs) = Ands(Map(I, xs))
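As with Example 5.4, the claimed inductive theorem can be sanity-checked on ground instances, here by exhaustively testing all boolean lists up to length 3. The Python names (i, and_, ands, forall, map_) are our own encoding, not part of the formal development.

```python
from itertools import product

NIL = None
def cons(x, xs): return (x, xs)

def i(x):              # I(x) -> x
    return x

def and_(b, y):        # ^(T, y) -> y ; ^(F, y) -> F
    return y if b else False

def ands(xs):          # Ands(Nil) -> T ; Ands(T::xs) -> Ands(xs) ; Ands(F::xs) -> F
    if xs is NIL:
        return True
    x, rest = xs
    return ands(rest) if x else False

def forall(p, xs):     # A(p, Nil) -> T ; A(p, x::xs) -> ^(p(x), A(p, xs))
    if xs is NIL:
        return True
    x, rest = xs
    return and_(p(x), forall(p, rest))

def map_(f, xs):
    if xs is NIL:
        return NIL
    x, rest = xs
    return cons(f(x), map_(f, rest))

# Check the equation on every boolean list of length <= 3.
ok = True
for n in range(4):
    for bits in product([True, False], repeat=n):
        xs = NIL
        for b in reversed(bits):
            xs = cons(b, xs)
        ok = ok and (forall(i, xs) == ands(map_(i, xs)))
print(ok)   # expect: True
```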

5.2 Rewriting Induction

In this subsection, we apply the rewriting induction method, proposed in [22] and formalized in [16], to STRSs.

Proposition 5.6 ([16]) Let R_1 = ⟨A, →_1⟩ and R_2 = ⟨A, →_2⟩ be ARSs. Suppose that all of the following conditions hold:

(i) →_1 ⊆ →+_2
(ii) SN(R_2)
(iii) RET(R_2, R_1)

Then we have ↔*_1 = ↔*_2.

Theorem 5.7 Let R and R' be STRSs, and E be a set of equations. Suppose that all of the following conditions hold:

(i) →_R ⊆ →+_{R'} and ↔*_E ⊆ ↔*_{R'} hold in T_B(Σ)
(ii) GSN(R')
(iii) GRET(R', R)

Then R ⊢_pind E and R ⊢_ind E.

Proof. By applying Proposition 5.6 with T_B(Σ) as A, we have ↔*_R = ↔*_{R'} in T_B(Σ). Hence ↔*_E ⊆ ↔*_R in T_B(Σ), which implies R ⊢_pind E. Hence we have R ⊢_ind E by Theorem 4.2. □

By using rewriting induction, we can also prove the inductive theorems in Examples 5.4 and 5.5.


6 Disproving Inductive Theorems

For program verification, not only proving inductive theorems but also disproving them is important. In this section, we present an automated reasoning method for disproving inductive theorems, based on the methods proposed in [11,21] and formalized in [16].

Proposition 6.1 ([16]) Let R_1 = ⟨A, →_1⟩ and R_2 = ⟨A, →_2⟩ be ARSs. Suppose that all of the following conditions hold:

(i) →_1 ⊆ →+_2
(ii) SN(R_2)
(iii) CR(R_1)
(iv) NF(R_1) ⊈ NF(R_2)

Then we have ↔*_1 ≠ ↔*_2.

Based on this abstract result, we design implicit induction methods for disproving primitive inductive theorems.

Theorem 6.2 Let R and R' be STRSs, and E be a set of equations. Suppose that all of the following conditions hold:

(i) →_R ⊆ →+_{R'} and ↔*_{R∪E} = ↔*_{R'} hold in T_B(Σ)
(ii) GSN(R')
(iii) GCR(R)
(iv) GNF(R) ⊈ GNF(R')

Then we have R ⊬_pind E.

Proof. From Proposition 6.1, we have ↔*_R ≠ ↔*_{R'} in T_B(Σ). Assume that R ⊢_pind E. Then it is easily seen that ↔*_E ⊆ ↔*_R in T_B(Σ). Hence ↔*_R = ↔*_{R∪E} = ↔*_{R'} in T_B(Σ), which is a contradiction. □

We have already presented a sufficient condition for primitive inductive theorems to be inductive theorems (Theorem 4.17). It means that R ⊬_pind E guarantees R ⊬_ind E, provided that R satisfies conditions (a)–(e) of Lemma 4.15. From this fact, we can use the implicit induction method to disprove inductive theorems.

Theorem 6.3 Let R and R' be STRSs, and E be a set of equations. Suppose that in addition to the properties (i), (ii), (iii) and (iv) in Theorem 6.2, all of the following properties hold:

(v) SQR(R),
(vi) C has a first-order structure, and
(vii) l ∉ T_τ(C, V) for any l → r ∈ R.

Then we have R ⊬_ind E.

Proof. From Theorems 6.2 and 4.17. Note that GSN(R') implies GSN(R). □

Example 6.4 Let R be the following STRS:

  Add(x, 0) → 0
  Add(x, S(y)) → S(Add(x, y))

where D = {Add} and C = {0, S}. We prove

  R ⊬_ind Add(0) = S

based on Theorem 6.3. Let E = {Add(0) = S} and R' = R ∪ {Add(0, z) → S(z), S(0) → 0}. Since ∀t ∈ T_B(Σ). Add(0, t) ↔_E S(t) and S(0) ↔_E Add(0, 0) →_R 0, we have ↔*_{R∪E} ⊇ ↔*_{R'} in T_B(Σ). From the construction of R', we have that →_R ⊆ →+_{R'} and ↔*_{R∪E} ⊆ ↔*_{R'} hold in T_B(Σ). Hence the condition (i) holds. The conditions (v), (vi) and (vii) can be easily shown. We can prove SN(R') by the recursive path order in [18], and CR(R) by the critical pair criterion. Hence GSN(R') and GCR(R) hold. The condition (iv) holds because S(0) ∈ GNF(R) and S(0) ∉ GNF(R'). Therefore we have R ⊬_ind Add(0) = S.
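The disproof hinges on the fact that S(0) is a ground normal form of R but not of R' = R ∪ {Add(0, z) → S(z), S(0) → 0}. A minimal one-step rewriter for exactly these rules (our own tuple encoding of terms, not the paper's formal machinery) can check this:

```python
# Terms as tuples: ('0',), ('S', t), ('Add', t1, t2).
ZERO = ('0',)
def S(t): return ('S', t)
def Add(t1, t2): return ('Add', t1, t2)

def step_R(t):
    """One rewrite step with R, at the root or in a subterm; None if t is R-normal."""
    if t[0] == 'Add':
        x, y = t[1], t[2]
        if y == ZERO:
            return ZERO                    # Add(x, 0) -> 0
        if y[0] == 'S':
            return S(Add(x, y[1]))         # Add(x, S(y)) -> S(Add(x, y))
    for idx, sub in enumerate(t[1:], 1):   # try to rewrite a subterm
        r = step_R(sub)
        if r is not None:
            return t[:idx] + (r,) + t[idx + 1:]
    return None

def step_Rp(t):
    """One rewrite step with R' = R + {Add(0, z) -> S(z), S(0) -> 0}."""
    if t == S(ZERO):
        return ZERO                        # S(0) -> 0
    if t[0] == 'Add' and t[1] == ZERO:
        return S(t[2])                     # Add(0, z) -> S(z)
    r = step_R(t)
    if r is not None:
        return r
    for idx, sub in enumerate(t[1:], 1):
        rp = step_Rp(sub)
        if rp is not None:
            return t[:idx] + (rp,) + t[idx + 1:]
    return None

print(step_R(S(ZERO)))    # None: S(0) is in GNF(R)
print(step_Rp(S(ZERO)))   # ('0',): S(0) is not in GNF(R')
```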

Example 6.5 Let R be the following STRS:

  I(x) → x
  ∧(T, y) → y
  ∧(F, y) → F
  Ands(Nil) → T
  Ands(T::xs) → Ands(xs)
  Ands(F::xs) → F
  ∀(p, Nil) → F
  ∀(p, x::xs) → ∧(p(x), ∀(p, xs))
  Map(f, Nil) → Nil
  Map(f, x::xs) → f(x)::Map(f, xs)

This example differs slightly from Example 5.5: the rule ∀(p, Nil) → F is problematic. Here, we prove

  R ⊬_ind ∀(I, xs) = Ands(Map(I, xs))

based on Theorem 6.3. Let E = {∀(I, xs) = Ands(Map(I, xs))} and R' = R ∪ {∀(I, xs) → Ands(Map(I, xs)), F → T}. Since F ←_R ∀(I, Nil) ↔_E Ands(Map(I, Nil)) →_R Ands(Nil) →_R T, we have ↔*_{R∪E} = ↔*_{R'} in T_B(Σ). Hence the condition (i) holds. The conditions (v), (vi) and (vii) trivially hold. We can prove SN(R') by the recursive path order in [18], and CR(R) by the critical pair criterion. Hence GSN(R') and GCR(R) hold. The condition (iv) holds because F ∈ GNF(R) and F ∉ GNF(R'). Therefore we have R ⊬_ind ∀(I, xs) = Ands(Map(I, xs)).
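The disproved equation fails already at the ground instance xs = Nil: ∀(I, Nil) rewrites to F while Ands(Map(I, Nil)) rewrites to T. A direct evaluation of the two sides under the problematic rules, in our own Python encoding (the names forall_bad, ands, map_ are ours), confirms this:

```python
NIL = None

def i(x):                   # I(x) -> x
    return x

def forall_bad(p, xs):      # the problematic definition: A(p, Nil) -> F
    if xs is NIL:
        return False
    x, rest = xs
    return forall_bad(p, rest) if p(x) else False  # ^(p(x), A(p, xs))

def ands(xs):
    if xs is NIL:
        return True         # Ands(Nil) -> T
    x, rest = xs
    return ands(rest) if x else False

def map_(f, xs):
    if xs is NIL:
        return NIL
    x, rest = xs
    return (f(x), map_(f, rest))

# xs = Nil is a ground counterexample: the two sides normalize differently.
print(forall_bad(i, NIL))   # False  (i.e. F)
print(ands(map_(i, NIL)))   # True   (i.e. T)
```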

7 Characterizing Inductive Theorems

Algebraic specification is based on equational logic, whose semantics is given by Σ-algebras [10,12,13,14,24]. Higher-order theories have also been considered in [19,20].

In Section 3, we gave the syntax of a higher-order equational logic, and the definition of inductive theorems based on Meinke's formulation [19]. To justify our definition, we should give a semantic counterpart and show that our inductive theorems can be characterized by the initial extensional semantics, because our framework (STRSs) is different from Meinke's.

7.1 Syntax

From this subsection through Subsection 7.3, we study higher-order equational logic on untyped systems (UTRSs). For any set E of untyped equations and any untyped equation s = t, we define E ⊢ s = t and E ⊢_ind s = t analogously to the simply-typed versions, omitting the type constraints.

Here we prepare a lemma needed later on.

Lemma 7.1 Let E be a set of equations. Then E ⊢ s = t ⟺ s ↔*_E t.

Proof. (⇒) By induction on the depth of proof trees. (⇐) It suffices to show that E ⊢ s = t whenever s →_E t. Let s ≡ C[S[e_1θ]] and t ≡ C[S[e_2θ]]. The equation e_1 = e_2 is deducible by the (Assumption)-rule. e_1θ = e_2θ is deduced by the (Substitutivity)-rule. S[e_1θ] = S[e_2θ] is deduced by the (Monotonicity)-rule. Finally, we obtain E ⊢ C[S[e_1θ]] = C[S[e_2θ]] by the (Monotonicity)-rule. □

7.2 Σ_@-algebra

We present the semantic counterpart in terms of curried terms, which are one of the most standard ways of handling higher-order functions in first-order systems. For example, the Map-function is represented in first-order term rewriting as follows:

  @(@(Map, f), Nil) → Nil
  @(@(Map, f), @(@(Cons, x), xs)) → @(@(Cons, @(f, x)), @(@(Map, f), xs))

Definition 7.2 We define Σ_@ = Σ ∪ {@}, and denote by T_@(Σ, V) the set of first-order terms generated by Σ_@ and V under ar(@) = 2 and ar(a) = 0 for any symbol a (≠ @). We also denote the term algebra T_ar(Σ_@, V) by T_@(Σ, V). The ground term algebra T_@(Σ, ∅) is denoted by T_@(Σ).

Definition 7.3 For any untyped term t ∈ T(Σ, V), we inductively define the first-order term t^@ ∈ T_@(Σ, V) as follows:

• a^@ = a for any a ∈ Σ ∪ V
• a(t_1, …, t_n)^@ = @(a(t_1, …, t_{n−1})^@, t_n^@) if n ≥ 1

We naturally extend this notion to substitutions by θ^@(x) = (θ(x))^@, and to sets of pairs (like equations or rules) by E^@ = {(s^@, t^@) | (s, t) ∈ E}.

We note that T_@(Σ, V) = {t^@ | t ∈ T(Σ, V)}, and any context on T_@(Σ, V) has the form C^@[S^@[ ]] for some context C[ ] and suffix context S[ ] on T(Σ, V).
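The translation (·)^@ of Definition 7.3 is straightforward to implement. The sketch below (our own encoding: terms as strings or tuples, with '@' as the application symbol) curries an application a(t_1, …, t_n) into left-nested binary @-applications:

```python
# A term is a string (a symbol or variable) or a tuple (head, t1, ..., tn).
def curry(t):
    """Compute t^@: a^@ = a; a(t1,...,tn)^@ = @(a(t1,...,t_{n-1})^@, tn^@)."""
    if isinstance(t, str):           # a in Sigma or V
        return t
    head, args = t[0], t[1:]
    if not args:                     # nullary application: just the symbol
        return head
    prefix = (head,) + args[:-1]     # a(t1, ..., t_{n-1})
    return ('@', curry(prefix), curry(args[-1]))

# Map(f, x::xs) becomes @(@(Map, f), @(@(::, x), xs)).
t = ('Map', 'f', ('::', 'x', 'xs'))
print(curry(t))
# expect: ('@', ('@', 'Map', 'f'), ('@', ('@', '::', 'x'), 'xs'))
```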

Lemma 7.4 C[S[tθ]]^@ ≡ C^@[S^@[t^@θ^@]].

Proof. First, we prove (tθ)^@ ≡ t^@θ^@ by induction on |t|. Let t ≡ a(t_1, …, t_n). The case n = 0 is trivial. Suppose that n > 0 and u ≡ a(t_1, …, t_{n−1}). Then (tθ)^@ ≡ (u(t_n)θ)^@ ≡ ((uθ)(t_nθ))^@ ≡ @((uθ)^@, (t_nθ)^@) ≡ @(u^@θ^@, t_n^@θ^@) ≡ @(u^@, t_n^@)θ^@ ≡ t^@θ^@.

We can prove S[tθ]^@ ≡ S^@[(tθ)^@] by induction on S[ ], and C[S[tθ]]^@ ≡ C^@[S[tθ]^@] by induction on C[ ]. Hence we obtain C[S[tθ]]^@ ≡ C^@[S^@[t^@θ^@]]. □

Lemma 7.5 Let E be a set of equations. Then s →_E t ⟺ s^@ →_{E^@} t^@.

Proof. (⇒) The claim follows from Lemma 7.4. (⇐) Let s^@ →_{E^@} t^@. Then s^@ ≡ C^@[S^@[e_1^@θ^@]] and t^@ ≡ C^@[S^@[e_2^@θ^@]] for some C^@[ ], S^@[ ], θ^@ and e_1^@ = e_2^@ ∈ E^@, because any context on T_@(Σ, V) has the form C^@[S^@[ ]]. From Lemma 7.4, s ≡ C[S[e_1θ]] and t ≡ C[S[e_2θ]]. Hence s →_E t. □

Definition 7.6 Let s = t be an equation and E be a set of equations. For any Σ_@-algebra A, we write A ⊨_@ s = t for A ⊨ s^@ = t^@. We also write E ⊨_@ s = t if A ⊨_@ s = t for any model A of E^@.

Thanks to Lemmata 7.1 and 7.5 and Proposition 2.4, we obtain the following theorem:

Theorem 7.7 Let E be a set of equations and s = t be an equation. Then E ⊢ s = t ⟺ E ⊨_@ s = t.


7.3 Initial Extensional Model

In first-order algebraic specification, inductive theorems are characterized by the initial algebra semantics, that is, characterized in T_ar(Σ)/↔*_E, which is an initial algebra in Alg_Σ(E).

The initial algebra semantics cannot characterize inductive theorems in higher-order settings, because the extensionality is built into the syntax. In this subsection, we show that there exists an initial extensional model, which characterizes inductive theorems. For any t^@ ∈ T_@(Σ), we denote its interpretation by [[t^@]], because [[t^@]]_σ is independent of the assignment σ.

Definition 7.8 A Σ_@-algebra A = ⟨A, Σ_A⟩ is said to be extensional if, for all a, a' ∈ A, ∀t^@ ∈ T_@(Σ). @_A(a, [[t^@]]) = @_A(a', [[t^@]]) implies a = a'. Let s = t be an equation and E be a set of equations. We denote by Alg^≡_{Σ_@}(E^@) the class of all extensional models of E^@. We write E ⊨^≡_@ s = t if A ⊨_@ s = t for all A ∈ Alg^≡_{Σ_@}(E^@).

Theorem 7.9 We define s^@ ≃_E t^@ as E ⊢_ind s = t. The quotient ground term algebra T_@(Σ)/≃_E is an initial algebra in Alg^≡_{Σ_@}(E^@).

Proof. It follows easily from the definition of inductive theorems that ≃_E is a congruence relation. Hence the quotient algebra T_@(Σ)/≃_E is well-defined. Let A ∈ Alg^≡_{Σ_@}(E^@). Since T_@(Σ) = T_ar(Σ_@), T_@(Σ)/↔*_{E^@} is an initial algebra in Alg_{Σ_@}(E^@) by Proposition 2.5. Since Alg^≡_{Σ_@}(E^@) ⊆ Alg_{Σ_@}(E^@), we have the unique homomorphism ψ from T_@(Σ)/↔*_{E^@} to A.

From Lemmata 7.1 and 7.5, E ⊢ s = t ⟺ s^@ ↔*_{E^@} t^@. Since E ⊢ s = t implies E ⊢_ind s = t, ↔*_{E^@} ⊆ ≃_E holds. Hence there exists a projection p ∈ Hom(T_@(Σ)/↔*_{E^@}, T_@(Σ)/≃_E), which maps [t]_{↔*_{E^@}} to [t]_{≃_E}.

Here we show that |Hom(T_@(Σ)/≃_E, A)| ≤ 1. Assume that ψ_1, ψ_2 ∈ Hom(T_@(Σ)/≃_E, A) with ψ_1 ≠ ψ_2. Since p is surjective, ψ_1 ∘ p ≠ ψ_2 ∘ p. Since ψ_1 ∘ p, ψ_2 ∘ p ∈ Hom(T_@(Σ)/↔*_{E^@}, A), we have ψ_1 ∘ p = ψ = ψ_2 ∘ p. This is a contradiction.

It remains to show that there exists a homomorphism ψ' ∈ Hom(T_@(Σ)/≃_E, A). For this, it suffices to show that s^@ ≃_E t^@ implies ψ([s^@θ^@_g]) = ψ([t^@θ^@_g]) for any ground substitution θ^@_g, because this guarantees the existence of ψ' such that ψ' ∘ p = ψ. The proof proceeds by induction on the depth of proof trees for E ⊢_ind s = t. We prove only the cases of (Functionality) and (Extensionality).

Suppose that s = t is deduced by the (Functionality)-rule. From the induction hypothesis, ∀u^@ ∈ T_@(Σ). ψ([s^@{z := u^@}σ^@_g]) = ψ([t^@{z := u^@}σ^@_g]) for all ground substitutions σ^@_g. Hence ψ([s^@θ^@_g]) = ψ([t^@θ^@_g]) for all ground substitutions θ^@_g.

Suppose that s = t is deduced by the (Extensionality)-rule. From the induction hypothesis, for some z ∉ Var(s = t), ψ([@(s^@, z)θ^@_g]) = ψ([@(t^@, z)θ^@_g]) for all ground substitutions θ^@_g. Since ψ is a homomorphism, @_A(ψ([s^@θ^@_g]), ψ([zθ^@_g])) = ψ([@(s^@θ^@_g, zθ^@_g)]) = ψ([@(t^@θ^@_g, zθ^@_g)]) = @_A(ψ([t^@θ^@_g]), ψ([zθ^@_g])) for all ground substitutions θ^@_g. Since A is an extensional model, ψ([s^@θ^@_g]) = ψ([t^@θ^@_g]) for any ground substitution θ^@_g. □

[Figure 3: proof of Theorem 7.9 — a commuting diagram of the projection p : T_@(Σ)/↔*_{E^@} → T_@(Σ)/≃_E and the homomorphisms ψ : T_@(Σ)/↔*_{E^@} → A and ψ' : T_@(Σ)/≃_E → A]

7.4 Simply-Typed Systems

In algebraic specification, type information is very important and useful. In this last subsection, we discuss how to incorporate our previous results, based on untyped systems, into simply-typed systems. Since our higher-order equational logic is designed independently of the type structure, it is easy to combine with type systems. To be precise, we need the following simply-typed constraint for a Σ_@-algebra A:

• there exists a type attachment τ_A on A such that
  – τ_A(F_A) = τ(F) for any F ∈ Σ, and
  – τ_A(@_A(a_1, a_2)) = β if τ_A(a_1) = α→β and τ_A(a_2) = α.

We also restrict σ to range over all assignments with

• τ_A(σ(x)) = τ(x) for any x ∈ V

when defining ⊨_@ for the simply-typed case.

Under the simply-typed constraint and the restriction on assignments, all properties in this section still hold for simply-typed systems. Hence inductive theorems in simply-typed systems can be characterized by the initial extensional semantics, and the simply-typed version of Proposition 2.5 is obtained.


Theorem 7.10 Let E be a set of simply-typed equations and s = t be a simply-typed equation. We define s^@ ≃_E t^@ as E ⊢_ind s = t. Then the quotient ground term algebra T^@_τ(Σ)/≃_E is an initial algebra in the class of all extensional models of E. Moreover, the following properties are equivalent:

(1) T^@_τ(Σ)/≃_E ⊨_@ s = t
(2) E ⊢^n_pind s = t for some n
(3) E ⊢_ind s = t

This theorem shows that inductive theorems can be characterized by the initial extensional semantics.

8 Concluding Remarks

Under simply-typed systems, higher-order theories have also been studied in [19,20]. Our syntactic definition is based on the one in [19], and our Σ_@-algebra is based on the one in [20]. However, for initial algebra semantics, minimality (w.r.t. the subalgebra relation) is required in [19], and reachability is required in [20]. On the other hand, our algebras do not require any such restrictions. It is a future subject to study why this difference arises. Moreover, since our algebra is designed independently of the type structure, it is easy to combine with arbitrary type systems. It is important to incorporate not only simply-typed systems but also more complicated ones like polymorphically-typed systems.

We feel that our proving and disproving methods for primitive inductive theorems (Theorems 5.2, 5.7 and 6.3) are natural extensions of the results in [23,16]. Since these approaches apply only to the class of primitive inductive theorems, this is a critical problem for disproving inductive theorems. To overcome the difficulty, we presented a sufficient condition which guarantees that inductive theorems and primitive ones coincide (Theorem 4.17). This sufficient condition is indispensable for disproving inductive theorems (Theorem 6.3).

A higher-order Knuth-Bendix procedure and its application to inductionless induction were implemented by the first author, Kusakari, as a post-doctorate sub-theme at the Japan Advanced Institute of Science and Technology (JAIST) in 1999. Example 5.4, which presents an application to inductionless induction (Theorem 5.2), was presented at that time. Results similar to these with respect to inductionless induction were also presented in 2003 by Aoto, Yamada and Toyama [1]. Unfortunately, all of the above results confused the difference between inductive theorems and primitive inductive theorems, and erroneously used the term "inductive theorems". In this paper, we make a clear distinction between the concepts of inductive theorems and primitive inductive ones. At the request of Toyama, Kusakari presented a lecture on the results of this paper to Aoto and Yamada in August 2003. In the following year, Aoto, Yamada and Toyama presented results on the automated proving of inductive theorems [2] using a formalization different from ours.

Acknowledgments

We would like to thank TOYAMA Yoshihito for fruitful discussions, and the anonymous referees for their useful and detailed comments.

This work is partly supported by MEXT KAKENHI #15500007 and #16300005, by the Artificial Intelligence Research Promotion Foundation, and by the 21st Century COE Program (Intelligent Media Integration for Social Infrastructure), Nagoya University.

References

[1] T. Aoto, T. Yamada, Y. Toyama, Proving Inductive Theorems of Higher-Order Functional Programs, In Proc. of the Forum on Information Technology 2003 (FIT2003), Information Technology Letters, Vol.2, pp.21–22, 2003 (in Japanese).

[2] T. Aoto, T. Yamada, Y. Toyama, Inductive Theorems for Higher-Order Rewriting, In Proc. of the 15th Int. Conf. on Rewriting Techniques and Applications (RTA 2004), LNCS 3091, pp.269–284, 2004.

[3] T. Nipkow, Higher-Order Critical Pairs, In Proc. of the 6th IEEE Symp. on Logic in Computer Science, pp.342–349, IEEE Computer Society Press, 1991.

[4] F. Baader, T. Nipkow, Term Rewriting and All That, Cambridge University Press, 1998.

[5] G. Birkhoff, On the Structure of Abstract Algebras, Proc. of the Cambridge Philosophical Society, 31:433–454, 1935.

[6] A. Bouhoula, Automated Theorem Proving by Test Set Induction, Journal of Symbolic Computation, 23, pp.47–77, 1997.

[7] R.S. Boyer, J.S. Moore, A Computational Logic, Academic Press, New York, 1979.

[8] R. Diaconescu, K. Futatsugi, CafeOBJ Report, AMAST Series in Computing 6, World Scientific, 1998.

[9] J. Goguen, T. Winkler, J. Meseguer, K. Futatsugi, J.-P. Jouannaud, Introducing OBJ, Technical report, SRI International, Computer Science Laboratory, 1993.

[10] J. Goguen, Theorem Proving and Algebra, MIT Press, to appear.

[11] G. Huet, J.M. Hullot, Proofs by Induction in Equational Theories with Constructors, Journal of Computer and System Sciences, 25, pp.239–266, 1982.

[12] Y. Inagaki, T. Sakabe, Fundamentals of Algebraic Specification of Abstract Data Types (1): Many Sorted Algebras and Equational Logic, Information Processing, Vol.25, No.01, pp.47–53, 1984 (in Japanese).

[13] Y. Inagaki, T. Sakabe, Fundamentals of Algebraic Specification of Abstract Data Types (2): Abstract Data Types, Information Processing, Vol.25, No.05, pp.491–501, 1984 (in Japanese).

[14] Y. Inagaki, T. Sakabe, Fundamentals of Algebraic Specification of Abstract Data Types (3): Models of Abstract Type Constructors and Initial Algebra Semantics, Information Processing, Vol.25, No.07, pp.708–716, 1984 (in Japanese).

[15] D. Kapur, P. Narendran, H. Zhang, Automating Inductionless Induction Using Test Sets, Journal of Symbolic Computation, Vol.11, pp.83–111, 1991.

[16] H. Koike, Y. Toyama, Comparison between Inductionless Induction and Rewriting Induction, JSSST Computer Software, Vol.17, No.6, pp.1–12, 2000 (in Japanese).

[17] K. Kusakari, On Proving Termination of Term Rewriting Systems with Higher-Order Variables, IPSJ Transactions on Programming, Vol.42, No.SIG 7 (PRO 11), pp.35–45, 2001.

[18] K. Kusakari, Higher-Order Path Orders based on Computability, IEICE Transactions on Information and Systems, Vol.E87-D, No.2, pp.352–359, 2004.

[19] K. Meinke, Proof Theory of Higher-Order Equations: Conservativity, Normal Forms and Term Rewriting, Journal of Computer and System Sciences, Vol.67, pp.127–173, 2003.

[20] B. Möller, A. Tarlecki, M. Wirsing, Algebraic Specifications of Reachable Higher-Order Algebras, In Proc. of the 5th Workshop on Specification of Abstract Data Types, LNCS 332, pp.154–169, 1987.

[21] D.R. Musser, On Proving Induction Properties of Abstract Data Types, In Proc. of the 7th ACM Symp. on Principles of Programming Languages, pp.154–162, 1980.

[22] U.S. Reddy, Term Rewriting Induction, In Proc. of the 10th International Conference on Automated Deduction (CADE 90), LNAI 449, pp.162–177, 1990.

[23] Y. Toyama, How to Prove Equivalence of Term Rewriting Systems Without Induction, Theoretical Computer Science, 90(2), pp.369–390, 1991.

[24] M. Wirsing, Algebraic Specification, In Handbook of Theoretical Computer Science, Vol.B: Formal Models and Semantics, chapter 13, pp.676–788, Elsevier Science, 1990.

