
Detailed examples of tactics

This chapter presents detailed examples of certain tactics, to illustrate their behavior.

dependent induction

The tactics dependent induction and dependent destruction are another solution for inverting inductive predicate instances and potentially doing induction at the same time. They are based on the BasicElim tactic of Conor McBride, which works by abstracting each argument of an inductive instance by a variable and constraining it by equalities afterwards. This way, the usual induction and destruct tactics can be applied to the abstracted instance, and after simplification of the equalities we get the expected goals.

The abstracting tactic is called generalize_eqs and it takes as argument a hypothesis to generalize. It uses the JMeq datatype defined in Coq.Logic.JMeq, hence we need to require it first. For example, revisiting the first example of the inversion documentation:

Require Import Coq.Logic.JMeq.
Inductive Le : nat -> nat -> Set :=      | LeO : forall n:nat, Le 0 n      | LeS : forall n m:nat, Le n m -> Le (S n) (S m).
Le is defined Le_rect is defined Le_ind is defined Le_rec is defined
Variable P : nat -> nat -> Prop.
Toplevel input, characters 0-32: > Variable P : nat -> nat -> Prop. > ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Warning: P is declared as a local axiom [local-declaration,scope] P is declared
Goal forall n m:nat, Le (S n) m -> P n m.
1 subgoal ============================ forall n m : nat, Le (S n) m -> P n m
intros n m H.
1 subgoal n, m : nat H : Le (S n) m ============================ P n m
generalize_eqs H.
1 subgoal n, m, gen_x : nat H : Le gen_x m ============================ gen_x = S n -> P n m

The index S n gets abstracted by a variable here, but a corresponding equality is added under the abstracted instance so that no information is actually lost. The goal is now almost amenable to induction or case analysis. One should indeed first move n into the goal to strengthen it before doing induction, otherwise n will be fixed in the induction hypotheses (this does not matter for case analysis). As a rule of thumb, all the variables that appear inside constructors in the indices of the hypothesis should be generalized. This is exactly what the generalize_eqs_vars variant does:

generalize_eqs_vars H.
induction H.
2 subgoals n, n0 : nat ============================ 0 = S n -> P n n0 subgoal 2 is: S n0 = S n -> P n (S m)

As the hypothesis itself did not appear in the goal, we did not need to use a heterogeneous equality to relate the new hypothesis to the old one (which simply disappeared here). However, the tactic works just as well in this case, e.g.:

Abort.
Variable Q : forall (n m : nat), Le n m -> Prop.
Toplevel input, characters 0-48: > Variable Q : forall (n m : nat), Le n m -> Prop. > ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Warning: Q is declared as a local axiom [local-declaration,scope] Q is declared
Goal forall n m (p : Le (S n) m), Q (S n) m p.
1 subgoal ============================ forall (n m : nat) (p : Le (S n) m), Q (S n) m p
intros n m p.
1 subgoal n, m : nat p : Le (S n) m ============================ Q (S n) m p
generalize_eqs_vars p.
1 subgoal m, gen_x : nat p : Le gen_x m ============================ forall (n : nat) (p0 : Le (S n) m), gen_x = S n -> JMeq p p0 -> Q (S n) m p0

One drawback of this approach is that in the branches one will have to substitute the equalities back into the instance to get the right assumptions. Sometimes injection of constructors will also be needed to recover the necessary equalities. Also, some subgoals should be directly solved because of inconsistent contexts arising from the constraints on indices. The nice thing is that we can make a tactic based on discriminate, injection and variants of substitution to automatically do such simplifications (which may involve the axiom K). This is what the simplify_dep_elim tactic from Coq.Program.Equality does. For example, we might simplify the previous goals considerably:

Require Import Coq.Program.Equality.
induction p ; simplify_dep_elim.
1 subgoal n, m : nat p : Le n m IHp : forall (n0 : nat) (p0 : Le (S n0) m), n = S n0 -> p ~= p0 -> Q (S n0) m p0 ============================ Q (S n) (S m) (LeS n m p)

The higher-order tactic do_depind defined in Coq.Program.Equality takes a tactic and combines the building blocks we have seen with it: generalizing by equalities, calling the given tactic with the generalized induction hypothesis as argument, and cleaning the subgoals with respect to equalities. Its most important instantiations are dependent induction and dependent destruction, which do induction or simply case analysis on the generalized hypothesis. For example we can redo what we’ve done manually with dependent destruction:

Abort.
Lemma ex : forall n m:nat, Le (S n) m -> P n m.
1 subgoal ============================ forall n m : nat, Le (S n) m -> P n m
intros n m H.
1 subgoal n, m : nat H : Le (S n) m ============================ P n m
dependent destruction H.
1 subgoal n, m : nat H : Le n m ============================ P n (S m)

This gives essentially the same result as inversion. Now if the destructed hypothesis actually appeared in the goal, the tactic would still be able to invert it, contrary to dependent inversion. Consider the following example on vectors:

Abort.
Set Implicit Arguments.
Variable A : Set.
Toplevel input, characters 0-17: > Variable A : Set. > ^^^^^^^^^^^^^^^^^ Warning: A is declared as a local axiom [local-declaration,scope] A is declared
Inductive vector : nat -> Type :=          | vnil : vector 0          | vcons : A -> forall n, vector n -> vector (S n).
vector is defined vector_rect is defined vector_ind is defined vector_rec is defined
Goal forall n, forall v : vector (S n),          exists v' : vector n, exists a : A, v = vcons a v'.
1 subgoal ============================ forall (n : nat) (v : vector (S n)), exists (v' : vector n) (a : A), v = vcons a v'
intros n v.
1 subgoal n : nat v : vector (S n) ============================ exists (v' : vector n) (a : A), v = vcons a v'
dependent destruction v.
1 subgoal n : nat a : A v : vector n ============================ exists (v' : vector n) (a0 : A), vcons a v = vcons a0 v'

In this case, the v variable can be replaced in the goal by the generalized hypothesis only when it has a type of the form vector (S n), that is only in the second case of the destruct. The first one is dismissed because S n <> 0.
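The remaining subgoal can then be closed directly; a possible continuation (not part of the manual's transcript) simply provides the witnesses exposed by dependent destruction:

exists v. exists a. reflexivity.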

A larger example

Let’s see how the technique works with induction on inductive predicates on a real example. We will develop an example application to the theory of simply-typed lambda-calculus formalized in a dependently-typed style:

Inductive type : Type :=          | base : type          | arrow : type -> type -> type.
type is defined type_rect is defined type_ind is defined type_rec is defined
Notation " t --> t' " := (arrow t t') (at level 20, t' at next level).
Inductive ctx : Type :=          | empty : ctx          | snoc : ctx -> type -> ctx.
ctx is defined ctx_rect is defined ctx_ind is defined ctx_rec is defined
Notation " G , tau " := (snoc G tau) (at level 20, tau at next level).
Fixpoint conc (G D : ctx) : ctx :=          match D with          | empty => G          | snoc D' x => snoc (conc G D') x          end.
conc is defined conc is recursively defined (decreasing on 2nd argument)
Notation " G ; D " := (conc G D) (at level 20).
Inductive term : ctx -> type -> Type :=          | ax : forall G tau, term (G, tau) tau          | weak : forall G tau,                     term G tau -> forall tau', term (G, tau') tau          | abs : forall G tau tau',                    term (G , tau) tau' -> term G (tau --> tau')          | app : forall G tau tau',                    term G (tau --> tau') -> term G tau -> term G tau'.
term is defined term_rect is defined term_ind is defined term_rec is defined

We have defined types and contexts, which are snoc-lists of types. We also have a conc operation that concatenates two contexts. The term datatype in fact represents the possible typing derivations of the calculus, which are isomorphic to the well-typed terms, hence the name; a concrete example term is given right after the list. A term is an application of one of the following rules:

  • the axiom rule to type a reference to the first variable in a context
  • the weakening rule to type an object in a larger context
  • the abstraction or lambda rule to type a function
  • the application rule to type an application of a function to an argument
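
For instance, the following sanity check (a hypothetical example, not part of the development) builds the derivation of the identity function fun x : base => x in the empty context; the @ prefix makes all arguments explicit, regardless of the implicit argument settings in effect:

Check (@abs empty base base (@ax empty base) : term empty (base --> base)).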

Once we have this datatype we want to do proofs on it, like weakening:

Lemma weakening : forall G D tau, term (G ; D) tau ->                   forall tau', term (G , tau' ; D) tau.
1 subgoal ============================ forall (G D : ctx) (tau : type), term (G; D) tau -> forall tau' : type, term ((G, tau'); D) tau
Abort.

The problem here is that we can’t just use induction on the typing derivation because it will forget about the G ; D constraint appearing in the instance. A solution would be to rewrite the goal as:

Lemma weakening' : forall G' tau, term G' tau ->                    forall G D, (G ; D) = G' ->                    forall tau', term (G, tau' ; D) tau.
1 subgoal ============================ forall (G' : ctx) (tau : type), term G' tau -> forall G D : ctx, G; D = G' -> forall tau' : type, term ((G, tau'); D) tau
Abort.

With this proper separation of the index from the instance and the right induction loading (putting G and D after the inducted-on hypothesis), the proof will go through, but it is a very tedious process. One is also forced to make a wrapper lemma to get back the more natural statement. The dependent induction tactic alleviates this trouble by doing all of this plumbing of generalizing and substituting back automatically. Indeed we can simply write:

Require Import Coq.Program.Tactics.
Require Import Coq.Program.Equality.
Lemma weakening : forall G D tau, term (G ; D) tau ->                   forall tau', term (G , tau' ; D) tau.
1 subgoal ============================ forall (G D : ctx) (tau : type), term (G; D) tau -> forall tau' : type, term ((G, tau'); D) tau
Proof with simpl in * ; simpl_depind ; auto.
intros G D tau H.
1 subgoal G, D : ctx tau : type H : term (G; D) tau ============================ forall tau' : type, term ((G, tau'); D) tau
dependent induction H generalizing G D ; intros.
4 subgoals G0 : ctx tau : type G, D : ctx x : G0, tau = G; D tau' : type ============================ term ((G, tau'); D) tau subgoal 2 is: term ((G, tau'0); D) tau subgoal 3 is: term ((G, tau'0); D) (tau --> tau') subgoal 4 is: term ((G, tau'0); D) tau'

This call to dependent induction has an additional argument, which is a list of variables appearing in the instance that should be generalized in the goal, so that they can vary in the induction hypotheses. By default, all variables appearing inside constructors (except in a parameter position) of the instantiated hypothesis will be generalized automatically, but one can always give the list explicitly.

Show.
4 subgoals G0 : ctx tau : type G, D : ctx x : G0, tau = G; D tau' : type ============================ term ((G, tau'); D) tau subgoal 2 is: term ((G, tau'0); D) tau subgoal 3 is: term ((G, tau'0); D) (tau --> tau') subgoal 4 is: term ((G, tau'0); D) tau'

The simpl_depind tactic includes an automatic tactic that tries to simplify equalities appearing at the beginning of induction hypotheses, generally using trivial applications of reflexivity. In cases where the equality is not between constructor forms, though, one must help the automation by giving some arguments, using the specialize tactic for example.

destruct D... apply weak; apply ax.
5 subgoals G0 : ctx tau, tau' : type ============================ term ((G0, tau), tau') tau subgoal 2 is: term (((G, tau'); D), t) t subgoal 3 is: term ((G, tau'0); D) tau subgoal 4 is: term ((G, tau'0); D) (tau --> tau') subgoal 5 is: term ((G, tau'0); D) tau' 4 subgoals G, D : ctx t, tau' : type ============================ term (((G, tau'); D), t) t subgoal 2 is: term ((G, tau'0); D) tau subgoal 3 is: term ((G, tau'0); D) (tau --> tau') subgoal 4 is: term ((G, tau'0); D) tau'
apply ax.
3 subgoals G0 : ctx tau : type H : term G0 tau tau' : type IHterm : forall G D : ctx, G0 = G; D -> forall tau' : type, term ((G, tau'); D) tau G, D : ctx x : G0, tau' = G; D tau'0 : type ============================ term ((G, tau'0); D) tau subgoal 2 is: term ((G, tau'0); D) (tau --> tau') subgoal 3 is: term ((G, tau'0); D) tau'
destruct D...
4 subgoals G0 : ctx tau : type H : term G0 tau tau' : type IHterm : forall G D : ctx, G0 = G; D -> forall tau' : type, term ((G, tau'); D) tau tau'0 : type ============================ term ((G0, tau'), tau'0) tau subgoal 2 is: term (((G, tau'0); D), t) tau subgoal 3 is: term ((G, tau'0); D) (tau --> tau') subgoal 4 is: term ((G, tau'0); D) tau'
Show.
4 subgoals G0 : ctx tau : type H : term G0 tau tau' : type IHterm : forall G D : ctx, G0 = G; D -> forall tau' : type, term ((G, tau'); D) tau tau'0 : type ============================ term ((G0, tau'), tau'0) tau subgoal 2 is: term (((G, tau'0); D), t) tau subgoal 3 is: term ((G, tau'0); D) (tau --> tau') subgoal 4 is: term ((G, tau'0); D) tau'
specialize (IHterm G0 empty eq_refl).
4 subgoals G0 : ctx tau : type H : term G0 tau tau' : type IHterm : forall tau' : type, term ((G0, tau'); empty) tau tau'0 : type ============================ term ((G0, tau'), tau'0) tau subgoal 2 is: term (((G, tau'0); D), t) tau subgoal 3 is: term ((G, tau'0); D) (tau --> tau') subgoal 4 is: term ((G, tau'0); D) tau'

Once the induction hypothesis has been narrowed to the right equality, it can be used directly.

apply weak, IHterm.
3 subgoals tau : type G, D : ctx IHterm : forall G0 D0 : ctx, G; D = G0; D0 -> forall tau' : type, term ((G0, tau'); D0) tau H : term (G; D) tau t, tau'0 : type ============================ term (((G, tau'0); D), t) tau subgoal 2 is: term ((G, tau'0); D) (tau --> tau') subgoal 3 is: term ((G, tau'0); D) tau'

Now concluding this subgoal is easy.

constructor; apply IHterm; reflexivity.
2 subgoals G, D : ctx tau, tau' : type H : term ((G; D), tau) tau' IHterm : forall G0 D0 : ctx, (G; D), tau = G0; D0 -> forall tau'0 : type, term ((G0, tau'0); D0) tau' tau'0 : type ============================ term ((G, tau'0); D) (tau --> tau') subgoal 2 is: term ((G, tau'0); D) tau'

See also

The induction, case, and inversion tactics.

autorewrite

Here are two examples of autorewrite use. The first one (the Ackermann function) shows a quite basic use where there is no conditional rewriting. The second one (the McCarthy function) involves conditional rewriting and shows how to deal with it using the optional tactic argument of the Hint Rewrite command.

Example: Ackermann function

Require Import Arith.
[Loading ML file quote_plugin.cmxs ... done] [Loading ML file newring_plugin.cmxs ... done]
Variable Ack : nat -> nat -> nat.
Toplevel input, characters 0-33: > Variable Ack : nat -> nat -> nat. > ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Warning: Ack is declared as a local axiom [local-declaration,scope] Ack is declared
Axiom Ack0 : forall m:nat, Ack 0 m = S m.
Ack0 is declared
Axiom Ack1 : forall n:nat, Ack (S n) 0 = Ack n 1.
Ack1 is declared
Axiom Ack2 : forall n m:nat, Ack (S n) (S m) = Ack n (Ack (S n) m).
Ack2 is declared
Hint Rewrite Ack0 Ack1 Ack2 : base0.
Lemma ResAck0 : Ack 3 2 = 29.
1 subgoal ============================ Ack 3 2 = 29
autorewrite with base0 using try reflexivity.
No more subgoals.

Example: McCarthy function

Require Import Omega.
[Loading ML file omega_plugin.cmxs ... done]
Variable g : nat -> nat -> nat.
Toplevel input, characters 0-31: > Variable g : nat -> nat -> nat. > ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Warning: g is declared as a local axiom [local-declaration,scope] g is declared
Axiom g0 : forall m:nat, g 0 m = m.
g0 is declared
Axiom g1 : forall n m:nat, (n > 0) -> (m > 100) -> g n m = g (pred n) (m - 10).
g1 is declared
Axiom g2 : forall n m:nat, (n > 0) -> (m <= 100) -> g n m = g (S n) (m + 11).
g2 is declared
Hint Rewrite g0 g1 g2 using omega : base1.
Lemma Resg0 : g 1 110 = 100.
1 subgoal ============================ g 1 110 = 100
Show.
1 subgoal ============================ g 1 110 = 100
autorewrite with base1 using reflexivity || simpl.
No more subgoals.
Qed.
Resg0 is defined
Lemma Resg1 : g 1 95 = 91.
1 subgoal ============================ g 1 95 = 91
autorewrite with base1 using reflexivity || simpl.
No more subgoals.
Qed.
Resg1 is defined

quote

The quote tactic allows using Barendregt’s so-called 2-level approach without writing any ML code. Suppose you have a language L of 'abstract terms' and a type A of 'concrete terms', and a function f : L -> A. If L is a simple inductive datatype and f a simple fixpoint, quote f will replace the head of the current goal by a convertible term of the form (f t). L must have a constructor of type A -> L.

Here is an example:

Require Import Quote.
Parameters A B C : Prop.
A is declared B is declared C is declared
Inductive formula : Type :=          | f_and : formula -> formula -> formula          | f_or : formula -> formula -> formula          | f_not : formula -> formula          | f_true : formula          | f_const : Prop -> formula .
formula is defined formula_rect is defined formula_ind is defined formula_rec is defined
Fixpoint interp_f (f:formula) : Prop :=          match f with          | f_and f1 f2 => interp_f f1 /\ interp_f f2          | f_or f1 f2 => interp_f f1 \/ interp_f f2          | f_not f1 => ~ interp_f f1          | f_true => True          | f_const c => c          end.
interp_f is defined interp_f is recursively defined (decreasing on 1st argument)
Goal A /\ (A \/ True) /\ ~ B /\ (A <-> A).
1 subgoal ============================ A /\ (A \/ True) /\ ~ B /\ (A <-> A)
quote interp_f.
1 subgoal ============================ interp_f (f_and (f_const A) (f_and (f_or (f_const A) f_true) (f_and (f_not (f_const B)) (f_const (A <-> A)))))

The algorithm to perform this inversion is: try to match the term against the right-hand side expressions of f. If there is a match, apply the corresponding left-hand side and call yourself recursively on the subterms. If there is no match, we are at a leaf: return the corresponding constructor (here f_const) applied to the term.

When quote is not able to perform inversion properly, it will error out with quote: not a simple fixpoint.

Introducing variables map

The normal use of quote is to make proofs by reflection: one defines a function simplify : formula -> formula and proves a theorem simplify_ok : forall f : formula, interp_f (simplify f) -> interp_f f. Then, one can simplify formulas by doing:

quote interp_f.
1 subgoal ============================ interp_f (f_const (interp_f (f_and (f_const A) (f_and (f_or (f_const A) f_true) (f_and (f_not (f_const B)) (f_const (A <-> A)))))))
apply simplify_ok.
Toplevel input, characters 6-17: > apply simplify_ok. > ^^^^^^^^^^^ Error: The reference simplify_ok was not found in the current environment.
compute.
1 subgoal ============================ A /\ (A \/ True) /\ (B -> False) /\ (A -> A) /\ (A -> A)
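
The apply step above fails only because this chapter never actually defines simplify. Stepping outside the proof, a minimal sketch of such a simplifier (a hypothetical example, limited to removing trivial f_true conjuncts, which is possible even with Prop leaves) could look like this:

Fixpoint simplify (f:formula) : formula :=
  match f with
  | f_and f1 f2 =>
      match simplify f1, simplify f2 with
      | f_true, f2' => f2'
      | f1', f_true => f1'
      | f1', f2' => f_and f1' f2'
      end
  | f_or f1 f2 => f_or (simplify f1) (simplify f2)
  | f_not f1 => f_not (simplify f1)
  | f_true => f_true
  | f_const c => f_const c
  end.

Lemma simplify_ok : forall f:formula, interp_f (simplify f) -> interp_f f.
Proof.
  (* By induction on f; the proof is omitted in this sketch. *)
Admitted.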

But there is a problem with the leaves: in the example above one cannot write a function that implements, for example, the logical simplifications \(A \wedge A \rightarrow A\) or \(A \wedge \lnot A \rightarrow \mathrm{False}\). This is because Prop is impredicative.

It is better to use the following type of formulas:

Require Import Quote.
Parameters A B C : Prop.
A is declared B is declared C is declared
Inductive formula : Set :=          | f_and : formula -> formula -> formula          | f_or : formula -> formula -> formula          | f_not : formula -> formula          | f_true : formula          | f_atom : index -> formula.
formula is defined formula_rect is defined formula_ind is defined formula_rec is defined

index is defined in module Quote. Equality on that type is decidable, so we are able to simplify \(A \wedge A\) into \(A\) at the abstract level.
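
For instance, assuming the boolean test index_eq provided by the Quote module, one can write a (hypothetical) simplification step that merges duplicated atoms in a conjunction; this is precisely the kind of test that was impossible on f_const leaves of type Prop:

Definition simplify_and (f:formula) : formula :=
  match f with
  | f_and (f_atom i) (f_atom j) => if index_eq i j then f_atom i else f
  | _ => f
  end.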

When there are variables, there are bindings. quote also provides a type (varmap A) of bindings from index to any set A, and a function varmap_find to search in such maps. The interpretation function thus takes an additional argument, a variable map:

Fixpoint interp_f (vm:varmap Prop) (f:formula) {struct f} : Prop :=          match f with          | f_and f1 f2 => interp_f vm f1 /\ interp_f vm f2          | f_or f1 f2 => interp_f vm f1 \/ interp_f vm f2          | f_not f1 => ~ interp_f vm f1          | f_true => True          | f_atom i => varmap_find True i vm          end.
interp_f is defined interp_f is recursively defined (decreasing on 2nd argument)

quote handles this second case properly:

Goal A /\ (B \/ A) /\ (A \/ ~ B).
1 subgoal ============================ A /\ (B \/ A) /\ (A \/ ~ B)
quote interp_f.
1 subgoal ============================ interp_f (Node_vm B (Node_vm A (Empty_vm Prop) (Empty_vm Prop)) (Empty_vm Prop)) (f_and (f_atom (Left_idx End_idx)) (f_and (f_or (f_atom End_idx) (f_atom (Left_idx End_idx))) (f_or (f_atom (Left_idx End_idx)) (f_not (f_atom End_idx)))))

It builds vm and t such that (interp_f vm t) is convertible with the conclusion of the current goal.
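
As a (hypothetical) sanity check, one can look up an atom in the map built above, using varmap_find with the same argument order as in interp_f; the root of the map is addressed by End_idx, so this should reduce to B:

Eval compute in
  varmap_find True End_idx
    (Node_vm B (Node_vm A (Empty_vm Prop) (Empty_vm Prop)) (Empty_vm Prop)).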

Combining variables and constants

One can have both variables and constants in abstract terms; for example, this is the case for the ring tactic. Then one must provide quote with a list of constructors of constants. For example, if the list is [O S] then closed natural numbers will be considered as constants and other terms as variables.

Require Import Quote.
Parameters A B C : Prop.
A is declared B is declared C is declared
Inductive formula : Type :=          | f_and : formula -> formula -> formula          | f_or : formula -> formula -> formula          | f_not : formula -> formula          | f_true : formula          | f_const : Prop -> formula          | f_atom : index -> formula.
formula is defined formula_rect is defined formula_ind is defined formula_rec is defined
Fixpoint interp_f (vm:varmap Prop) (f:formula) {struct f} : Prop :=          match f with          | f_and f1 f2 => interp_f vm f1 /\ interp_f vm f2          | f_or f1 f2 => interp_f vm f1 \/ interp_f vm f2          | f_not f1 => ~ interp_f vm f1          | f_true => True          | f_const c => c          | f_atom i => varmap_find True i vm          end.
interp_f is defined interp_f is recursively defined (decreasing on 2nd argument)
Goal A /\ (A \/ True) /\ ~ B /\ (C <-> C).
1 subgoal ============================ A /\ (A \/ True) /\ ~ B /\ (C <-> C)
quote interp_f [ A B ].
1 subgoal ============================ interp_f (Node_vm (C <-> C) (Empty_vm Prop) (Empty_vm Prop)) (f_and (f_const A) (f_and (f_or (f_const A) f_true) (f_and (f_not (f_const B)) (f_atom End_idx))))
Undo.
1 subgoal ============================ A /\ (A \/ True) /\ ~ B /\ (C <-> C)
quote interp_f [ B C iff ].
1 subgoal ============================ interp_f (Node_vm A (Empty_vm Prop) (Empty_vm Prop)) (f_and (f_atom End_idx) (f_and (f_or (f_atom End_idx) f_true) (f_and (f_not (f_const B)) (f_const (C <-> C)))))

Warning

Since functional inversion is undecidable in the general case, don’t expect miracles from it!

Variant quote ident in term using tactic

tactic must be a functional tactic (starting with fun x =>) and will be called with the quoted version of term according to ident.

Variant quote ident [ident+] in term using tactic

Same as above, but will use the additional ident list to choose which subterms are constants (see above).

See also

Comments from the source file plugins/quote/quote.ml

See also

The ring tactic.

Using the tactic language

About the cardinality of the set of natural numbers

The first example, which shows how to use pattern matching over the proof context, is a proof that the natural numbers have more than two elements. This can be done as follows:

Lemma card_nat :   ~ exists x : nat, exists y : nat, forall z:nat, x = z \/ y = z.
1 subgoal ============================ ~ (exists x y : nat, forall z : nat, x = z \/ y = z)
Proof.
red; intros (x, (y, Hy)).
1 subgoal x, y : nat Hy : forall z : nat, x = z \/ y = z ============================ False
elim (Hy 0); elim (Hy 1); elim (Hy 2); intros; match goal with     | _ : ?a = ?b, _ : ?a = ?c |- _ =>         cut (b = c); [ discriminate | transitivity a; auto ] end.
No more subgoals.
Qed.
card_nat is defined

Notice that all the (very similar) cases coming from the three eliminations (with three distinct natural numbers) are successfully solved by a single match goal structure and, in particular, with only one pattern (thanks to non-linear matching).

Permutations of lists

A more complex example is the problem of permutations of lists. The aim is to show that a list is a permutation of another list.

Section Sort.
Variable A : Set.
A is declared
Inductive perm : list A -> list A -> Prop :=     | perm_refl : forall l, perm l l     | perm_cons : forall a l0 l1, perm l0 l1 -> perm (a :: l0) (a :: l1)     | perm_append : forall a l, perm (a :: l) (l ++ a :: nil)     | perm_trans : forall l0 l1 l2, perm l0 l1 -> perm l1 l2 -> perm l0 l2.
perm is defined perm_ind is defined
End Sort.

First, we define the permutation predicate as shown above.

Require Import List.
Ltac perm_aux n := match goal with     | |- (perm _ ?l ?l) => apply perm_refl     | |- (perm _ (?a :: ?l1) (?a :: ?l2)) =>         let newn := eval compute in (length l1) in             (apply perm_cons; perm_aux newn)     | |- (perm ?A (?a :: ?l1) ?l2) =>         match eval compute in n with             | 1 => fail             | _ =>                 let l1' := constr:(l1 ++ a :: nil) in                     (apply (perm_trans A (a :: l1) l1' l2);                     [ apply perm_append | compute; perm_aux (pred n) ])         end end.
perm_aux is defined

Next we define an auxiliary tactic perm_aux that takes an argument used to control the recursion depth. This tactic behaves as follows. If the lists are identical (i.e. convertible), it concludes. Otherwise, if the lists have identical heads, it proceeds to look at their tails. Finally, if the lists have different heads, it rotates the first list by putting its head at the end, provided the new head has not been the head previously. To check this, we keep track of the number of performed rotations using the argument n, decrementing it each time we perform a rotation. This works because for a list of length n we can make exactly n - 1 rotations to generate at most n distinct lists. Notice that we use Coq's natural numbers for the rotation counter. From the Syntax section we know that it is possible to use the usual natural numbers, but they can only be passed as arguments to primitive tactics and cannot otherwise be manipulated; in particular, we cannot compute with them. Thus the natural choice is to use Coq data structures, so that Coq performs the computations (reductions) via eval compute in and we can get the terms back by match.
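
As a standalone illustration of this idiom (a hypothetical helper, not part of the development), the following throwaway tactic computes the length of a list with eval compute in and reports the resulting Coq term:

Ltac show_length l :=
  let n := eval compute in (length l) in
  idtac "length =" n.

Running show_length (1 :: 2 :: 3 :: nil) inside any proof prints length = 3; perm_aux above and solve_perm below rely on the same mechanism to drive their recursion.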

Ltac solve_perm := match goal with     | |- (perm _ ?l1 ?l2) =>         match eval compute in (length l1 = length l2) with             | (?n = ?n) => perm_aux n         end end.
solve_perm is defined

The main tactic is solve_perm. It computes the lengths of the two lists and uses them as arguments to call perm_aux if the lengths are equal (if they aren't, the lists cannot be permutations of each other). Using this tactic we can now prove lemmas as follows:

Lemma solve_perm_ex1 :   perm nat (1 :: 2 :: 3 :: nil) (3 :: 2 :: 1 :: nil).
1 subgoal ============================ perm nat (1 :: 2 :: 3 :: nil) (3 :: 2 :: 1 :: nil)
Proof.
solve_perm.
No more subgoals.
Qed.
solve_perm_ex1 is defined
Lemma solve_perm_ex2 :   perm nat     (0 :: 1 :: 2 :: 3 :: 4 :: 5 :: 6 :: 7 :: 8 :: 9 :: nil)       (0 :: 2 :: 4 :: 6 :: 8 :: 9 :: 7 :: 5 :: 3 :: 1 :: nil).
1 subgoal ============================ perm nat (0 :: 1 :: 2 :: 3 :: 4 :: 5 :: 6 :: 7 :: 8 :: 9 :: nil) (0 :: 2 :: 4 :: 6 :: 8 :: 9 :: 7 :: 5 :: 3 :: 1 :: nil)
Proof.
solve_perm.
No more subgoals.
Qed.
solve_perm_ex2 is defined

Deciding intuitionistic propositional logic

Pattern matching on goals allows powerful backtracking when returning tactic values. An interesting application is the problem of deciding intuitionistic propositional logic. Considering the contraction-free sequent calculi LJT* of Roy Dyckhoff [Dyc92], it is quite natural to code such a tactic using the tactic language, as shown below.

Ltac basic := match goal with     | |- True => trivial     | _ : False |- _ => contradiction     | _ : ?A |- ?A => assumption end.
basic is defined
Ltac simplify := repeat (intros;     match goal with         | H : ~ _ |- _ => red in H         | H : _ /\ _ |- _ =>             elim H; do 2 intro; clear H         | H : _ \/ _ |- _ =>             elim H; intro; clear H         | H : ?A /\ ?B -> ?C |- _ =>             cut (A -> B -> C);                 [ intro | intros; apply H; split; assumption ]         | H: ?A \/ ?B -> ?C |- _ =>             cut (B -> C);                 [ cut (A -> C);                     [ intros; clear H                     | intro; apply H; left; assumption ]                 | intro; apply H; right; assumption ]         | H0 : ?A -> ?B, H1 : ?A |- _ =>             cut B; [ intro; clear H0 | apply H0; assumption ]         | |- _ /\ _ => split         | |- ~ _ => red     end).
simplify is defined
Ltac my_tauto :=   simplify; basic ||   match goal with       | H : (?A -> ?B) -> ?C |- _ =>           cut (B -> C);               [ intro; cut (A -> B);                   [ intro; cut C;                       [ intro; clear H | apply H; assumption ]                   | clear H ]               | intro; apply H; intro; assumption ]; my_tauto       | H : ~ ?A -> ?B |- _ =>           cut (False -> B);               [ intro; cut (A -> False);                   [ intro; cut B;                       [ intro; clear H | apply H; assumption ]                   | clear H ]               | intro; apply H; red; intro; assumption ]; my_tauto       | |- _ \/ _ => (left; my_tauto) || (right; my_tauto)   end.
my_tauto is defined

The basic tactic tries to reason using simple rules involving truth, falsity and available assumptions. The simplify tactic applies all the reversible rules of Dyckhoff’s system. Finally, my_tauto (the main tactic to be called) simplifies with simplify, tries to conclude with basic, and explores several paths using the backtracking rules (one of Dyckhoff’s four rules for the left implication, introduced to get rid of contraction, and the right or rule).

Having defined my_tauto, we can prove tautologies like these:

Lemma my_tauto_ex1 :   forall A B : Prop, A /\ B -> A \/ B.
1 subgoal ============================ forall A B : Prop, A /\ B -> A \/ B
Proof.
my_tauto.
No more subgoals.
Qed.
my_tauto_ex1 is defined
Lemma my_tauto_ex2 :   forall A B : Prop, (~ ~ B -> B) -> (A -> B) -> ~ ~ A -> B.
1 subgoal ============================ forall A B : Prop, (~ ~ B -> B) -> (A -> B) -> ~ ~ A -> B
Proof.
my_tauto.
No more subgoals.
Qed.
my_tauto_ex2 is defined

Deciding type isomorphisms

A trickier problem is to decide equality between types modulo isomorphism. Here, we choose to use the isomorphisms of the simply typed λ-calculus with Cartesian product and unit type (see, for example, [dC95]). The axioms of this λ-calculus are given below.

Open Scope type_scope.
Section Iso_axioms.
Variables A B C : Set.
A is declared B is declared C is declared
Axiom Com : A * B = B * A.
Com is declared
Axiom Ass : A * (B * C) = A * B * C.
Ass is declared
Axiom Cur : (A * B -> C) = (A -> B -> C).
Cur is declared
Axiom Dis : (A -> B * C) = (A -> B) * (A -> C).
Dis is declared
Axiom P_unit : A * unit = A.
P_unit is declared
Axiom AR_unit : (A -> unit) = unit.
AR_unit is declared
Axiom AL_unit : (unit -> A) = A.
AL_unit is declared
Lemma Cons : B = C -> A * B = A * C.
1 subgoal A, B, C : Set ============================ B = C -> A * B = A * C
Proof.
intro Heq; rewrite Heq; reflexivity.
No more subgoals.
Qed.
Cons is defined
End Iso_axioms.
Ltac simplify_type ty := match ty with     | ?A * ?B * ?C =>         rewrite <- (Ass A B C); try simplify_type_eq     | ?A * ?B -> ?C =>         rewrite (Cur A B C); try simplify_type_eq     | ?A -> ?B * ?C =>         rewrite (Dis A B C); try simplify_type_eq     | ?A * unit =>         rewrite (P_unit A); try simplify_type_eq     | unit * ?B =>         rewrite (Com unit B); try simplify_type_eq     | ?A -> unit =>         rewrite (AR_unit A); try simplify_type_eq     | unit -> ?B =>         rewrite (AL_unit B); try simplify_type_eq     | ?A * ?B =>         (simplify_type A; try simplify_type_eq) ||         (simplify_type B; try simplify_type_eq)     | ?A -> ?B =>         (simplify_type A; try simplify_type_eq) ||         (simplify_type B; try simplify_type_eq) end with simplify_type_eq := match goal with     | |- ?A = ?B => try simplify_type A; try simplify_type B end.
simplify_type is defined simplify_type_eq is defined
Ltac len trm := match trm with     | _ * ?B => let succ := len B in constr:(S succ)     | _ => constr:(1) end.
len is defined
Ltac assoc := repeat rewrite <- Ass.
assoc is defined
Ltac solve_type_eq n := match goal with     | |- ?A = ?A => reflexivity     | |- ?A * ?B = ?A * ?C =>         apply Cons; let newn := len B in solve_type_eq newn     | |- ?A * ?B = ?C =>         match eval compute in n with             | 1 => fail             | _ =>                 pattern (A * B) at 1; rewrite Com; assoc; solve_type_eq (pred n)         end end.
solve_type_eq is defined
Ltac compare_structure := match goal with     | |- ?A = ?B =>         let l1 := len A         with l2 := len B in             match eval compute in (l1 = l2) with                 | ?n = ?n => solve_type_eq n             end end.
compare_structure is defined
Ltac solve_iso := simplify_type_eq; compare_structure.
solve_iso is defined

The tactic to judge equalities modulo this axiomatization is shown above. The algorithm is quite simple. First, types are simplified using the axioms that can be oriented (this is done by simplify_type and simplify_type_eq). The normal forms are sequences of Cartesian products with no Cartesian product in the left component. These normal forms are then compared modulo permutation of the components by the tactic compare_structure. If they have the same lengths, the tactic solve_type_eq attempts to prove that the types are equal. The main tactic that puts all these components together is called solve_iso.

Here are examples of what can be solved by solve_iso.

Lemma solve_iso_ex1 :   forall A B : Set, A * unit * B = B * (unit * A).
1 subgoal ============================ forall A B : Set, A * unit * B = B * (unit * A)
Proof.
intros; solve_iso.
No more subgoals.
Qed.
solve_iso_ex1 is defined
Lemma solve_iso_ex2 :   forall A B C : Set,     (A * unit -> B * (C * unit)) =     (A * unit -> (C -> unit) * C) * (unit -> A -> B).
1 subgoal ============================ forall A B C : Set, (A * unit -> B * (C * unit)) = (A * unit -> (C -> unit) * C) * (unit -> A -> B)
Proof.
intros; solve_iso.
No more subgoals.
Qed.
solve_iso_ex2 is defined