From 174948f99addaeae0455763152d19a6a9b17bf8f Mon Sep 17 00:00:00 2001
From: Tim Daly
Date: Sun, 14 Jan 2018 00:09:11 -0500
Subject: [PATCH] books/bookvol13 review Hoar87
Goal: Proving Axiom Correct

 books/bookvol13.pamphlet       | 474 ++++++++++++++++++++++++++++++
 changelog                      |   2 +
 patch                          | 446 +-
 src/axiom-website/patches.html |   2 +
 4 files changed, 356 insertions(+), 568 deletions(-)
diff --git a/books/bookvol13.pamphlet b/books/bookvol13.pamphlet
index 6d56c7e..897da74 100644
--- a/books/bookvol13.pamphlet
+++ b/books/bookvol13.pamphlet
@@ -128,7 +128,7 @@ develop program specification languages and formalisms for
systematically deriving programs from specifications. For constructive
mathematics to provide such a methodology, techniques are needed for
systematically extracting programs from constructive proofs. Early work
in this field includes that of Bishop and Constable\cite{Cons98}. What
+in this field includes that of Bishop and Constable \cite{Cons98}. What
distinguished Martin-L\"of's '82 type theory was that the method it
suggested for program synthesis was exceptionally simple: a direct
correspondence was set up between the constructs of mathematical
@@ -139,7 +139,7 @@ suggest precisely how to construct an inhabitant of the type, which
would be a term in a functional programming language. The term that
inhabits the type corresponding to a proposition is often referred to as
the {\sl computational content} of the proposition.}\\
 Paul Bernard Jackson\cite{Jack95}
+ Paul Bernard Jackson \cite{Jack95}
\end{quote}
\begin{quote}
@@ -158,7 +158,7 @@ your writing is.}\\
{\bf Type theory is nothing short of a grand unified theory of computation
unified with mathematics so ultimately there is no difference between
math and the code.}\\
 Robert Harper\cite{Harp13}
+ Robert Harper \cite{Harp13}
\end{quote}
\begin{quote}
@@ -425,7 +425,7 @@ specification with an SMT Solver allows the ability to do things like
ask what input will give the known output, for example.
Chlipala has done both machine-level \cite{Chli17a} and higher-level
\cite{Chli17} formal proofs. He claims that within 10 years it will be
+ \cite{Chli17} formal proofs. He claims that within 10 years it will be
normal to have deployed computer systems with top-to-bottom correctness
proofs which were checked algorithmically.
@@ -477,7 +477,7 @@ In fact, this is stronger than we need since
A Euclidean function on $R$ is a function $f$ from $R \setminus \{0\}$
to the nonnegative integers satisfying the following fundamental
division-with-remainder property\cite{WikiED}:
+division-with-remainder property \cite{WikiED}:
$D(a,b)$ = set of common divisors of $a$ and $b$.
@@ -488,7 +488,7 @@ $\gcd(a,b) = \max{D(a,b)}$
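The two definitions above can be executed directly. A naive Python sketch (illustration only, not Axiom code) that computes $D(a,b)$ and takes its maximum:

```python
def common_divisors(a, b):
    """D(a,b): the set of common divisors of a and b (for a, b > 0)."""
    return {t for t in range(1, min(a, b) + 1)
            if a % t == 0 and b % t == 0}

def gcd_by_definition(a, b):
    """gcd(a,b) = max D(a,b), read straight off the definition."""
    return max(common_divisors(a, b))
```

For example, $D(8,10)=\{1,2\}$ so $\gcd(8,10)=2$. This is exponentially slower than the Euclidean algorithm but serves as a reference specification to test against.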
NonNegativeInteger inherits {\tt gcd} from {\tt Integer} up the ``add chain''
since it is a subtype of {\tt Integer}. {\tt Integer} has {\tt EuclideanDomain}
as an ancestor\cite{Book103}:
+as an ancestor \cite{Book103}:
\begin{verbatim}
(1) -> getAncestors "Integer"
@@ -508,7 +508,7 @@ as an ancestor\cite{Book103}:
\end{verbatim}
From category {\tt EuclideanDomain} (EUCDOM) we find the implementation of
the Euclidean GCD algorithm\cite{Book102}:
+the Euclidean GCD algorithm \cite{Book102}:
\begin{verbatim}
gcd(x,y) ==   --Euclidean Algorithm
x:=unitCanonical x
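The Spad definition above (truncated by the hunk) follows the classical Euclidean loop. A Python sketch of the same loop, with abs standing in for unitCanonical on integers (an assumption for illustration; in a general Euclidean domain unitCanonical picks a canonical associate):

```python
def gcd_euclid(x, y):
    # unitCanonical: for integers, take the non-negative associate
    x, y = abs(x), abs(y)
    while y != 0:
        x, y = y, x % y   # replace (x, y) by (y, x rem y)
    return x
```

The loop invariant is that gcd(x, y) is unchanged by each step, since any common divisor of x and y also divides x rem y.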
@@ -595,7 +595,7 @@ with the test case results of:
\section{Mathematics}
From Buchberger\cite{Buch97},
+From Buchberger \cite{Buch97},
Define ``divides''
\[ t\vert a \Longleftrightarrow \exists u (t \cdot u = a)\]
@@ -757,7 +757,8 @@ For every pair of integers $a$, $b$ where $b \ne 0$, there exist unique
integers $q,r$ such that $a = qb + r$ and $0 \le r < \abs{b}$.
\subsection{Hoare's axioms and gcd proof}
From Hoare\cite{Hoar69}
+From Hoare \cite{Hoar69}
+
\begin{tabular}{rll}
A1 & $x+y = y+x$ & addition is commutative\\
A2 & $x\times y = y\times x$ & multiplication is commutative\\
@@ -990,10 +991,10 @@ val gcd : int -> int -> int =
\chapter{Temporal Logic of Actions (TLA)}
\begin{quote}
{\bf Sloppiness is easier than precision and rigor}
 Leslie Lamport\cite{Lamp14a}
+ Leslie Lamport \cite{Lamp14a}
\end{quote}
Leslie Lamport\cite{Lamp14}\cite{Lamp16} on $21^{st}$ Century Proofs.
+Leslie Lamport \cite{Lamp14} \cite{Lamp16} on $21^{st}$ Century Proofs.
A method of writing proofs is described that makes it harder to prove
things that are not true. The method, based on hierarchical
@@ -1281,7 +1282,7 @@ THEOREM Correctness == Spec => []ResultCorrect
\section{Divisibility Definition}
In Shoup\cite{Sho08} we find the divisibility definition.
+In Shoup \cite{Sho08} we find the divisibility definition.
Given the integers, $a$ and $b$
\[ a {\rm \ divides\ } b \Longleftrightarrow az = b {\rm \ for\ some\ } z\]
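Shoup's definition can be rendered as a predicate. A Python sketch (the zero case is spelled out because the "for some z" quantifier makes 0 divide only 0):

```python
def divides(a, b):
    """a divides b  iff  a*z == b for some integer z (Shoup's definition)."""
    if a == 0:
        return b == 0   # 0*z == b only when b == 0
    return b % a == 0
```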
@@ -1302,7 +1303,7 @@ proof: \[ a \vert b \implies a \le b; b \vert a \implies b \le a;
\chapter{COQ proof of GCD}
\section{Basics of the Calculus of Constructions}
Coquand\cite{Coqu86}\cite{Wiki17}
+Coquand \cite{Coqu86} \cite{Wiki17}
defines the Calculus of Constructions which can
be considered an extension of the Curry-Howard Isomorphism. The components
are
@@ -1360,7 +1361,7 @@ which means
\subsection{Inference Rules}
In Frade\cite{Frad08} we find:
+In Frade \cite{Frad08} we find:
\[\begin{array}{lcl}
@@ -1442,7 +1443,7 @@ extensionality and proof irrelevance.
\section{Why does COQ have Prop?}
From a stackexchange post\cite{Stac17} we find the question:
+From a stackexchange post \cite{Stac17} we find the question:
"Coq has a type {\tt Prop} of proof irrelevant propositions which are discarded
during extraction. What are the reasons for having this if we use Coq only
@@ -1506,7 +1507,7 @@ then the program will only compute the lowest bit $b$. The machine cannot
tell which is the correct one, the user has to tell it what he wants.
\section{Source code of COQ GCD Proof}
This is the proof of GCD\cite{Coqu16a} in the COQ\cite{Coqu16} sources:
+This is the proof of GCD \cite{Coqu16a} in the COQ \cite{Coqu16} sources:
\begin{verbatim}
Library Coq.ZArith.Znumtheory
@@ -1865,7 +1866,7 @@ Theorem not_prime_divide:
\end{verbatim}
\chapter{LEAN proof of GCD}
This is the proof of GCD\cite{Avig14} in the LEAN\cite{Avig16} sources:
+This is the proof of GCD \cite{Avig14} in the LEAN \cite{Avig16} sources:
\begin{verbatim}
/-
Copyright (c) 2014 Jeremy Avigad. All rights reserved.
@@ -2242,7 +2243,7 @@ end nat
\end{verbatim}
\chapter{Formal Pre and Postconditions}
In Boldo\cite{Bold11} we find an effort to verify floating point software
+In Boldo \cite{Bold11} we find an effort to verify floating point software
using preconditions, postconditions, and assertions. Quoting:
``These conjectures can be described formally by annotations as follows.
@@ -2273,7 +2274,7 @@ error: it states that the method error is less than $2^{-24}$. Again, it
is thanks to the choice of having exact operations in the annotations that
we are able to state a property of the method error.''
In Boldo\cite{Bold07,Bold07a} we find 'search in an array' annotated:
+In Boldo \cite{Bold07,Bold07a} we find 'search in an array' annotated:
\begin{verbatim}
/*@ requires \valid_range(t,0,n-1)
@ ensures
@@ -2299,7 +2300,7 @@ We need to start from a base of the existing types in Common Lisp,
eventually providing Axiom combinations or specializations.
Common Lisp has these standard type specifier symbols.
\begin{center}
{\bf Common Lisp Type Hierarchy}\cite{Pfei12}\\
+{\bf Common Lisp Type Hierarchy} \cite{Pfei12}\\
\includegraphics[scale=0.5]{ps/v13cltypehierarchy.eps}
\end{center}
@@ -2320,7 +2321,7 @@ Note that Coq's {\tt nat} domain stops at O (a symbolic 0)
as does Axiom's NNI. The Axiom interpreter will promote a
subtraction to Integer whereas Coq will not.
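The difference in subtraction behaviour can be sketched in Python (an illustration of the two semantics, not the actual Coq or Axiom code):

```python
def nat_sub(a, b):
    """Coq-style subtraction on nat: truncated at O, never negative."""
    return a - b if a >= b else 0

def interp_sub(a, b):
    """Axiom interpreter behaviour: the difference is promoted to Integer."""
    return a - b
```

So 3 - 5 is O in Coq's nat but -2 after Axiom's promotion to Integer.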
COQ's nat domain\cite{COQnat} is
+COQ's nat domain \cite{COQnat} is
\subsection{Library Coq.Init.Nat}
@@ -2625,7 +2626,7 @@ Definition lxor a b := bitwise xorb (max a b) a b.
\chapter{Binary Power in COQ by Casteran and Sozeau}
From Casteran and Sozeau\cite{Cast16}:
+From Casteran and Sozeau \cite{Cast16}:
\begin{verbatim}
(* About integer powers (monomorphic version) *)
@@ -2707,7 +2708,7 @@ a more generic framework.
\end{itemize}
Such a mathematical structure can be defined in Coq as a type class.
\cite{Soze08}. In the following definition, parameterized by a type $A$
+ \cite{Soze08}. In the following definition, parameterized by a type $A$
(implicit), a binary operation {\tt dot} and a neutral element {\tt unit},
three fields describe the properties that {\tt dot} and {\tt unit} must
satisfy.
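The binary-power function over such a monoid can be sketched in Python, passing {\tt dot} and {\tt unit} explicitly where the Coq type class would infer them (square-and-multiply, relying only on associativity and neutrality):

```python
def power(dot, unit, a, n):
    """Binary power of a in a monoid given by (dot, unit); n >= 0."""
    acc = unit
    while n > 0:
        if n & 1:
            acc = dot(acc, a)  # fold in the current square
        a = dot(a, a)          # square
        n >>= 1
    return acc
```

Instantiating with multiplication gives ordinary integer powers; with addition it computes n*a; with string concatenation it repeats a string, illustrating that only the monoid laws are used.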
@@ -3216,7 +3217,7 @@ power_of_mult 3 4 5
\chapter{Proof Tower Layer: C11 using CH$_2$O}
From Krebbers\cite{Kreb17}
+From Krebbers \cite{Kreb17}
{\bf Module example\_gcd}
@@ -3265,31 +3266,31 @@ Proof.\\
End gcd.\\
\chapter{Other Ideas to Explore}
Computerising Mathematical Text\cite{Kama15} explores various ways of
+Computerising Mathematical Text \cite{Kama15} explores various ways of
capturing mathematical reasoning.
Chlipala\cite{Chli15} gives a pragmatic approach to COQ.
+Chlipala \cite{Chli15} gives a pragmatic approach to COQ.
Medina-Bulo et al.\cite{Bulo04} gives a formal verification of
+Medina-Bulo et al. \cite{Bulo04} gives a formal verification of
Buchberger's algorithm using ACL2 and Common Lisp.
Th\'ery\cite{Ther01} used COQ to check an implementation of Buchberger's
+Th\'ery \cite{Ther01} used COQ to check an implementation of Buchberger's
algorithm.
Pierce\cite{Pier15} has a Software Foundations course in COQ with
+Pierce \cite{Pier15} has a Software Foundations course in COQ with
downloaded files in Pier15.tgz.
Spitters\cite{Spit11} Type Classes for Mathematics in Coq. Also see
+Spitters \cite{Spit11} Type Classes for Mathematics in Coq. Also see
\verb|http://www.eelis.net/research/math-classes/|
Mahboubi\cite{Mahb16} Mathematical Components. This book contains a
+Mahboubi \cite{Mahb16} Mathematical Components. This book contains a
proof of the Euclidean algorithm using COQ.
Aczel\cite{Acze13} Homotopy Type Theory
+Aczel \cite{Acze13} Homotopy Type Theory
Santas\cite{Sant95} A Type System for Computer Algebra
+Santas \cite{Sant95} A Type System for Computer Algebra
Homann\cite{Homa94} algorithm schemata
+Homann \cite{Homa94} algorithm schemata
Name: gcd(?a,?b)=?g\\
Signature: ?A $\times$ ?A $\rightarrow$ ?A\\
@@ -3388,24 +3389,31 @@ $f(r)y$\\
+F4.3 & $gcd(x,y)=gcd(y,x)$ & if $x>0 \land y > 0$ as a precondition
+\item P6.2 $Z=gcd(x,y)$ as a postcondition
+\item P6.3 $2^N gcd(X,Y)=gcd(x,y)$. The task of the first part is to
+make P6.3 true on termination. That is easily accomplished by just one
+multiple assignment
+\[N,X,Y:=0,x,y\]
+which can be proven by substitution
+\[2^0 gcd(x,y) = gcd(x,y)\]
+\item P6.4 We can show that
+\[2^N Z=gcd(x,y)\]
+This would be obviously true if $N$ were already zero. If $N$ is
+nonzero, it can be made closer to zero by subtracting one. But that
+would make P6.4 false, and therefore useless. Fortunately, the truth
+of P6.4 can easily be restored if every subtraction of one from $N$ is
+accompanied by a doubling of $Z$. This can be proven by
+\[N>0 \land 2^N Z=gcd(x,y)\rightarrow 2^{N-1}(2Z)=gcd(x,y)\]
+Since termination is obvious, we have proved the correctness of the
+loop
+\[{\bf while}~N>0~{\bf do}~N,Z:=N-1,2Z\]
+On termination of this loop, the value of $N$ is zero and P6.4 is
+still true. Consequently, the postcondition of the whole program has
+been established.
+\end{itemize}
+
+Having completed the first and last of the three tasks, the time has
+come to confess that the middle task is the most difficult. Its
+precondition is P6.3 and its postcondition is P6.4. The task can be
+split into four subtasks, in accordance with the following series of
+intermediate assertions.
+
+\begin{tabular}{rcl}
+P6.3 & $\land$ & (X odd $\lor$ Y odd)\\
+P6.3 & $\land$ & (Y odd)\\
+P6.3 & $\land$ & (Y odd) $\land$ X=Y\\
+\end{tabular}
Jenks\cite{Jenk84b} Overview of Scratchpad.
+\subsection{Jenks \cite{Jenk84b}}
+
+Overview of Scratchpad.
Kifer\cite{Kife91} Typed Predicate Calculus giving declarative meaning to
+\subsection{Kifer \cite{Kife91}}
+
+Typed Predicate Calculus giving declarative meaning to
logic programs with type declarations and type inference.
Meshveliani\cite{Mesh16a}
+\subsection{Meshveliani \cite{Mesh16a}}
{\bf Prejudice 1}: ``Proof by contradiction is not possible in
constructive mathematics''
@@ -3486,17 +3748,29 @@ algorithm to build a proof witness for each concrete data''.
I claim: {\sl they also provide a proof in its ordinary meaning}
(this is so in Agda, and I expect, the same is with Coq).
Smolka\cite{Smol89a} details the foundations for relational logic programming
+\subsection{\cite{Neup13}}
+
+The case study was motivated by a master's thesis at RISC
+Linz, which implemented a CA algorithm for the greatest common divisor
+of multivariate polynomials in SML \cite{Mein13}.
+
+\subsection{Smolka \cite{Smol89a}}
+
+details the foundations for relational logic programming
with polymorphically ordersorted data types.
Sutor\cite{Suto87} Type inference and coercion in Scratchpad II.
+\subsection{Sutor \cite{Suto87}}
+
+Type inference and coercion in Scratchpad II.
+
+\subsection{Wijngaarden \cite[Section 6, p95]{Wijn68}}
Wijngaarden\cite[Section 6, p95]{Wijn68}
ALGOL 68 has carefully defined rules for coercion, using dereferencing, deproceduring,
widening, rowing, uniting, and voiding to transform values to the type required
for further computation.
McAllester, D. and Arkoudas, K., \cite{Mcal96}
+\subsection{McAllester, D. and Arkoudas, K. \cite{Mcal96}}
+
"Primitive recursion is a well known syntactic restriction on
recursion definitions which guarantees termination. Unfortunately
many natural definitions, such as the most common definition of
@@ -3509,52 +3783,6 @@ syntactic criteria generalizes primitive recursion and handles
most of the examples given by Walther. We call the corresponding
class of acceptable definitions ``Walther recursive''.",
In Harrison\cite[p13]{Harr98} we find

There are several examples of computer algebra results which may be
checked relatively easily:
\begin{itemize}
\item factoring polynomials (or numbers)
\item finding GCDs of polynomials (or numbers)
\item solving equations (algebraic, simultaneous, differential,...)
\item finding antiderivatives
\item finding closed forms for summations
\end{itemize}

In most cases the certificate is simply the answer. An exception is
the GCD, where a slightly more elaborate certificate is better for our
purposes. If we ask to find the GCD of $x^2-1$ and $x^5+1$ using the
gcd function, for example, the response is $x+1$. How can this result
be checked? It's certainly straightforward to check that this is
{\sl a} common divisor. If we don't want to code polynomial division
ourselves in HOL, we can call the {\tt divide} function, and then
simply verify the quotient as above. But how can we prove that $x+1$
is the {\sl greatest} common divisor\footnote{The use of
``greatest'' is a misnomer: in a general ring we say that $a$ is the
GCD of $b$ and $c$ iff it is a common divisor, and any other common
divisor of $b$ and $c$ divides $a$. For example, both 2 and $-2$ are
GCDs of 8 and 10 over $\mathbb{Z}$.}? At first sight, there is no easy
way, short of replicating something like the Euclidean algorithm
inside the logic (although that isn't really a difficult prospect).

However, a variant GCD algorithm, called {\sl gcdex} will, given
polynomials $p$ and $q$, produce not just the GCD $d$, but also two
other polynomials $r$ and $s$ such that $d=pr+qs$. (Indeed, the
coefficients in this sort of Bezout identity follow easily from the
Euclidean GCD algorithm.) For example, applied to $x^2-1$ and
$x^5+1$ we get the following equation:
\[(-x^3-x)(x^2-1)+1\cdot(x^5+1)=x+1\]
This again can be checked easily, and from that, the fact that $x+1$
is the {\sl greatest} common divisor follows by an easily proved
theorem, since obviously any common factor of $x^2-1$ and $x^5+1$
must, by the above equation, divide $x+1$ too. So here, given a
certificate slightly more elaborate than simply the answer, easy and
efficient checking is possible.
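Checking such a certificate needs only polynomial addition and multiplication. A Python sketch using dense coefficient lists (lowest degree first); the cofactors below are one valid Bezout pair for $x^2-1$ and $x^5+1$, derived independently by the extended Euclidean algorithm, so a CAS may return a scaled variant:

```python
def poly_add(p, q):
    """Add dense coefficient lists (lowest degree first)."""
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))
    q = q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def poly_mul(p, q):
    """Multiply dense coefficient lists."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def trim(p):
    """Drop trailing zero coefficients."""
    while len(p) > 1 and p[-1] == 0:
        p = p[:-1]
    return p

p = [-1, 0, 1]           # x^2 - 1
q = [1, 0, 0, 0, 0, 1]   # x^5 + 1
r = [0, -1, 0, -1]       # cofactor -x^3 - x
s = [1]                  # cofactor 1
d = trim(poly_add(poly_mul(r, p), poly_mul(s, q)))
```

Here d comes out as the coefficient list of $x+1$, confirming the Bezout identity, and hence the GCD claim, by pure arithmetic rather than by replicating the Euclidean algorithm inside the logic.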

\cite{Neup13} The case study was motivated by a master's thesis at RISC
Linz, which implemented a CA algorithm for the greatest common divisor
of multivariate polynomials in SML\cite{Mein13}.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\cleardoublepage
\phantomsection
diff --git a/changelog b/changelog
index 7cd5b99..c2514ce 100644
--- a/changelog
+++ b/changelog
@@ -1,3 +1,5 @@
+20180114 tpd src/axiom-website/patches.html 20180114.01.tpd.patch
+20180114 tpd books/bookvol13 review Hoar87
20180113 tpd src/axiom-website/patches.html 20180113.01.tpd.patch
20180113 tpd books/bookvolbib add references
20180109 tpd src/axiom-website/patches.html 20180109.01.tpd.patch
diff --git a/patch b/patch
index 262506b..e3a6c2c 100644
--- a/patch
+++ b/patch
@@ -1,448 +1,4 @@
books/bookvolbib add references
+books/bookvol13 review Hoar87
Goal: Proving Axiom Correct
\index{Coen, Claudio Sacerdoti}
\index{Tassi, Enrico}
\begin{chunk}{axiom.bib}
@inproceedings{Aspe11a,
 author = "Coen, Claudio Sacerdoti and Tassi, Enrico",
 title = {{Nonuniform Coercions via Unification Hints}},
 booktitle = "Proc. Types for Proofs and Programs",
 volume = "53",
 pages = "19-26",
 year = "2011",
 abstract =
 "We introduce the notion of nonuniform coercion, which is the
 promotion of a value of one type to an enriched value of a different
 type via a nonuniform procedure. Nonuniform coercions are a
 generalization of the (uniform) coercions known in the literature and
 they arise naturally when formalizing mathematics in an higher order
 interactive theorem prover using convenient devices like canonical
 structures, type classes or unification hints. We also show how
 nonuniform coercions can be naturally implemented at the user level in
 an interactive theorem prover that allows unification hints.",
 paper = "Aspe11a.pdf"
}

\end{chunk}

\index{Asperti, Andrea}
\index{Ricciotti, Wilmer}
\index{Coen, Claudio Sacerdoti}
\index{Tassi, Enrico}
\begin{chunk}{axiom.bib}
@article{Aspe09a,
 author = "Asperti, Andrea and Ricciotti, Wilmer and Coen, Claudio
 Sacerdoti and Tassi, Enrico",
 title = {{Hints in Unification}},
 journal = "LNCS",
 volume = "5674",
 pages = "84-98",
 year = "2009",
 isbn = "9783642033582",
 abstract =
 "Several mechanisms such as Canonical Structures, Type Classes, or
 Pullbacks have been recently introduced with the aim to improve the
 power and flexibility of the type inference algorithm for interactive
 theorem provers. We claim that all these mechanisms are particular
 instances of a simpler and more general technique, just consisting in
 providing suitable hints to the unification procedure underlying type
 inference. This allows a simple, modular and not intrusive
 implementation of all the above mentioned techniques, opening at the
 same time innovative and unexpected perspectives on its possible
 applications.",
 paper = "Aspe09a.pdf"
}

\end{chunk}

\index{Asperti, Andrea}
\index{Tassi, Enrico}
\begin{chunk}{axiom.bib}
@article{Aspe10,
 author = "Asperti, Andrea and Tassi, Enrico",
 title = {{Smart Matching}},
 journal = "LNCS",
 volume = "6167",
 pages = "263-277",
 year = "2010",
 isbn = "9783642141287",
 abstract =
 "One of the most annoying aspects in the formalization of mathematics
 is the need of transforming notions to match a given, existing
 result. This kind of transformations, often based on a conspicuous
 background knowledge in the given scientific domain (mostly expressed
 in the form of equalities or isomorphisms), are usually implicit in
 the mathematical discourse, and it would be highly desirable to obtain
 a similar behavior in interactive provers. The paper describes the
 superpositionbased implementation of this feature inside the Matita
 interactive theorem prover, focusing in particular on the so called
 smart application tactic, supporting smart matching between a goal and
 a given result.",
 paper = "Aspe10.pdf"
}

\end{chunk}

\index{Asperti, Andrea}
\index{Ricciotti, Wilmer}
\index{Coen, Claudio Sacerdoti}
\index{Tassi, Enrico}
\begin{chunk}{axiom.bib}
@inproceedings{Aspe09b,
 author = "Asperti, Andrea and Ricciotti, Wilmer and Coen, Claudio
 Sacerdoti and Tassi, Enrico",
 title = {{A New Type for Tactics}},
 booktitle = "SIGSAM PLMMS 2009",
 publisher = "ACM",
 year = "2009",
 isbn = "9781605587356",
 abstract =
 "The type of tactics in all (procedural) proof assistants is (a
 variant of) the one introduced in LCF. We discuss why this is
 inconvenient and we propose a new type for tactics that 1) allows the
 implementation of more clever tactics; 2) improves the implementation
 of declarative languages on top of procedural ones; 3) allows for
 better proof structuring; 4) improves proof automation; 5) allows
 tactics to rearrange and delay the goals to be proved (e.g. in case of
 side conditions or PVS subtyping judgements).",
 paper = "Aspe09b.pdf"
}

\end{chunk}

\index{Asperti, Andrea}
\index{Tassi, Enrico}
\begin{chunk}{axiom.bib}
@inproceedings{Aspe07,
 author = "Asperti, Andrea and Tassi, Enrico",
 title = {{Higher Order Proof Reconstruction from
 Paramodulationbased Refutations: The Unit Equality Case}},
 booktitle = "MKM 2007",
 year = "2007",
 abstract =
 "In this paper we address the problem of reconstructing a higher
 order, checkable proof object starting from a proof trace left by a
 first order automatic proof searching procedure, in a restricted
 equational framework. The automatic procedure is based on
 superposition rules for the unit equality case. Proof transformation
 techniques aimed to improve the readability of the final proof are
 discussed.",
 paper = "Aspe07.pdf"
}

\end{chunk}

\index{Asperti, Andrea}
\index{Coen, Claudio Sacerdoti}
\index{Tassi, Enrico}
\index{Zacchiroli, Stefano}
\begin{chunk}{axiom.bib}
@inproceedings{Aspe06,
 author = "Asperti, Andrea and Coen, Claudio Sacerdoti and
 Tassi, Enrico and Zacchiroli, Stefano",
 title = {{Crafting a Proof Assistant}},
 booktitle = "Proc. Types 2006: Conf. of the Types Project",
 year = "2006",
 abstract =
 "Proof assistants are complex applications whose development has
 never been properly systematized or documented. This work is a
 contribution in this direction, based on our experience with the
 development of Matita: a new interactive theorem prover based—as
 Coq—on the Calculus of Inductive Constructions (CIC). In particular,
 we analyze its architecture focusing on the dependencies of its
 components, how they implement the main functionalities, and their
 degree of reusability. The work is a first attempt to provide a ground
 for a more direct comparison between different systems and to
 highlight the common functionalities, not only in view of
 reusability but also to encourage a more systematic comparison of
 different softwares and architectural solutions.",
 paper = "Aspe06.pdf"
}

\end{chunk}

\index{Asperti, Andrea}
\index{Coen, Claudio Sacerdoti}
\index{Tassi, Enrico}
\index{Zacchiroli, Stefano}
\begin{chunk}{axiom.bib}
@article{Aspe07a,
 author = "Asperti, Andrea and Coen, Claudio Sacerdoti and Tassi, Enrico
 and Zacchiroli, Stefano",
 title = {{User Interaction with the Matita Proof Assistant}},
 journal = "J. of Automated Reasoning",
 volume = "39",
 number = "2",
 pages = "109-139",
 abstract =
 "Matita is a new, documentcentric, tacticbased interactive theorem
 prover. This paper focuses on some of the distinctive features of the
 user interaction with Matita, characterized mostly by the organization
 of the library as a searchable knowledge base, the emphasis on a
 highquality notational rendering, and the complex interplay between
 syntax, presentation, and semantics.",
 paper = "Aspe07a.pdf"
}

\end{chunk}

\index{Coen, Claudio Sacerdoti}
\index{Tassi, Enrico}
\index{Zacchiroli, Stefano}
\begin{chunk}{axiom.bib}
@inproceedings{Coen06,
 author = "Coen, Claudio Sacerdoti and Tassi, Enrico and
 Zacchiroli, Stefano",
 title = {{Tinycals: Step by Step Tacticals}},
 booktitle = "Proc. User Interfaces for Theorem Provers",
 year = "2006",
 pages = "125-142",
 abstract =
 "Most of the stateoftheart proof assistants are based on procedural
 proof languages, scripts, and rely on LCF tacticals as the primary
 tool for tactics composition. In this paper we discuss how these
 ingredients do not interact well with user interfaces based on the
 same interaction paradigm of Proof General (the de facto standard in
 this field), identifying in the coarsegrainedness of tactical
 evaluation the key problem. We propose tinycals as an alternative to a
 subset of LCF tacticals, showing that the user does not experience the
 same problem if tacticals are evaluated in a more finegrained
 manner. We present the formal operational semantics of tinycals as
 well as their implementation in the Matita proof assistant.",
 paper = "Coen06.pdf"
}

\end{chunk}

\index{Padovani, Luca}
\index{Zacchiroli, Stefano}
\begin{chunk}{axiom.bib}
@inproceedings{Pado06,
 author = "Padovani, Luca and Zacchiroli, Stefano",
 title = {{From Notation to Semantics: There and Back Again}},
 booktitle = "5th Conf. on Mathematical Knowledge Management",
 year = "2006",
 abstract =
 "Mathematical notation is a structured, open, and ambiguous
 language. In order to support mathematical notation in MKM
 applications one must necessarily take into account presentational as
 well as semantic aspects. The former are required to create a
 familiar, comfortable, and usable interface to interact with. The
 latter are necessary in order to process the information
 meaningfully. In this paper we investigate a framework for dealing
 with mathematical notation in a meaningful, extensible way, and we
 show an effective instantiation of its architecture to the field of
 interactive theorem proving. The framework builds upon wellknown
 concepts and widelyused technologies and it can be easily adopted by
 other MKM applications.",
 paper = "Pado06.pdf"
}

\end{chunk}

\index{Asperti, Andrea}
\index{Guidi, Ferruccio}
\index{Coen, Claudio Sacerdoti}
\index{Tassi, Enrico}
\index{Zacchiroli, Stefano}
\begin{chunk}{axiom.bib}
@article{Aspe04,
 author = "Asperti, Andrea and Guidi, Ferruccio and Coen, Claudio Sacerdoti
 and Tassi, Enrico and Zacchiroli, Stefano",
 title = {{A Content Based Mathematical Search Engine: Whelp}},
 journal = "LNCS",
 volume = "3839",
 year = "2004",
 pages = "17-32",
 isbn = "3540314288",
 abstract =
 "The prototype of a content based search engine for mathematical
 knowledge supporting a small set of queries requiring matching and/or
 typing operations is described. The prototype, called Whelp, exploits
 a metadata approach for indexing the information that looks far more
 flexible than traditional indexing techniques for structured
 expressions like substitution, discrimination, or context trees. The
 prototype has been instantiated to the standard library of the Coq
 proof assistant extended with many user contributions.",
 paper = "Aspe04.pdf"
}

\end{chunk}

\index{Coen, Claudio Sacerdoti}
\index{Zacchiroli, Stefano}
\begin{chunk}{axiom.bib}
@article{Coen04,
 author = "Coen, Claudio Sacerdoti and Zacchiroli, Stefano",
 title = {{Efficient Ambiguous Parsing of Mathematical Formulae}},
 journal = "LNCS",
 volume = "3119",
 pages = "347-362",
 year = "2004",
 isbn = "3540230297",
 abstract =
 "Mathematical notation has the characteristic of being ambiguous:
 operators can be overloaded and information that can be deduced is
 often omitted. Mathematicians are used to this ambiguity and can
 easily disambiguate a formula making use of the context and of their
 ability to find the right interpretation.

 Software applications that have to deal with formulae usually avoid
 these issues by fixing an unambiguous input notation. This solution is
 annoying for mathematicians because of the resulting tricky syntaxes
 and becomes a show stopper to the simultaneous adoption of tools
 characterized by different input languages.

 In this paper we present an efficient algorithm suitable for ambiguous
 parsing of mathematical formulae. The only requirement of the
 algorithm is the existence of a 'validity' predicate over abstract
 syntax trees of incomplete formulae with placeholders. This
 requirement can be easily fulfilled in the applicative area of
 interactive proof assistants, and in several other areas of
 Mathematical Knowledge Management.",
 paper = "Coen04.pdf"
}

\end{chunk}

\index{Grabm\"uller, Martin}
\begin{chunk}{axiom.bib}
@misc{Grab06a,
 author = "Grabmuller, Martin",
 title = {{Algorithm W Step by Step}},
 year = "2006",
 link = "\url{http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.65.7733&rep=rep1&type=pdf}",
 abstract =
 "In this paper we develop a complete implementation of the classic
 algorithm W for Hindley-Milner polymorphic type inference in Haskell",
 paper = "Grab06a.pdf"
}

\end{chunk}

\index{Hoare, C.A.R}
\begin{chunk}{axiom.bib}
@article{Hoar87,
 author = "Hoare, C.A.R",
 title = {{An Overview of Some Formal Methods for Program Design}},
 journal = "Computer",
 year = "1987",
 volume = "20",
 number = "9",
 paper = "Hoar87.pdf"
}

\end{chunk}

\index{Bowen, Jonathan P.}
\index{Hinchey, Michael G.}
\begin{chunk}{axiom.bib}
@article{Bowe95,
 author = "Bowen, Jonathan P. and Hinchey, Michael G.",
 title = {{Seven More Myths of Formal Methods}},
 journal = "IEEE Software",
 volume = "12",
 number = "4",
 pages = "34-41",
 year = "1995",
 abstract =
 "New myths about formal methods are gaining tacit acceptance both
 outside and inside the systemdevelopment community. The authors
 address and dispel these myths based on their observations of
 industrial projects. The myths include: formal methods delay the
 development process; they lack tools; they replace traditional
 engineering design methods; they only apply to software; are
 unnecessary; not supported; and formal methods people always use
 formal methods.",
 paper = "Bowe95.pdf",
 keywords = "printed"
}

\end{chunk}

\index{McBride, Conor}
\begin{chunk}{axiom.bib}
@phdthesis{Mcbr99,
 author = "McBride, Conor",
 title = {{Dependently Typed Functional Programs and their Proofs}},
 school = "University of Edinburgh",
 year = "1999",
 link = "\url{http://strictlypositive.org/thesis.pdf}",
 abstract =
 "Research in dependent type theories has, in the past, concentrated on
 its use in the presentation of theorems and theoremproving. This
 thesis is concerned mainly with the exploitation of the computational
 aspects of type theory for programming, in a context where the
 properties of programs may readily be specified and established. In
 particular, it develops technology for programming with dependent
 inductive families of datatypes and proving those programs correct. It
 demonstrates the considerable advantage to be gained by indexing data
 structures with pertinent characteristic information whose soundness
 is ensured by typechecking, rather than human effort.

 Type theory traditionally presents safe and terminating computation on
 inductive datatypes by means of elimination rules which serve as
 induction principles and, via their associated reduction behaviour,
 recursion operators. In the programming language arena, these appear
 somewhat cumbersome and give rise to unappealing code, complicated by
 the inevitable interaction between case analysis on dependent types
 and equational reasoning on their indices which must appear explicitly
 in the terms. Thierry Coquand's proposal to equip type theory directly
 with the kind of pattern matching notation to which functional
 programmers have become used to over the past three decades offers a
 remedy to many of these difficulties. However, the status of pattern
 matching relative to the traditional elimination rules has until now
 been in doubt. Pattern matching implies the uniqueness of identity
 proofs, which Martin Hofmann showed underivable from the conventional
 definition of equality. This thesis shows that the adoption of this
 uniqueness as axiomatic is sufficient to make pattern matching
 admissible.

 A datatype's elimination rule allows abstraction only over the whole
 inductively defined family. In order to support pattern matching, the
 application of such rules to specific instances of dependent families
 has been systematised. The underlying analysis extends beyond
 datatypes to other rules of a similar second order character,
 suggesting they may have other roles to play in the specification,
 verification and, perhaps, derivation of programs. The technique
 developed shifts the specificity from the instantiation of the type's
 indices into equational constraints on indices freely chosen, allowing
 the elimination rule to be applied.

 Elimination by this means leaves equational hypotheses in the
 resulting subgoals, which must be solved if further progress is to be
 made. The firstorder unification algorithm for constructor forms in
 simple types presented in [McB96] has been extended to cover
 dependent datatypes as well, yielding completely automated solution of
 a class of problems which can be syntactically defined.

 The justification and operation of these techniques requires the
 machine to construct and exploit a standardised collection of
 auxiliary lemmas for each datatype. This is greatly facilitated by two
 technical developments of interest in their own right:
 \begin{itemize}
 \item a more convenient definition of equality, with a relaxed
 formulation rule allowing elements of different types to be compared,
 but nonetheless equivalent to the usual equality plus the axiom of
 uniqueness
 \item a type theory, OLEG, which incorporates incomplete objects,
 accounting for their 'holes' entirely within the typing judgments and,
 novelly, not requiring any notion of explicit substitution to manage
 their scopes.
 \end{itemize}

 A substantial prototype has been implemented, extending the proof
 assistant LEGO. A number of programs are developed by way of
 example. Chiefly, the increased expressivity of dependent datatypes is
 shown to capture a standard firstorder unification algorithm within
 the class of structurally recursive programs, removing any need for a
 termination argument. Furthermore, the use of elimination rules in
 specifying the components of the program simplifies significantly its
 correctness proof.",
 paper = "Mcbr99.pdf",
 keywords = "printed"
}

\end{chunk}
diff --git a/src/axiom-website/patches.html b/src/axiom-website/patches.html
index 85b1be5..b839d10 100644
--- a/src/axiom-website/patches.html
+++ b/src/axiom-website/patches.html
@@ -5892,6 +5892,8 @@ books/bookvolbib add references
books/bookvolbib add references
20180113.01.tpd.patch
books/bookvolbib add references
+20180114.01.tpd.patch
+books/bookvol13 review Hoar87

1.9.1