Specifying syntax



Introduction

In this entry, we elaborate on the $\frac{foo}{bar}$ scheme for specifying well-formed syntactic expressions.

In a type theory in particular, we reason about systems of syntactic constructions.

$$\frac{A\hspace{.5cm}B}{C}$$ The expressions above the line are the premises, and the expression below the line is the conclusion that may be formed from them. This two-dimensional notation originally comes from proof calculi in logic.

Examples

So for example, if $\mathrm{Type}$ is the most generic type in our system, and we want a theory with the rule that if $\sigma$ and $\tau$ are of type $\mathrm{Type}$, then the symbols $\sigma\to\tau$ and $\sigma\times\tau$ also denote types, then we write this as $$\frac{\sigma:\mathrm{Type}\hspace{1cm}\tau:\mathrm{Type}}{\sigma\to\tau:\mathrm{Type}}$$ and $$\frac{\sigma:\mathrm{Type}\hspace{1cm}\tau:\mathrm{Type}}{\sigma\times\tau:\mathrm{Type}}$$ A merit of this formalization is that it makes it hard to end up with ill-formed expressions like '$\land\land P$'.

We can use typing systems to define syntax for computer programs, for logics, or for cookie recipes. For example, if $P$ and $Q$ are propositions and $\land$ means "and", then we could introduce a type of propositions $\mathrm{Prop}$ and formally express a rule for how to form more propositions using these symbols: $$\frac{P:\mathrm{Prop}\hspace{1cm}Q:\mathrm{Prop}}{P\land Q:\mathrm{Prop}}$$
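To make this concrete, here is a minimal Haskell sketch (the names `Prop`, `Atom`, `And` are illustrative choices, not part of the entry): encoding each formation rule as a constructor of a datatype means only well-formed propositions can be built at all.

```haskell
-- A minimal sketch: each formation rule becomes a constructor,
-- so only well-formed propositions are representable.
data Prop
  = Atom String      -- an atomic proposition, e.g. Atom "P"
  | And  Prop Prop   -- rule: P : Prop, Q : Prop  =>  P ∧ Q : Prop
  deriving Show

-- P ∧ Q, built according to the rule:
example :: Prop
example = And (Atom "P") (Atom "Q")

-- An ill-formed string like '∧∧P' simply cannot be written:
-- And takes exactly two Prop arguments, so the host language's
-- type checker enforces the formation rule for us.
```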

More examples

The following certainly give sensible expressions in arithmetic: $$\frac{n:\mathbb{N}\hspace{1cm}m:\mathbb{N}}{n\cdot m:\mathbb{N}}$$ $$\frac{n:\mathbb{N}}{n+1:\mathbb{N}}$$ Or consider $$\frac{A=B:\mathrm{Prop}}{B=A:\mathrm{Prop}}$$ Note, again, that the above is about what constitutes a syntactically correct expression in a logic, while e.g. the statement $(A=B)\implies (A=B)$ is a statement within the logic itself. Indeed, we will have $\left((A=B)\implies (A=B)\right):\mathrm{Prop}$.
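Continuing the same hypothetical sketch, the arithmetic rules become constructors of a datatype of number expressions, and the rule for $=$ becomes a total function from propositions to propositions:

```haskell
-- Number expressions, with the rules
--   n : N, m : N  =>  n * m : N     and     n : N  =>  n + 1 : N
data NatExpr
  = NVar String           -- a variable standing for a natural number
  | Mul  NatExpr NatExpr  -- n * m
  | Succ NatExpr          -- n + 1
  deriving Show

-- The formation rule  A = B : Prop:
data EqProp = Equal NatExpr NatExpr
  deriving Show

-- The rule  A = B : Prop  =>  B = A : Prop  as a total function:
symForm :: EqProp -> EqProp
symForm (Equal a b) = Equal b a
```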

Even more examples

Here is an example of a derivation rule in proof theory. It expresses that if $p$ is a proof of the proposition $A\implies B$ and if $x$ is a proof of the proposition $A$, then we obtain a proof that $B$ holds, and this proof is denoted by $p\,x$: $$\frac{p:(A\implies B)\hspace{1cm}x:A}{p\,x:B}$$
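Under the reading of implications as function types (anticipated here, and made explicit in the Curry-Howard section below), this rule is literally function application; a sketch in Haskell, with `a` and `b` standing for $A$ and $B$:

```haskell
-- Modus ponens: a proof p of (A => B) applied to a proof x of A
-- yields the proof (p x) of B.
modusPonens :: (a -> b) -> a -> b
modusPonens p x = p x
```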

Here is an example from category theory. The following rule expresses that if $g$ is an arrow in the Hom-class ${\bf C}[A,B]$, and if $f$ is an arrow in the Hom-class ${\bf C}[B,C]$, then $f\circ g$ is an arrow in the Hom-class ${\bf C}[A,C]$. $$\frac{g:{\bf C}[A,B]\hspace{1cm}f:{\bf C}[B,C]}{f\circ g:{\bf C}[A,C]}$$
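In a category whose arrows are functions (and Haskell's types and functions approximate one), this composition rule is the familiar operator `(.)`, up to the order of arguments; a sketch:

```haskell
-- The rule: g : C[A,B], f : C[B,C]  =>  f ∘ g : C[A,C].
compose :: (b -> c) -> (a -> b) -> (a -> c)
compose f g = \x -> f (g x)   -- exactly the built-in (.)
```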

Here is an example from the theory of the simply typed lambda calculus. The following rule expresses that if $f$ is a function from $D$ to $C$, then for all $x$ in $D$, $f$ applied to $x$ is in $C$: $$\frac{f:(D\to C)\hspace{1cm}x:D}{f\,x:C}$$

Or, more formally still, the following assumes that in a context $\Gamma$ we can form the types $D$, $C$ and $D\to C$, where the first two are indeed viewed as domain and codomain. $$\frac{\Gamma\vdash f:(D\to C)\hspace{1cm}\Gamma\vdash x:D}{\Gamma\vdash f\ x:C}$$
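To see how such judgments $\Gamma\vdash t:T$ can be checked mechanically, here is a minimal, hypothetical typechecker for just variables and application; the context $\Gamma$ is a list of (variable, type) pairs, and the `App` case implements exactly the rule above:

```haskell
-- Types: base types and function types D -> C.
data Ty = TBase String | TArr Ty Ty
  deriving (Eq, Show)

-- Terms: variables and application (lambda omitted for brevity).
data Tm = Var String | App Tm Tm
  deriving Show

type Ctx = [(String, Ty)]   -- the context Γ

-- typeOf implements the judgment  Γ ⊢ t : T.
typeOf :: Ctx -> Tm -> Maybe Ty
typeOf ctx (Var x)   = lookup x ctx
typeOf ctx (App f x) = do
  TArr d c <- typeOf ctx f               -- Γ ⊢ f : D -> C
  d'       <- typeOf ctx x               -- Γ ⊢ x : D
  if d == d' then Just c else Nothing    -- then Γ ⊢ f x : C

-- Γ = [f : Nat -> Bool, x : Nat]  gives  Γ ⊢ f x : Bool:
example :: Maybe Ty
example = typeOf
  [("f", TArr (TBase "Nat") (TBase "Bool")), ("x", TBase "Nat")]
  (App (Var "f") (Var "x"))
```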

Discussion

There are type theories in which types and terms are completely separated. If we let types themselves have a type (as in the example with $\mathrm{Type}$ above), then types are also terms. If types may vary over terms, we call the system dependently typed, and if types may vary over other types, we have polymorphic types.
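Polymorphism is directly expressible in Haskell, where a type may vary over another type; dependent types, which vary over terms, need a language such as Agda, Lean, or Coq, and are only hinted at in the final comment:

```haskell
{-# LANGUAGE ExplicitForAll #-}

-- A polymorphic term: its type varies over the type a itself.
identity :: forall a. a -> a
identity x = x

-- A type varying over another type: Pair is parameterized by a.
data Pair a = MkPair a a

-- A dependent type would instead vary over a *term*, e.g.
-- Vec n a, the type of lists of length exactly n; expressing
-- this faithfully requires a dependently typed language.
```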

The most heavily studied type systems are those for computation. If you define the type former $\to$, the associated term constructors (lambda-terms) and their rewriting rules, you are close to a Turing-complete system: adding a fixed-point (recursion) operator, or dropping the types altogether, gets you there.

The first lambda calculus was untyped (just ad hoc constructors for terms); it can be seen as a simple type theory with a single type $\sigma$ satisfying $(\sigma\to\sigma)\overset{!}{=}\sigma$.
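For illustration, here is a minimal, hypothetical encoding of untyped lambda terms (using de Bruijn indices) together with one step of beta reduction; every term is built from just three ad hoc constructors, with no typing discipline at all:

```haskell
-- Untyped lambda calculus with de Bruijn indices:
-- Var 0 refers to the nearest enclosing binder.
data Term = Var Int | Lam Term | App Term Term
  deriving Show

-- shift d c t: add d to every variable of t that is >= cutoff c.
shift :: Int -> Int -> Term -> Term
shift d c (Var k)   = Var (if k >= c then k + d else k)
shift d c (Lam t)   = Lam (shift d (c + 1) t)
shift d c (App f a) = App (shift d c f) (shift d c a)

-- subst j s t: substitute s for variable j in t.
subst :: Int -> Term -> Term -> Term
subst j s (Var k)   = if k == j then s else Var k
subst j s (Lam t)   = Lam (subst (j + 1) (shift 1 0 s) t)
subst j s (App f a) = App (subst j s f) (subst j s a)

-- One beta step at the head: (λ. t) a  ~>  t[0 := a].
beta :: Term -> Maybe Term
beta (App (Lam t) a) = Just (shift (-1) 0 (subst 0 (shift 1 0 a) t))
beta _               = Nothing

-- (λx. x) (λy. y) reduces to λy. y:
example :: Maybe Term
example = beta (App (Lam (Var 0)) (Lam (Var 0)))
```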

The first things one naturally adds after function types are product types $\times$ and the unit type $1$. These let you form tuples and project out their components. The natural categorical model is a cartesian closed category.
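In Haskell, products and the unit type are built in; a sketch of forming a pair and projecting its components:

```haskell
-- Product type: form a pair, project out its components.
swap :: (a, b) -> (b, a)
swap (x, y) = (y, x)

-- The unit type 1 has exactly one element, written ().
trivial :: ()
trivial = ()
```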

Then one might add sum types $+$ and the empty type $0$. With these you can form disjoint unions of types and encode case distinctions such as if-statements.
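A sketch using Haskell's `Either` as the sum type; note that `Bool` is itself the sum $1+1$, and if-then-else is its case analysis:

```haskell
-- Sum type: a value of Either a b is a Left a or a Right b.
describe :: Either Int String -> String
describe (Left n)  = "a number: " ++ show n
describe (Right s) = "a string: " ++ s

-- if-then-else as case analysis on the sum Bool = 1 + 1:
myIf :: Bool -> a -> a -> a
myIf True  t _ = t
myIf False _ e = e
```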

Curry-Howard correspondence
If capital letters like $X$ are propositions (or types), lowercase letters like $x$ are arguments (or terms of the type), and $x:X$ means "$x$ is an argument for $X$" (or "$x$ is a term of type $X$"), then e.g.

(A and B) implies A

means "given arguments $a$ and $b$ for $A$ and $B$, you have an argument $a$ for $A$."

It can be proven, and this corresponds exactly to the fact that there is indeed a program, a function $f$, to which you pass the pair $(a,b)$ and which returns the first entry. The proposition

A implies (A and B)

is not true, because you have no argument for $B$.
And indeed, you cannot possibly write a program $f$ which takes an argument $a$ for $A$ and produces a pair $(a,b)$.
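Under the correspondence, the provable direction is just the first projection; a sketch (names hypothetical):

```haskell
-- Proof of (A and B) implies A: project the first component.
andElim :: (a, b) -> a
andElim (x, _) = x

-- The converse, A implies (A and B), would need a total function
--   a -> (a, b)   for an arbitrary type b,
-- and no such program exists: there is nothing to put in the
-- second slot. The missing program mirrors the missing proof.
```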

If "not $X$" or "$X$ is not the case" is read as the type of functions that take an $x$ and loop forever, then you can e.g. make sense of

(not A and not B) implies not (A or B)

which means "if $A$ is not the case and $B$ is not the case, then it is not the case that 'either $A$ or $B$' is true."
The program $f$ must be constructed to take as input a pair $(h,g)$ of ever-looping (i.e. never halting) functions $h$ and $g$, and to return a function which can take either a term $a$ or a term $b$, but in any case never halts. The function is: if you get an $a$, apply the first entry $h$ of the pair $(h,g)$ to it, and if you get a $b$, apply the second entry $g$ of the pair $(h,g)$ to it.
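In Haskell one conventionally encodes "not $A$" as a function into the empty type `Void` (from `Data.Void`) rather than as literal non-termination, but the resulting program is exactly the one described above:

```haskell
import Data.Void (Void)

-- 'Not a' is a function that, given an a, would have to produce
-- an element of the empty type -- i.e. it can never return.
type Not a = a -> Void

-- (not A and not B) implies not (A or B):
-- given the pair (h, g), dispatch on which side arrives.
deMorgan :: (Not a, Not b) -> Not (Either a b)
deMorgan (h, _) (Left  a) = h a   -- got an a: apply h to it
deMorgan (_, g) (Right b) = g b   -- got a b: apply g to it
```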

Further, you can prove (try it)
A implies not(not A)

but you cannot prove
not(not A) implies A
(which is equivalent to the law of excluded middle)
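With the same `Void` encoding, the provable direction is a one-line program, while the converse has no constructive inhabitant; a sketch:

```haskell
import Data.Void (Void)

-- A implies not (not A): given a, feed it to any refutation of A.
doubleNegIntro :: a -> ((a -> Void) -> Void)
doubleNegIntro a k = k a

-- The converse, ((a -> Void) -> Void) -> a, has no total
-- implementation: there is no way to conjure an 'a' out of it.
```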

The above is known as the Curry-Howard correspondence, and it is the basis of how proof assistants and automated theorem provers work: proofs are checked as programs.
