We consider a logic with the standard connectives, adopt the standard use of brackets to separate expressions and predicates, and use the usual binding strengths in term construction.
The atomic propositions will be denoted with capital Latin letters here, e.g. $A,B,C,P,Q,...$. Formally, their semantic meaning is not at all relevant.
So the symbols of the logical language are those letters and $\land$, $\lor$, $\Rightarrow$, $\bot$ and $\Leftrightarrow$, $\neg$, $\top$ as well as the two symbols $($ and $)$.
The Latin letters represent claims ($R$ could be “It's raining right now”) and the logical symbols are taken to mean “and”, “or”, “if-then”, the absurd proposition, “if and only if”, “not the case that”, and the trivially true proposition, respectively.
We want to formally express which expressions count as syntactically correct propositions. To make such judgements, we consider well-formed formulas to be terms of the type $\mathrm{Prop}$. Write $P : \mathrm{Prop}$ for “$P$ is a proposition”.
At the same time, this gives us a notion of proof, see below.
Remark: We may consider a mathematical framework which includes a type of types $\mathrm{Type}$ and formalize “propositions form a type” by $\mathrm{Prop}:\mathrm{Type}$, but thoughts along that line will certainly not be relevant as long as we only deal with propositions and no other things.
In propositional logic, if we know $A : \mathrm{Prop}$ as well as $B : \mathrm{Prop}$, then, per definition, we have that $(A\land B) : \mathrm{Prop}$. Here are all the formation rules:
rule | $\large\frac{A\,:\,\mathrm{Prop}\hspace{.5cm}B\,:\,\mathrm{Prop}}{(A\land B)\,:\,\mathrm{Prop}}$ |
rule | $\large\frac{A\,:\,\mathrm{Prop}\hspace{.5cm}B\,:\,\mathrm{Prop}}{(A\lor B)\,:\,\mathrm{Prop}}$ |
rule | $\large\frac{A\,:\,\mathrm{Prop}\hspace{.5cm}B\,:\,\mathrm{Prop}}{(A\Rightarrow B)\,:\,\mathrm{Prop}}$ |
rule | $\large\frac{}{\bot\,:\,\mathrm{Prop}}$ |
We also define
$(A\Leftrightarrow B)\equiv ((A\Rightarrow B)\land(B\Rightarrow A))$ |
$(\neg A)\equiv (A\Rightarrow\bot)$ |
$\top\equiv\neg \bot$ |
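The formation rules and abbreviations above can be mirrored as an algebraic data type, so that a well-formed formula is exactly a value of that type. The following is a minimal sketch in Haskell; the constructor names (`Atom`, `And`, `Or`, `Imp`, `Bot`) and the helpers `iff`, `neg`, `top` are our own choices, not notation from the text.

```haskell
-- Each constructor corresponds to one formation rule.
data Prop
  = Atom String        -- primitive propositions A, B, C, ...
  | And Prop Prop      -- (A ∧ B)
  | Or  Prop Prop      -- (A ∨ B)
  | Imp Prop Prop      -- (A ⇒ B)
  | Bot                -- ⊥
  deriving (Eq, Show)

-- The defined connectives:
iff :: Prop -> Prop -> Prop
iff a b = And (Imp a b) (Imp b a)

neg :: Prop -> Prop
neg a = Imp a Bot

top :: Prop
top = neg Bot
```

A term like `And (Atom "A") (Atom "B")` having type `Prop` is exactly the judgment $(A\land B) : \mathrm{Prop}$.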
For an example, we introduce three primitive propositions and form a new one: Take $A$ to mean “I need to pick up my aunt from the airport”, $B$ to mean “It's time for breakfast” and $C$ to mean “I must eat cereal”. The brackets are often discarded when unnecessary. The above rules can be pieced together, for example, to eventually make the judgment
$\left(\left((A\land B)\land (B\implies C)\right)\implies{C}\right)\,:\,\mathrm{Prop}$
This translates to the natural language sentence: “If it holds that 'I need to pick up my aunt from the airport' and 'it's time for breakfast', and if it also holds that if 'it's time for breakfast', then 'I must eat cereal', then it holds that I must eat cereal.”
Using the abbreviations $A$, $B$ and $C$ in the meta-language, we may write this shorter as
“Assuming $A\land B$ holds and also $B\Rightarrow C$ holds, it follows that $C$ holds.”
Let $A$, $B$ and $C$ be some claims and consider the last sentence.
We might take the following argument to be a proof of this statement:
If we have established that $A\land B$ holds, then in particular $B$ holds. If moreover we have established that $B\Rightarrow C$ holds, then, as $B$ is the condition $B\Rightarrow C$, we can put the two together to establish that $C$ holds.
We introduce notation to capture this argument, which is briefly discussed in Logic: formally, derivations can be considered trees, and the proof can be presented as
${\large\frac{{\large\frac{(A\land B)\ :\ \mathrm{True}}{B\ :\ \mathrm{True}}}(\land E_2)\hspace{.5cm}(B\Rightarrow C)\ :\ \mathrm{True}}{C\ :\ \mathrm{True}}}(\Rightarrow E)$
As derivations only go from true propositions to other true ones, we may drop the “$:\mathrm{True}$” and write
${\large\frac{{\large\frac{(A\land B)}{B}}(\land E_2)\hspace{.5cm}(B\Rightarrow C)}{C}}(\Rightarrow E)$
In this proof, the following logical derivation rules have been adopted and used (rules also written without “$:\mathrm{True}$”):
rule | ${\large\frac{X\land Y}{X}}(\land E_1)$ |
resp.
rule | ${\large\frac{X\land Y}{Y}}(\land E_2)$ |
rule | ${\large\frac{X\hspace{.5cm} X\Rightarrow Y}{Y}}(\Rightarrow E)$ |
A logical or formal mathematical theory is specified by giving the logical language, the derivation rules, and a collection of axioms. Such an axiom system would be $(A\land B)\ :\ \mathrm{True}$ and $(B\Rightarrow C)\ :\ \mathrm{True}$, from which we could derive $C\ :\ \mathrm{True}$.
Note, however, that the fact that we can derive $C$ from $(A\land B)$ and $B\Rightarrow C$ should mean we can derive $\left((A\land B)\land (B\implies C)\right)\implies{C}$ even without axioms. The derivation rules needed to formally enable us to do so are discussed below.
Here is a more elaborate way to formulate derivations:
If we were to do proof theory, we'd want a proof to be a mathematical expression that is, in a systematic way, composed of proofs of its more basic statements. We could assign a symbol to every claim that a proposition is established to be true. Then consider a proof for the above proposition to be the following:
If we have established that $A\land B$ holds ($p$), then in particular $B$ holds ($\pi_2\,p$). If moreover we have established that $B\Rightarrow C$ holds ($f$), then, as $B$ is the condition $B\Rightarrow C$, we can put the two together to establish that $C$ holds ($f\,\pi_2\,p$).
This would read
${\large\frac{{\large\frac{p\,:\,(A\land B)}{\pi_2\,p\,:\,B}}(\land E_2)\hspace{.5cm} f\,:\,(B\Rightarrow C)}{f\,\pi_2\,p\,:\,C}}(\Rightarrow E)$
Remark: Note how the $\Rightarrow$-elimination rule acts like the application of a function of type $X\Rightarrow Y$, while the $\land$-elimination rules act like the application of projection operators on $X\times Y$.
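To make the remark concrete, here is the derivation above rendered as a Haskell term, under the standard Curry-Howard reading where pairs play $\land$ and function types play $\Rightarrow$; the name `proofTerm` and the packaging of both premises into one tuple argument are our own choices.

```haskell
-- Proof term for ((A ∧ B) ∧ (B ⇒ C)) ⇒ C:
-- ∧-elimination is the projection snd (π₂), ⇒-elimination is application.
proofTerm :: ((a, b), b -> c) -> c
proofTerm (p, f) = f (snd p)   -- f (π₂ p), as in the derivation tree
```

Instantiating the type variables with concrete types, e.g. `proofTerm (((), "breakfast"), length)`, shows the term really computes.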
The other rules are only slightly more complicated to write down because they involve the use of working hypotheses.
rule | ${\large\frac{X\lor Y\hspace{.5cm} [X]\ Z\hspace{.5cm} [Y]\ Z}{Z}}(\lor E)$ |
Here $[X]\ Z$ means that the tree branch over $Z$ started out with $X$, but in the end we don't list $X$ among the premises of the derivation.
(This rule works like function application for a function whose domain is a direct sum $X+Y$ of two domains.)
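In a functional language this rule is ordinary case analysis on a sum type; a sketch in Haskell, with `Either` playing $X+Y$ (the function name is ours):

```haskell
-- ∨-elimination: given X ∨ Y, a way from X to Z and a way from Y to Z,
-- conclude Z by checking which branch of the sum we are in.
orElim :: Either x y -> (x -> z) -> (y -> z) -> z
orElim (Left  x) f _ = f x
orElim (Right y) _ g = g y
```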
The introduction rules are
rule | ${\large\frac{X\hspace{.5cm}Y}{X\land Y}}(\land I)$ |
rule | ${\large\frac{X}{X\lor Y}}(\lor I_1)$ |
rule | ${\large\frac{Y}{X\lor Y}}(\lor I_2)$ |
rule | ${\large\frac{[X]Y}{X\Rightarrow Y}}(\Rightarrow I)$ |
(They correspond to pairing of values, injection into a bigger domain, and function abstraction, respectively.)
There are also two simple rules
rule | ${\large\frac{X}{X}}(id)$ |
rule | ${\large\frac{\bot}{X}}(\bot E)$ |
(The first just allows us to repeat our scribblings, and the second sort of acts like starting a non-terminating process, which ends our game.)
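Read computationally, the introduction rules and the two simple rules are all one-liners. A hedged Haskell sketch (function names are ours; `Void` from `Data.Void` plays $\bot$):

```haskell
import Data.Void (Void, absurd)

-- ∧-introduction: pairing.
andI :: a -> b -> (a, b)
andI x y = (x, y)

-- ∨-introduction: the two injections into a bigger domain.
orI1 :: a -> Either a b
orI1 = Left

orI2 :: b -> Either a b
orI2 = Right

-- ⇒-introduction: discharging the hypothesis [X] is λ-abstraction,
-- so the rule itself is just the identity on functions.
impI :: (a -> b) -> (a -> b)
impI f = f

-- (id): repeating our scribblings.
idRule :: a -> a
idRule x = x

-- (⊥E): from the empty type, anything follows.
botE :: Void -> a
botE = absurd
```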
It's worth mentioning two often-used derived rules.
rule | ${\large\frac{X\hspace{.5cm}\neg X}{Y}}(\neg E)$ |
and
rule | ${\large\frac{[X]Y\hspace{.5cm}[X]\neg Y}{\neg X}}(\neg I)$ |
For example, internalizing the case-analysis rule
${\large\frac{(A\lor B)\hspace{.5cm}(A\Rightarrow C)\hspace{.5cm}(B\Rightarrow C)}{C}}(\lor E')$
becomes the theorem
$\forall A,B,C.\,(A\lor B)\Rightarrow(A\Rightarrow C)\Rightarrow(B\Rightarrow C)\Rightarrow C$
>>7707239
Classical predicate logic maintains the principle of excluded middle (LEM) as axiomatic, so what are the arguments for why mathematics should privilege a different logical basis?
As the other anon points out, it's not a question of disregarding richer and “more careless” axiomatic frameworks; it's a matter of application. If you reason about a subject that includes the statement
It's true that today I'll eat at MacDonalds OR it's false that today I'll eat at MacDonalds
i.e. $M \lor \neg M$, where the semantics (I should maybe say 'subject', as 'semantics' is used more technically too) of $M$ is “today I'll eat at MacDonalds”, then adopting LEM is likely what you want to do. If you reason about a subject that includes the statement
It's the case that the talking stone is blue OR it's not the case that the talking stone is blue
I'm not sure anymore.
If you drop LEM, then your theory necessarily proves fewer theorems. What's nice is that you can then also adopt new non-classical axioms which, together with LEM, would have produced an inconsistency. I.e. there are interesting theories which you can't axiomatize in classical logic because of this. Pic related is a theory of smooth infinitesimal analysis which breaks down once you add the non-constructive LEM https://en.wikipedia.org/wiki/Synthetic_differential_geometry
The message here is that by adopting these rules of inference, you actually choose the math you work with: given the axioms, which are about the semantic things proper (vector space laws, MacDonalds laws, etc.), you also determine the theory by the rules of inference you use. I.e. you narrow down the range of possible theories by actively deciding the rules of logic. That's why it's interesting to see which parts of functional analysis, say, still hold in this weaker world of reasoning.
Finally: Why drop LEM and not some other law of classical logic? Why is LEM termed the non-constructive one? (cont.)
An appealing aspect of intuitionistic logic is that it has faithful semantics (in the sense of: let $M$ mean “today I'll eat at MacDonalds”) in something real, namely actions.
https://en.wikipedia.org/wiki/Brouwer%E2%80%93Heyting%E2%80%93Kolmogorov_interpretation
https://en.wikipedia.org/wiki/Curry%E2%80%93Howard_correspondence
This logic is precisely the one which you can use to describe a cooking recipe or a programming language:
Read $A \land B$ (interpretation in logic: „it’s true that A holds and it’s true that B holds“) as
I have a proof/reason to believe/argument for A and I have a proof/reason to believe/argument B
Read $A \lor B$ in the same way. Read $A \to B$ (in logic: “if A is true, then B is true”) as
If I have a proof/reason to believe/argument for A, then I can conceive a proof/reason to believe/argument for B
and $\neg A$ (in logic „A is false“) as
If I have proof/reason to believe/argument for A, then something’s wrong because that’s a contradiction and doesn’t make sense
The last connective is properly defined as $\neg A := A \to \bot$
Now a statement like $(A \land B) \to A$ is intuitionistically true, because if I have a reason to believe A and one to believe B, then in particular I have a reason to believe A. Computationally, the proof I just gave corresponds to a function $f$ of type $(A \times B) \to A$, i.e. $f : (A \times B) \to A$, namely $f = \langle a, b\rangle \mapsto a$ or, written differently, $f = p \mapsto \pi_1(p)$, where $\pi_1$ is the left projection operation.
A reason $g$ not to believe A, i.e. $g: \neg A$ or $g: A\to \bot$, can be read as a function that takes an argument $a$ of $A$ but never terminates: $g(a)$ keeps looping on your hard drive forever.
The statement $A \to \neg ( \neg A )$ is also true: having a reason to believe A means that also having a reason not to believe A would already be a contradiction. The computational proof $h : A \to ( (A \to \bot) \to \bot)$ is simple too: $h = a \mapsto ( H \mapsto H(a) )$. Given an argument $a$ and then a looping function $H : A\to \bot$, plugging $a$ into H triggers an unending loop - the type matches.
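The term $h$ can be run in Haskell if we generalize $\bot$ to an arbitrary result type $r$ (a common move, since the empty type admits no values to test with); the name `doubleNegIntro` is ours:

```haskell
-- A ⇒ ¬¬A with ⊥ generalized to any r: given a and a "refutation"
-- k : a -> r, feed a to k.  This is exactly h = a ↦ (H ↦ H(a)).
doubleNegIntro :: a -> ((a -> r) -> r)
doubleNegIntro a k = k a
```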
But $((A\to \bot) \to \bot) \to A$ is constructively not true. The logical argument is this: merely knowing that ‚a reason to believe A is absurd‘ would be absurd - this in itself is not a constructive argument for A. Maybe the computational semantics is clearer here: asking for a function of type $( ( A \to \bot ) \to \bot) \to A$ is asking for a way to construct a term ‚a‘ using only a functional which takes looping functions in $A \to \bot$ as argument and never terminates. How would a term ‚a‘ ever come from this?? It doesn't.
A more complicated but not too hard example, to conclude this, would be $ ( \neg A \land \neg B ) \implies \neg ( A \lor B ) $
If ‚A is false‘ as well as ‚B is false‘, then ‚A or B‘ is false too.
or $ ( ( A \to \bot) \times ( B \to \bot ) ) \to (A+B) \to \bot $, where + is the direct sum. If, next to the projection operators, you now introduce the programming concept of if-clauses (as you have them in C++ or Python), you'll be able to prove this claim.
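A sketch of that proof in Haskell, again generalizing $\bot$ to an arbitrary $r$ so the code stays testable; the case analysis on `Either` plays the role of the if-clause mentioned above (the function name is ours):

```haskell
-- (¬A ∧ ¬B) ⇒ ¬(A ∨ B), with ⊥ generalized to r:
-- inspect which side of the sum we got and use the matching refutation.
deMorgan :: (a -> r, b -> r) -> Either a b -> r
deMorgan (f, _) (Left  a) = f a
deMorgan (_, g) (Right b) = g b
```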
Indeed, provability and programmability coincide now; that's the pretty thing.
>>7708352
If by “analyst” you mean a guy who's studying classical analysis, then the question probably has no answer. Dropping non-constructive axioms from your theories becomes relevant, for example, whenever you have to implement stuff. As you can't even list the elements of $\mathbb R$, theorems of classical analysis will be vacuous for such realizations. The good news is that people spun the spiderwebs of theories like analysis in a constructive fashion long ago; see for example all the constructive theorems that aggregate around the unprovable intermediate value theorem
https://en.wikipedia.org/wiki/Constructive_analysis#Examples
Synthetic analysis as in the pic from the book by Kock is relevant because the requirements for the theory are such that certain topoi give you internal analysis as a gift. That is, you consider some topos, check whether it has this and that property, and then it might be implied that that topos really is about calculus/contains a theory of calculus, and you can import all the theorems you already know - I mean, that's the nice feat of category theory in general.
The category of sets $A, B, C,\dots$ (=objects) and functions $f, g, h,\dots$ (=arrows) is a topos and thus has cartesian products $A\times B$ (=the categorical product) and function spaces $B^A$ (=internal, setty realizations of the arrow class from $A$ to $B$, which may be written $A \to B$). A theorem of sets is that the function space $ (A\times B) \to C $ is isomorphic to $ (B\times A) \to C $ as well as to $ A \to C^B $ and $ B \to C^A $. E.g. for $A=B=C$ the reals, the first space contains $ \langle a,b \rangle \mapsto \sin(a)+3b $, which you can systematically map to the function $ \langle b,a \rangle \mapsto \sin(a)+3b $ in the second space, or to the function $ a \mapsto (b \mapsto \sin(a)+3b) $, or to $ b \mapsto (a \mapsto \sin(a)+3b) $.
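These isomorphisms are witnessed by the standard Haskell functions `curry` and `flip`; a sketch on the example above (the names `f1`..`f4` are ours):

```haskell
-- (A × B) -> C on the example: ⟨a,b⟩ ↦ sin(a) + 3b
f1 :: (Double, Double) -> Double
f1 (a, b) = sin a + 3 * b

-- (B × A) -> C: swap the arguments first.
f2 :: (Double, Double) -> Double
f2 (b, a) = f1 (a, b)

-- A -> C^B: curry.
f3 :: Double -> (Double -> Double)
f3 = curry f1

-- B -> C^A: curry, then flip.
f4 :: Double -> (Double -> Double)
f4 = flip (curry f1)
```

All four compute the same values, just packaged differently.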
The axioms of a topos have more models than just the sets; for example, a topos provides an internal (intuitionistic) logic, where the iso above means “A and B being true implies C” is equivalent to “A being true implies that B being true implies C”.
Or you can take the classical identification of set-theoretic operations with numerical operations. Forgetting what your finite sets are about, the above relation is the basic identity of numbers $ c^{a·b} = (c^a)^b $. In the other direction, the relation $ c^{a+b} = c^a·c^b $, provable in a topos, at the same time tells you for the category of sets (where + is the disjoint/tagged union) that the function space $ (A+B) \to C $ is iso to (i.e. in bijection with) $ (A\to C) \times (B\to C) $. In logic, “one of A or B being true already implies C” is equivalent to “A implies C and also B implies C”.
In the other direction, a derivation rule like modus ponens, “from A and 'A implies B' follows B”, being valid guarantees you that the function space $(A\times B^A)\to B$ isn't empty. Indeed, the evaluation function $ \mathrm{eval}(a,f) := f(a) $ works no matter which sets A and B are.
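The evaluation function is definable uniformly at every pair of types, which is exactly the claim that the space is never empty; in Haskell (the name is ours):

```haskell
-- Modus ponens as evaluation: (A × B^A) -> B is inhabited for all a, b.
eval :: (a, a -> b) -> b
eval (x, f) = f x
```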
What I want to say is this: formulating analysis synthetically in a basic object like a topos (as opposed to something hard to construct like $\mathbb R$) makes theorems of analysis more than theorems of just analysis. And secondly, if you know that analysis works whenever a topos has this and that axiom, then you can further build your theory (like quantum field theory, kek) in such a setting without restricting yourself to muh reals and muh epsilons. Although the reals are nice, of course.
When it comes to classically interpreted propositions, classical propositional logic and intuitionistic propositional logic have the same proof strength (although proving theorems in classical logic might be more straightforward). This is because if the former proves $P$, then the latter always proves classically equivalent statements such as $\neg\neg P$ (and the converse of this is trivial). So to prove any (classically provable) theorem $P$, you can start out and stick to the intuitionistic framework the whole time, up until the end where, having proven $\neg\neg P$ intuitionistically, you apply a non-constructive (purely classical) step such as $\neg\neg P\to P$.
This can be extended to a theory with the existential quantifier. It can also be extended to the universal quantifier, except the translation is more intricate.
Using the classical equivalence $P\leftrightarrow \neg\neg P$ and the classical De Morgan law $\neg(P\land Q)\leftrightarrow(\neg P\lor\neg Q)$, we get the classical equivalences
$(A\lor\neg A)$
$\leftrightarrow (\neg\neg A\lor \neg\neg\neg A)$
$\leftrightarrow \neg(\neg A\land \neg\neg A)$
$\leftrightarrow \neg\neg\neg(\neg A\land \neg\neg A)$
The last two can be proven constructively.
In fact, take any $A,B$ and observe that
$((A\to B)\land((A\to B)\to B))\to B$
“If it's the case that '$A$ implies $B$' and also that '$A$ implies $B$' implies $B$, then $B$ is the case.”
Under the BHK interpretation (Curry-Howard), this reads
“Given any 'function to $B$', which we may call $f$, as well as a function $F$ from 'functions to $B$' to $B$, you can construct an element of $B$.”
The proof / the element is of course $F(f)$. In type theory, that judgement is
$\lambda x.(\pi_2x)(\pi_1 x)\ :\ (A\to B)\times((A\to B)\to B)\to B$
where in the above notation $x=(f,F)$ so that the lambda expression denotes the function $(f,F)\mapsto F(f)$.
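The same judgement can be written as a runnable Haskell declaration; we call the term `witness` (our name) and write the pair $x=(f,F)$ as an explicit tuple pattern:

```haskell
-- λx.(π₂ x)(π₁ x) : (A → B) × ((A → B) → B) → B
witness :: (a -> b, (a -> b) -> b) -> b
witness (f, bigF) = bigF f
```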
If we now consider $B$ to be absurdity resp. the empty type $\bot$, then we have a proof resp. a term of
$(A\to\bot)\times((A\to\bot)\to\bot)\to\bot$
which using $\neg X \equiv X\to\bot$ and $\land\equiv\times$ reads
$\neg(\neg A\land\neg\neg A)$
“If it's the case that 'A leads to an absurdity' and also that 'A leads to an absurdity' leads to an absurdity, then this leads to an absurdity.”
QED
The classically equivalent
$\neg\neg A\lor \neg\neg\neg A$
means
“Either 'A leading to an absurdity' is absurd, or ''A leading to an absurdity' is absurd' is itself absurd.”
and
$A\lor\neg A$
means
“Either A holds or it leads to an absurdity.”
Wikipedia: Logical_connective, Double-negation translation