Here I write down some syntax ideas for WAT - Writing And Typing, a programming language with a dependent type system that can very easily be parsed to the standard mathematical syntax you would encounter in physics papers.

In the best case, the type theory either makes for an interesting mathematical structure itself (an $\infty$-topos?) or embeds in one (the point is that its semantics should be easily understandable ("easy" from my POV at least)), and at the same time is expressive enough to formalize such structures.

In view of using it as a starting point in Outline, one could first do the latter and then present this aspect of WAT as the primary semantic example.

What's a faithful model of (to-be) WAT?
You can start with almost any dependently typed language, even if parts are missing, provided you label, deny use of, or rule out the potentially dangerous possibilities/implementations there. What's left is something simple and right.

So todo: Make a list of what you actually WANT!!! → What's the smallest theory permitting a semantic model that has all the stuff I like?



You must be able to express totally general mathematical spaces, points and functions, and get fully formal types and terms, while everyone who can read LaTeX must still be able to deduce what the code expresses. For starters, the type theory itself will be general/conservative, and thus the project really lies in finding familiar-looking syntax for some Per Martin-Löf-esque language (with identity type??) and writing the parser into other systems.
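To make the "formal types and terms" point concrete, here is a minimal sketch in Lean 4 notation, which stands in for the Martin-Löf-style core that WAT would target; the names are illustrative only, not WAT syntax.

```lean
-- A term and a type, plus an inhabitant of an identity type,
-- in a Martin-Löf-style theory (Lean 4 standing in for WAT).

def x : Nat := 3                 -- a term x of type Nat

example : x + 1 = 4 := rfl       -- a term inhabiting the identity type x + 1 = 4
```

The point is that everything a physicist would write as "$x = 3$, hence $x + 1 = 4$" is already a pair of fully formal typing judgments.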


If a scientist today writes a paper or a wiki entry, he must translate his notions (e.g. expressions like $H|\psi\rangle$ or $\sqrt{3}+1$, the latter using literally 490-year-old syntax) to formal code (e.g. '\sqrt{3}+1' in LaTeX). And when something needs to be simulated or evaluated on a computer, these notions must again be translated to code in a general-purpose programming language (e.g. 'sqrt(3)+1' in Python or MATLAB).
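The translation chain can be sketched mechanically. The following toy Python example (the AST shape and function names are made up for illustration, nothing here is actual WAT) renders one and the same notion, $\sqrt{3}+1$, both as LaTeX and as executable Python:

```python
import math

# A minimal AST for the single expression sqrt(3) + 1.
expr = ("add", ("sqrt", 3), 1)

def to_latex(e):
    """Render the toy AST as LaTeX (the 1st sort of language)."""
    if isinstance(e, (int, float)):
        return str(e)
    op, *args = e
    if op == "sqrt":
        return r"\sqrt{" + to_latex(args[0]) + "}"
    if op == "add":
        return to_latex(args[0]) + "+" + to_latex(args[1])

def to_python(e):
    """Render the toy AST as executable Python (the 3rd sort of language)."""
    if isinstance(e, (int, float)):
        return str(e)
    op, *args = e
    if op == "sqrt":
        return "math.sqrt(" + to_python(args[0]) + ")"
    if op == "add":
        return to_python(args[0]) + " + " + to_python(args[1])

print(to_latex(expr))         # \sqrt{3}+1
print(eval(to_python(expr)))  # 2.732050807568877
```

A WAT parser would play exactly the role of `to_latex` and `to_python` here, except that the source is the typed WAT term itself.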

Now we have this neat fact:

  • Notions expressed in the 1st sort of language, the standard mathematical syntax, can actually be fully captured by a dependent type system.

Beyond this fact, the purpose of WAT can be seen from two angles:

  • To provide a language of the 3rd sort that lets one skip the 2nd sort.
  • To provide a language of the 2nd sort that subsumes a large part of the coding work done in the language of the 3rd sort - the text you use to describe your theories already functions as a formal program specification. More than that, the computer is enabled to write some of the code (i.e. generate terms for types, a.k.a. theorem proving).
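The second angle can be sketched in Lean 4 (again standing in for WAT): the prose-level statement is a type, and "the computer writes some of the code" means a tactic searches for the inhabiting term.

```lean
-- A term generated automatically by the `decide` tactic:
example : 2 + 2 = 4 := by decide

-- The same kind of term, written by hand:
example : ∀ n : Nat, n ≤ n + 1 :=
  fun n => Nat.le_succ n
```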

Thoughts and Specification

See Primitive notions . WAT

In view of applications as a language: The primitive notions should be those that will surely stay at the basis of the ITT topic. As tools, I'd like to add features that implement some of the Synthetic Differential Geometry (Schreiber).
  • The people who would have use for WAT would all know how to read and write LaTeX, but surely not a particular programming language, let alone a typed one. My first incentive is thus to have a syntax very much like the formula aspect of LaTeX, and to write a parser to proper LaTeX on the one hand, and to an existing programming language on the other. Since I don't know shit about writing a type checker but still want to use lambdas and study type systems, I guess I'll naturally try to parse into Haskell and/or some dependent variation of it.
  • One point that is arguably obvious but should maybe still be emphasized is that the syntax of WAT is determined by its type theory - there will be lots of notions in LaTeX/Haskell/etc. that don't make sense in WAT. Thus, parsing here means embedding some fragment of WAT into some fragment of those languages.*
  • Putting the last two points together: at least at first, the type system should consist of only the most generic constructs that should be part of every good dependent language ($\Pi$, $\Sigma$, etc.), leaving space for, but not committing to, current trends.
  • I'm less interested in predicate logic than in complicated spaces for physics. I.e. there is hardly any emphasis on theorem proving in the sense of logic proper. But I'm very much interested in automatic code generation - naively I'd say I want to develop in a direction where the dependent type system is conceived so as to help the compiler do the work for you.
  • While the expressiveness of the language should be high, the requirements on its scope as a computing language and on stuff like performance are low. It's more important to me to have a language that's relatively easy to understand - so easy that you can introduce physics theories directly in this language - than that the code compiles to something that runs fast.
  • It would be neat, but not a must, to have the language come with simple categorical semantics. At least it would be cool if a specific fragment of the language had such semantics, and if one were always aware when and where the code breaks it - see Haskell type system for pointers to where Haskell fails in this regard.
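The generic constructs mentioned in the list above can be written out in Lean 4 notation (standing in for eventual WAT syntax): Π-types are dependent function types, Σ-types dependent pairs; core Lean writes the latter as `(n : Nat) × _`. The definition names are made up for illustration.

```lean
-- Π n : Nat, Fin (n+1) — a function whose result type depends on the input
def piExample : (n : Nat) → Fin (n + 1) :=
  fun n => ⟨0, Nat.succ_pos n⟩

-- Σ n : Nat, Fin (n+1) — a pair whose second component's type depends on the first
def sigmaExample : (n : Nat) × Fin (n + 1) :=
  ⟨2, ⟨0, Nat.succ_pos 2⟩⟩
```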

*In fact, a language with an expressive type system like WAT necessarily lets one define data types that concretely contain the information for notions that are represented in another language. The data needed to plot a bar chart can be expressed natively in WAT in some natural way (e.g. 'data bar = …' would be defined as lists of lists of some numbers or whatever), and the parser, tuned for some language, could infer not to ignore this object when translating the WAT code to a specified language such as MATLAB (where bar is exactly translated into the syntactic expression specifying the data that is input for a bar chart there). Naturally, it would go too far to think about the capabilities of some languages to interact with other parts of the computational environment/internet.
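A hypothetical sketch of this bar-chart remark, in Python (the variable `bar` and the function name are made up; Python stands in for the WAT-side data, and the emitted string targets MATLAB's `bar` function):

```python
# The raw data a WAT term of some list type would carry.
bar = [3, 1, 4, 1, 5]

def to_matlab_bar(values):
    """Emit the MATLAB expression that feeds the same data into bar()."""
    return "bar([" + " ".join(str(v) for v in values) + "])"

print(to_matlab_bar(bar))  # bar([3 1 4 1 5])
```

The parser's job would be exactly this kind of recognition: noticing that a WAT term has the type of bar-chart data and translating it into the target language's idiom rather than dropping it.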


