Tiny optimising compiler

Welcome to the tutorial series that teaches you how to write a tiny optimising compiler in Haskell!

Start from:

  1. The source language.

  2. The parser for the language.

  3. The internal representation.

  4. The mem2reg transform that lands us into SSA.

  5. The constant folding transform that exploits SSA to "fold away" expressions which can be evaluated at compile time.

  6. The register allocation transform which allocates physical registers to the infinite virtual registers of our SSA form.

  7. The mipsasm final code generation pass which generates MIPS assembly.
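To give a flavour of step 5, here is a minimal sketch of constant folding over a toy expression IR. The `Expr` type and `foldConstants` function are illustrative inventions for this sketch, not the compiler's actual IR:

```haskell
-- A toy expression language standing in for the real IR.
data Expr
  = Const Int
  | Var String
  | Add Expr Expr
  | Mul Expr Expr
  deriving (Show, Eq)

-- Recursively fold away subexpressions whose operands are known at
-- compile time. SSA makes this easy: every variable has exactly one
-- definition, so a value known to be constant is constant everywhere.
foldConstants :: Expr -> Expr
foldConstants (Add a b) =
  case (foldConstants a, foldConstants b) of
    (Const x, Const y) -> Const (x + y)
    (a', b')           -> Add a' b'
foldConstants (Mul a b) =
  case (foldConstants a, foldConstants b) of
    (Const x, Const y) -> Const (x * y)
    (a', b')           -> Mul a' b'
foldConstants e = e
```

For example, `foldConstants (Add (Const 1) (Mul (Const 2) (Const 3)))` reduces to `Const 7`, while expressions involving variables are left untouched.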

Background

I've wanted to write this for a while: a tiny optimising compiler for a small imperative-ish language.

I want to show off modern compiler ideas, such as SSA construction, the optimisations SSA enables (constant folding, for instance), and eventually polyhedral compilation.

I currently have a parser for the source language, conversion to IR, then to SSA, and a semi-broken MIPS backend.

Goals

Non-goals

Showing the "correct" way of doing things from an engineering standpoint. I might pick the slower algorithm to compute a dominator tree, because I wish to emphasise the idea of the dominator tree itself. When a trade-off is presented between simplicity and performance, I will pick simplicity.
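As an example of that trade-off, the classic iterative dataflow algorithm for dominators is quadratic in the worst case but far simpler than Lengauer-Tarjan: `Dom(entry) = {entry}`, and `Dom(n) = {n} ∪ ⋂ Dom(p)` over the predecessors `p` of `n`, iterated to a fixed point. A sketch (the `CFG` representation as a predecessor map is an assumption of this sketch, not the project's actual data structure):

```haskell
import qualified Data.Map.Strict as M
import qualified Data.Set as S

type Node = Int
-- Map from each node to its list of predecessors in the CFG.
type CFG = M.Map Node [Node]

-- Iterative dataflow computation of dominator sets:
-- start every non-entry node at "all nodes" and shrink to a fixed point.
dominators :: Node -> CFG -> M.Map Node (S.Set Node)
dominators entry preds = go initial
  where
    nodes    = M.keys preds
    allNodes = S.fromList nodes
    initial  = M.fromList
      [ (n, if n == entry then S.singleton entry else allNodes) | n <- nodes ]
    go doms =
      let doms' = M.fromList [ (n, step doms n) | n <- nodes ]
      in if doms' == doms then doms else go doms'
    step doms n
      | n == entry = S.singleton entry
      | otherwise  =
          -- Dom(n) = {n} ∪ intersection of Dom(p) for predecessors p.
          let meet = foldr1 S.intersection [ doms M.! p | p <- preds M.! n ]
          in S.insert n meet
```

On a diamond CFG (0 branches to 1 and 2, both joining at 3), this computes that node 3 is dominated only by 0 and itself, which is exactly what a dominator tree should record.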

Timeline

At this point, we have a "functioning" compiler. Now, we can extend either the compiler or the language. I want to show off optimisations, so I will spend more time implementing optimisations.

Note that we do not yet have functions in the language! Let's add that.

If we get here, we can then add polyhedral abilities to the compiler. For this, though, we would need to integrate with isl. Someone will need to write Haskell bindings :).