Compilers are one of the most interesting and essential advances in computer science: nearly every piece of software in use today was produced, directly or indirectly, by a compiler.
Functional programming is good for many things, but writing compilers is a particularly compelling use case for it. In this lecture, we went over the historical context behind compilers, the theory behind how they operate, and a bird's-eye view of the steps involved in implementing one.
We went specifically into the phases of a compiler: lexing, parsing, IR generation, optimization, and code generation. Each of these presents its own theory and challenges, but many benefit from the kind of rich data that can be described by algebraic datatypes.
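As an illustrative sketch (not code from the lecture; all names here are hypothetical), algebraic datatypes can describe the values flowing between phases: the lexer produces a flat stream of tokens, and the parser turns that stream into a tree. In Rust, enums play the role of algebraic datatypes:

```rust
// Output of the lexer: a flat stream of tokens for a tiny
// arithmetic language.
#[derive(Debug, PartialEq)]
enum Token {
    Int(i64),
    Plus,
    Star,
}

// Output of the parser: a tree-shaped expression. Each variant
// carries exactly the data that kind of node needs.
#[derive(Debug, PartialEq)]
enum Expr {
    Int(i64),
    Add(Box<Expr>, Box<Expr>),
    Mul(Box<Expr>, Box<Expr>),
}

fn main() {
    // The token stream for "1 + 2 * 3" ...
    let tokens = vec![
        Token::Int(1),
        Token::Plus,
        Token::Int(2),
        Token::Star,
        Token::Int(3),
    ];
    // ... and the tree a parser would build from it, with *
    // binding tighter than +.
    let ast = Expr::Add(
        Box::new(Expr::Int(1)),
        Box::new(Expr::Mul(Box::new(Expr::Int(2)), Box::new(Expr::Int(3)))),
    );
    println!("{} tokens -> {:?}", tokens.len(), ast);
}
```

Because every shape a token or expression can take is enumerated in the type, the compiler's own exhaustiveness checking guarantees that later phases handle every case.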
Ultimately, programs are themselves representable by data structures: a program is just a fancy kind of tree. Since compilers must read and transform programs, a compiler can be seen as little more than a series of pure transformations on trees, which is exactly what functional programming excels at.
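To make "pure transformations on trees" concrete, here is a hypothetical optimization pass (again an assumption for illustration, not the lecture's code): constant folding, written as a pure function from expression trees to expression trees.

```rust
#[derive(Debug, PartialEq)]
enum Expr {
    Int(i64),
    Add(Box<Expr>, Box<Expr>),
    Mul(Box<Expr>, Box<Expr>),
}

// Constant folding: recursively simplify both subtrees, then collapse
// any operation whose operands are both literals. No mutation and no
// shared state: the input tree is consumed and a new tree is returned.
fn fold(e: Expr) -> Expr {
    match e {
        Expr::Int(n) => Expr::Int(n),
        Expr::Add(a, b) => match (fold(*a), fold(*b)) {
            (Expr::Int(x), Expr::Int(y)) => Expr::Int(x + y),
            (a, b) => Expr::Add(Box::new(a), Box::new(b)),
        },
        Expr::Mul(a, b) => match (fold(*a), fold(*b)) {
            (Expr::Int(x), Expr::Int(y)) => Expr::Int(x * y),
            (a, b) => Expr::Mul(Box::new(a), Box::new(b)),
        },
    }
}

fn main() {
    // (1 + 2) * 3 folds down to the single literal 9.
    let e = Expr::Mul(
        Box::new(Expr::Add(Box::new(Expr::Int(1)), Box::new(Expr::Int(2)))),
        Box::new(Expr::Int(3)),
    );
    println!("{:?}", fold(e)); // prints Int(9)
}
```

Each optimization pass in a real compiler has this same shape, so a pipeline of passes is just function composition over the tree.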