Today we are going to begin discussing a topic that was a bit of an “a ha!” moment for me when I first got into functional programming. It sets out to solve a very difficult problem, and does so in a beautiful way. And like most things in functional programming, the problem is solved with lots of tiny pieces that glue together in interesting ways, allowing you to create a massively complex machine that can do really powerful things.
The topic is parsers, and in particular parser combinators. Broadly speaking, we could define parsing as taking a blob of nebulous data, like say a string, data, user input, or even a URL request, and turning it into a more domain-specific, first-class data type, like say a user model. Put that way, you can even think of parsing as literally a function from “nebulous blob of data” to “well-structured data”, and so functional programming probably has a lot to say about this topic. But this function is, at first, pretty intimidating to implement, because we may need to do a lot of work to extract meaningful data out of the blob.
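To make that “function from blob to structure” idea concrete, here is a minimal sketch in Swift. The `Parser` struct and `int` parser are hypothetical names chosen for illustration, not a fixed API: we simply model a parser as a function that consumes the part of a `Substring` it understands and returns an optional value.

```swift
// A hedged sketch: model a parser as a function that tries to turn
// a nebulous blob of data (here a Substring) into a well-structured
// value of type A, consuming the input it understood and returning
// nil when it fails.
struct Parser<A> {
  let run: (inout Substring) -> A?
}

// A hypothetical example: parse an integer off the front of the input.
let int = Parser<Int> { input in
  let digits = input.prefix(while: { $0.isNumber })
  guard let number = Int(digits) else { return nil }
  input.removeFirst(digits.count)
  return number
}

var input: Substring = "42 is the answer"
let answer = int.run(&input)  // Optional(42)
// input is now " is the answer"
```

The `inout Substring` is a deliberate choice in this sketch: a parser consumes only the input it understood and leaves the rest behind, which is exactly the hook that lets other parsers pick up where it left off.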
Parser combinators aim to break this problem down into lots of very specific parsers that each do one job and do it well, and then provide all of the high-level functions that allow us to glue those parsers together into bigger parsers that can handle more and more complex data. The way we are describing this is akin to how we developed our composable randomness library, which we covered over many episodes (#30, #31, #32, #47, #48, #49, #50) and open sourced. It allowed us to focus on a single unit of randomness and then build up lots of complex random generators by gluing them together in interesting ways. This style of solving a problem just keeps coming up in functional programming, and it is really powerful.
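To hint at what that “glue” might look like, here is a hedged continuation of the hypothetical `Parser` sketch above. The `map`, `zip`, and `literal` names are our own choices for illustration, directly mirroring the combinators we built for random generators:

```swift
// Continuing the hypothetical sketch: tiny combinators that glue
// small parsers into bigger ones.
extension Parser {
  // Transform a parser's output without changing how it consumes input.
  func map<B>(_ f: @escaping (A) -> B) -> Parser<B> {
    Parser<B> { input in self.run(&input).map(f) }
  }
}

// Run two parsers one after the other, succeeding only if both do,
// and restoring the input if either fails.
func zip<A, B>(_ a: Parser<A>, _ b: Parser<B>) -> Parser<(A, B)> {
  Parser<(A, B)> { input in
    let original = input
    guard let resultA = a.run(&input), let resultB = b.run(&input) else {
      input = original
      return nil
    }
    return (resultA, resultB)
  }
}

// Consume an exact string, producing no value of interest.
func literal(_ s: String) -> Parser<Void> {
  Parser<Void> { input in
    guard input.hasPrefix(s) else { return nil }
    input.removeFirst(s.count)
    return ()
  }
}

// Hypothetical usage: combining the small int and literal parsers
// into a parser for dimension strings like "100x200".
let dimensions = zip(zip(int, literal("x")).map { $0.0 }, int)
var size: Substring = "100x200"
let widthHeight = dimensions.run(&size)  // Optional((100, 200))
```

Notice that `dimensions` never does any string manipulation of its own: it is built entirely by gluing together tiny parsers, the same way complex random generators fell out of composing a single unit of randomness.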