Monday, August 13, 2007

Fun Functions: Flatten

This post is going to be about a function which may not seem like much: it takes a tree and returns the elements of the tree in a list, in the order an inorder traversal would visit them. While this is not an uncommon task in programming, it is so mundane that it hardly deserves a blog post of its own. So why am I writing this? The thing that has captured me is a particular implementation of this function. Let us first set the scene. This blog post is a literate Haskell file and you should be able to save it to a file and run it yourself.
> module Flatten where
> import Test.QuickCheck
> data Tree a = Node (Tree a) a (Tree a) | Leaf deriving Show
So we have a data type for trees; it's the standard one. Nothing surprising here. The scene is set. How would we go about implementing flatten then? There are in fact a number of ways to implement it and I'm going to show several of them. The last version is my favorite, and after having seen all the other versions I hope you will appreciate it as I do. Let us then start with something rather naive: a straightforward recursive function which turns a tree into a list.
> flatten1 :: Tree a -> [a]
> flatten1 Leaf           = []
> flatten1 (Node t1 a t2) = flatten1 t1 ++ [a] ++ flatten1 t2
That's very concise and easy to understand. But this solution has a slight problem: it takes quadratic time in the worst case. This is because ++ takes linear time in its left argument. It is a well known problem with many solutions. The standard solution, due to John Hughes, is to represent lists as functions from lists to lists. This way we get constant time concatenation. Incidentally, some people seem to think that this version is fine. It is used on the type level by alpheccar in his blog. Implementing lists the Hughes way gives us our second variation of flatten.
> flatten2 :: Tree a -> [a]
> flatten2 tree = help2 tree []
> help2 :: Tree a -> [a] -> [a]
> help2 Leaf           = id
> help2 (Node t1 a t2) = help2 t1 . (a :) . help2 t2
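To make the Hughes-list trick concrete, here is a small standalone sketch. The definitions are repeated from above and the example tree is my own invention, so this fragment runs on its own outside the literate module:

```haskell
-- Standalone sketch; Tree and flatten2 repeated from the post,
-- the example tree is made up for illustration.
data Tree a = Node (Tree a) a (Tree a) | Leaf deriving Show

flatten2 :: Tree a -> [a]
flatten2 tree = help2 tree []

-- Each subtree becomes a function that prepends its elements;
-- composition (.) plays the role of (++), but takes constant time.
help2 :: Tree a -> [a] -> [a]
help2 Leaf           = id
help2 (Node t1 a t2) = help2 t1 . (a :) . help2 t2

-- The tree ((1) 2 (3)):
example :: Tree Int
example = Node (Node Leaf 1 Leaf) 2 (Node Leaf 3 Leaf)
-- flatten2 example == [1,2,3]
```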
Hmmmm. I don't know what you think of this version, but I don't like it, for two reasons. First, it uses higher-order programming when it isn't necessary, and that just obfuscates things. Second, it needs a helper function; it would be nice if we could do without that. Let's start by tackling the first problem, the higher-order programming. One way to remove the higher-orderness is to give the function help2 an extra argument (this is called eta expansion, if that means anything to you). This gives us the following version.
> flatten3 :: Tree a -> [a]
> flatten3 tree = help3 tree []
> help3 :: Tree a -> [a] -> [a]
> help3 Leaf           rest = rest
> help3 (Node t1 a t2) rest = help3 t1 (a : help3 t2 rest)
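To see how the accumulator threads through the recursion, here is an unfolding of help3 on a small example tree of my own. It is a standalone sketch, with the definitions repeated so it runs outside the literate module:

```haskell
-- Standalone sketch; Tree and flatten3 repeated from the post,
-- the example tree is made up for illustration.
data Tree a = Node (Tree a) a (Tree a) | Leaf

flatten3 :: Tree a -> [a]
flatten3 tree = help3 tree []

help3 :: Tree a -> [a] -> [a]
help3 Leaf           rest = rest
help3 (Node t1 a t2) rest = help3 t1 (a : help3 t2 rest)

-- Unfolding help3 on the tree ((1) 2 (3)) with an empty accumulator:
--   help3 (Node (Node Leaf 1 Leaf) 2 (Node Leaf 3 Leaf)) []
-- = help3 (Node Leaf 1 Leaf) (2 : help3 (Node Leaf 3 Leaf) [])
-- = 1 : 2 : help3 (Node Leaf 3 Leaf) []
-- = 1 : 2 : 3 : []
example3 :: [Int]
example3 = flatten3 (Node (Node Leaf 1 Leaf) 2 (Node Leaf 3 Leaf))
-- example3 == [1,2,3]
```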
I call the extra argument rest because it contains the rest of the elements that we should put in the list. This solution still uses a helper function, which is a bit of a nuisance, but unfortunately I don't know an easy way to remove it. Instead I'm going to present another way to think about flattening that will take us down a different road. How would you go about implementing an inorder traversal with a while loop instead of recursion? The solution would be to use a stack to remember the parts of the tree that we haven't visited yet. We can use the same idea to write a new version of flatten, using a list as the stack. The top of the stack will be the part of the tree that we are visiting at the moment. We will need a little trick to remember when we have visited a subtree, and that is simply to remove it from its parent. The final code looks like this.
> flatten4 :: Tree a -> [a]
> flatten4 tree = help4 [tree]
> help4 :: [Tree a] -> [a]
> help4 [] = []
> help4 (Leaf : ts) = help4 ts
> help4 (Node Leaf a t : ts) = a : help4 (t : ts)
> help4 (Node t1 a t2 : ts) = help4 (t1 : Node Leaf a t2 : ts)
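A short trace may help here. Below is how the stack evolves on a small example tree of my own; the definitions are repeated so this sketch runs standalone:

```haskell
-- Standalone sketch; Tree and flatten4 repeated from the post,
-- the example tree is made up for illustration.
data Tree a = Node (Tree a) a (Tree a) | Leaf

flatten4 :: Tree a -> [a]
flatten4 tree = help4 [tree]

help4 :: [Tree a] -> [a]
help4 [] = []
help4 (Leaf : ts) = help4 ts
help4 (Node Leaf a t : ts) = a : help4 (t : ts)
help4 (Node t1 a t2 : ts) = help4 (t1 : Node Leaf a t2 : ts)

-- How the stack evolves on the tree ((1) 2 (3)):
--   help4 [Node (Node Leaf 1 Leaf) 2 (Node Leaf 3 Leaf)]
-- = help4 [Node Leaf 1 Leaf, Node Leaf 2 (Node Leaf 3 Leaf)]  -- push left child
-- = 1 : help4 [Leaf, Node Leaf 2 (Node Leaf 3 Leaf)]          -- emit 1
-- = 1 : 2 : help4 [Node Leaf 3 Leaf]                          -- drop Leaf, emit 2
-- = 1 : 2 : 3 : help4 [Leaf]                                  -- emit 3
-- = [1,2,3]
example4 :: [Int]
example4 = flatten4 (Node (Node Leaf 1 Leaf) 2 (Node Leaf 3 Leaf))
```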
This version is pretty clever. It takes the tree apart and spreads it out into the list until there is nothing left of the tree. But one might argue that this version is too clever: it is certainly not easy to understand from the code. Plus it uses a helper function, something that we have been trying to get away from. The final version that I will show builds on the idea that the list of trees from the previous version can itself be represented as a tree. We are going to use the input tree as the stack and modify it as we go along. It goes like this.
> flatten5 :: Tree a -> [a]
> flatten5 Leaf = []
> flatten5 (Node Leaf a t) = a : flatten5 t
> flatten5 (Node (Node t1 a t2) b t3) = flatten5 (Node t1 a (Node t2 b t3))
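The rotations are easiest to see on a concrete input. Here is a trace on a small example tree of my own, again as a standalone sketch with the definitions repeated:

```haskell
-- Standalone sketch; Tree and flatten5 repeated from the post,
-- the example tree is made up for illustration.
data Tree a = Node (Tree a) a (Tree a) | Leaf

flatten5 :: Tree a -> [a]
flatten5 Leaf = []
flatten5 (Node Leaf a t) = a : flatten5 t
flatten5 (Node (Node t1 a t2) b t3) = flatten5 (Node t1 a (Node t2 b t3))

-- The rotations on the tree ((1) 2 (3)):
--   flatten5 (Node (Node Leaf 1 Leaf) 2 (Node Leaf 3 Leaf))
-- = flatten5 (Node Leaf 1 (Node Leaf 2 (Node Leaf 3 Leaf)))  -- rotate right
-- = 1 : flatten5 (Node Leaf 2 (Node Leaf 3 Leaf))            -- emit 1
-- = 1 : 2 : flatten5 (Node Leaf 3 Leaf)                      -- emit 2
-- = 1 : 2 : 3 : flatten5 Leaf                                -- emit 3
-- = [1,2,3]
example5 :: [Int]
example5 = flatten5 (Node (Node Leaf 1 Leaf) 2 (Node Leaf 3 Leaf))
```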
What happens in the third equation is that the tree is restructured so that it becomes a linear structure, just like a list. It is then simple to read off the element in the second equation. This is my favorite version. It's short, has no helper functions and no higher-order stuff. Plus it is efficient. Now, you might complain that it does more allocation than some of the previous versions. Indeed, we have to allocate new trees in the third case. But note that that case is tail recursive, which means that it doesn't use any stack. It is possible to implement this version of the function so that it doesn't use any more space than any of the versions above; the control stack is encoded into the tree. One additional thing about this version is that its termination is not entirely obvious. This function will fool several termination checkers that I know of: in the third equation the argument tree in the recursive call is not smaller than the input tree, and that is a criterion many termination checkers use. To see that the function terminates you must instead observe that it is the left subtree of the input that gets smaller, together with the fact that trees are finite. To me this is an extra bonus. You might think otherwise. Finally, here is some code to test that all the versions I've given above are equivalent.
> instance Arbitrary a => Arbitrary (Tree a) where
>   arbitrary = sized arbTree
>    where arbTree 0 = return Leaf
>          arbTree n | n>0 = frequency $
>                 [(1, return Leaf)
>                 ,(4, do elem <- arbitrary
>                         t1   <- arbTree (n `div` 2)
>                         t2   <- arbTree (n `div` 2)
>                         return (Node t1 elem t2))]
> prop_flatten2 t = flatten1 t == flatten2 (t :: Tree Int)
> prop_flatten3 t = flatten1 t == flatten3 (t :: Tree Int)
> prop_flatten4 t = flatten1 t == flatten4 (t :: Tree Int)
> prop_flatten5 t = flatten1 t == flatten5 (t :: Tree Int)
> runTests = mapM_ quickCheck [prop_flatten2,prop_flatten3,
>                              prop_flatten4,prop_flatten5]

Learning Monads in 1996

Given the plethora of monad tutorials these days it's hard to imagine how anybody could have learned about this concept during the 90's. I learned most of what I know about monads in 1996, back in the days when even researchers had a hard time understanding the list monad. This post will not be another monad tutorial. I'm simply going to explain how I managed to become friendly with monads at a time when monad tutorials were very scarce. So, why did I start looking into monads in the first place? In 1995 I started my undergraduate studies at Chalmers and the first language we were taught was Gofer, a now extinct dialect of Haskell. When the time came to teach us I/O they didn't show us the monadic way of doing things. Apparently our teacher (who, by the way, was not a functional programmer, a strange fact considering that Chalmers abounds with functional programmers) thought that the concept of monads was too scary and complicated, and taught us continuation I/O instead, something which I experienced as painfully horrible and utterly confusing. But some of us students had heard about monads and how they were supposed to be the way of the future when it came to I/O and other things. They sounded really cool, and hey, they couldn't possibly be worse than continuation I/O, right? So I set my mind to learning monads. I tried to read everything that was available on the subject, which wasn't much at the time. There were two papers which I spent a considerable time with: "The essence of functional programming" and "Monads for functional programming" by Phil Wadler. I implemented every single bit of code in those papers, tore the code to pieces and put it back together again. I started implementing monadic parser libraries based on the paper "Monadic parser combinators" by Hutton and Meijer. Trying out different implementations and combinators, I wrote variation after variation of these libraries. I don't know how many parser libraries I implemented in the end. I've lost count.
Slowly I was starting to grasp monads as I played around with them. After some time I was even able to digest and understand the paper "Monad Transformers and Modular Interpreters", but it certainly didn't happen the first time I read it. What finally pushed me over the edge was when a friend asked me how to combine different monads. I didn't know the answer, and for some reason it bugged me that I couldn't answer his question. But I had a vague recollection of the ideas in the above paper. Determined to understand this, I sat down and started to hack on monad transformers and didn't stop until I finally had a rudimentary understanding of them, several hours later. Monadic I/O turned out to be a breeze to learn thanks to the nice paper "Monadic I/O in Haskell 1.3" by Kevin Hammond and Andrew Gordon. And there you have it: the way I learned about monads. No magic, just hard work. I'm a slow learner and you can probably get comfortable with monads much faster than I did. But I don't think you'll find any method which is significantly different from mine. What makes me say that? Why are monads such a difficult concept to learn? The conclusion I've come to is that the difficulty lies in the fact that they're an abstraction. Abstract things are simply hard, especially for people with little prior experience of abstractions. As an example, take an abstraction like the concept of a group. What is a group? It's a set, call it G, together with a binary operator on G which is associative, has an identity element and gives every element an inverse. For anyone used to abstractions this is a perfectly fine definition. But not so for your average man on the street, or even your average programmer. They will react something like this: "Ok, I understand the definition. But what *is* a group? I mean, in *reality*?" I remember having similar thoughts myself. The fact that a group is not a concrete thing is a great source of confusion.
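For concreteness, the group definition could be rendered as a Haskell type class, which is also roughly the shape in which one first meets Monad. This is purely an illustrative sketch of my own; the class and method names are invented:

```haskell
-- Hypothetical sketch: a group as a Haskell type class.
-- (Not from the original post; the names are made up for illustration.)
class Group g where
  identity :: g            -- the identity element
  inverse  :: g -> g       -- every element has an inverse
  op       :: g -> g -> g  -- the associative binary operator

-- The integers under addition form a group.
instance Group Integer where
  identity = 0
  inverse  = negate
  op       = (+)

-- Combining an element with its inverse yields the identity:
check :: Bool
check = op 3 (inverse 3) == (identity :: Integer)
```

The natural numbers under addition, by contrast, fail to be a group: there is no natural number that `inverse 3` could return.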
In my experience this only settles after the person has gained experience with lots of examples of groups, and a few examples of things that fail to be a group. By then most people will have gotten pretty comfortable with the concept. It's the same thing with monads. They're an abstract concept which takes time to digest. And this digestion is best done by playing around with various examples of monads, and by occasionally meeting examples of data types which are not monads and seeing why they fail. I don't think there are any shortcuts to learning abstract concepts. But I do think that one can become better at it, and it certainly helps to have a good tutorial when learning a subject such as monads.