Monadic Recursion Schemes

I have another few posts that I’d like to write before cluing up the whole recursion schemes kick I’ve been on. The first is a simple note about monadic versions of the schemes introduced thus far.

In practice you often want to deal with effectful versions of something like cata. Take a very simple embedded language, for example (“Hutton’s Razor”, with variables):

{-# LANGUAGE DeriveFunctor #-}
{-# LANGUAGE DeriveFoldable #-}
{-# LANGUAGE DeriveTraversable #-}
{-# LANGUAGE LambdaCase #-}

import           Control.Monad              ((<=<), liftM2)
import           Control.Monad.Trans.Class  (lift)
import           Control.Monad.Trans.Reader (ReaderT, ask, runReaderT)
import           Data.Functor.Foldable      hiding (Foldable, Unfoldable)
import qualified Data.Functor.Foldable      as RS (Foldable, Unfoldable)
import           Data.Map.Strict            (Map)
import qualified Data.Map.Strict            as Map

data ExprF r =
    VarF String
  | LitF Int
  | AddF r r
  deriving (Show, Functor, Foldable, Traversable)

type Expr = Fix ExprF

var :: String -> Expr
var = Fix . VarF

lit :: Int -> Expr
lit = Fix . LitF

add :: Expr -> Expr -> Expr
add a b = Fix (AddF a b)

(Note: Make sure you import ‘Data.Functor.Foldable.Foldable’ with a qualifier because GHC’s ‘DeriveFoldable’ pragma will become confused if there are multiple ‘Foldables’ in scope.)

Take proper error handling over an expression of type ‘Expr’ as an example; at present we’d have to write an ‘eval’ function as something like

eval :: Expr -> Int
eval = cata $ \case
  LitF j   -> j
  AddF i j -> i + j
  VarF _   -> error "free variable in expression"

This is a bit of a non-starter in a serious or production implementation, where errors are typically handled using a higher-kinded type like ‘Maybe’ or ‘Either’ instead of by just blowing up the program on the spot. If we hit an unbound variable during evaluation, we’d be better suited to return an error value that can be dealt with in a more appropriate place.

Look at the algebra used in ‘eval’; what would be useful is something like

monadicAlgebra :: ExprF Int -> Either Error Int
monadicAlgebra = \case
  LitF j   -> return j
  AddF i j -> return (i + j)
  VarF v   -> Left (FreeVar v)

data Error =
    FreeVar String
  deriving Show

This won’t fly with cata as-is, and recursion-schemes doesn’t appear to include any support for monadic variants out of the box. But we can produce a monadic cata - as well as monadic versions of the other schemes I’ve talked about to date - without a lot of trouble.

To begin, I’ll stoop to a level I haven’t yet descended to and include a commutative diagram that defines a catamorphism:

            fmap (cata alg)
  Base t t ---------------> Base t a
     ^                         |
     | project                 | alg
     |                         v
     t ----------------------> a
             cata alg

To read it, start in the bottom left corner and work your way to the bottom right. You can see that we can go from a value of type ‘t’ to one of type ‘a’ by either applying ‘cata alg’ directly, or by composing a bunch of other functions together.

If we’re trying to define cata, we’ll obviously want to do it in terms of the compositions:

cata :: (RS.Foldable t) => (Base t a -> a) -> t -> a
cata alg = alg . fmap (cata alg) . project

Note that in practice it’s typically more efficient to write recursive functions using a non-recursive wrapper, like so:

cata :: (RS.Foldable t) => (Base t a -> a) -> t -> a
cata alg = c where c = alg . fmap c . project

This ensures that the function can be inlined. Indeed, this is the version that recursion-schemes uses internally.

To get to a monadic version we need to support a monadic algebra - that is, a function with type ‘Base t a -> m a’ for appropriate ‘t’. To translate the commutative diagram, we need to replace ‘fmap’ with ‘traverse’ (requiring a ‘Traversable’ instance) and the final composition with monadic (or Kleisli) composition:

           traverse (cataM alg)
  Base t t -------------------> m (Base t a)
     ^                               |
     | project                       | (alg =<<)
     |                               v
     t -----------------------------> m a
              cataM alg

The resulting function can be read straight off the diagram, modulo additional constraints on type variables. I’ll go ahead and write it directly in the inline-friendly way:

cataM
  :: (Monad m, Traversable (Base t), RS.Foldable t)
  => (Base t a -> m a) -> t -> m a
cataM alg = c where
  c = alg <=< traverse c . project

Going back to the previous example, we can now define a proper ‘eval’ as follows:

eval :: Expr -> Either Error Int
eval = cataM $ \case
  LitF j   -> return j
  AddF i j -> return (i + j)
  VarF v   -> Left (FreeVar v)

This will of course work for any monad. A common pattern is to additionally slap a ‘ReaderT’ layer onto an ‘eval’ function to supply an environment, for example:

eval :: Expr -> ReaderT (Map String Int) (Either Error) Int
eval = cataM $ \case
  LitF j   -> return j
  AddF i j -> return (i + j)
  VarF v   -> do
    env <- ask
    case Map.lookup v env of
      Nothing -> lift (Left (FreeVar v))
      Just j  -> return j

And here’s an example of how that works:

> let open = add (var "x") (var "y")
> runReaderT (eval open) (Map.singleton "x" 1)
Left (FreeVar "y")
> runReaderT (eval open) (Map.fromList [("x", 1), ("y", 5)])
Right 6

You can follow the same formula to create the other monadic recursion schemes. Here’s monadic ana:

anaM
  :: (Monad m, Traversable (Base t), RS.Unfoldable t)
  => (a -> m (Base t a)) -> a -> m t
anaM coalg = a where
  a = (return . embed) <=< traverse a <=< coalg
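
Here’s a quick usage sketch - a toy example of my own, so the name ‘expand’ and its behaviour aren’t from anywhere in particular. It unfolds a positive ‘Int’ into a right-nested sum of ones, using ‘Maybe’ to reject nonpositive input:

expand :: Int -> Maybe Expr
expand = anaM coalg where
  coalg n
    | n <= 0    = Nothing               -- fail the entire unfold
    | n == 1    = Just (LitF 1)         -- base case: a literal one
    | otherwise = Just (AddF 1 (n - 1)) -- seeds to keep unfolding

So ‘expand 3’ builds ‘add (lit 1) (add (lit 1) (lit 1))’ under ‘Just’, while ‘expand 0’ is ‘Nothing’.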

Monadic para, apo, and hylo follow in much the same way:

paraM
  :: (Monad m, Traversable (Base t), RS.Foldable t)
  => (Base t (t, a) -> m a) -> t -> m a
paraM alg = p where
  p   = alg <=< traverse f . project
  f t = liftM2 (,) (return t) (p t)

apoM
  :: (Monad m, Traversable (Base t), RS.Unfoldable t)
  => (a -> m (Base t (Either t a))) -> a -> m t
apoM coalg = a where
  a = (return . embed) <=< traverse f <=< coalg
  f = either return a

hyloM
  :: (Monad m, Traversable t)
  => (t b -> m b) -> (a -> m (t a)) -> a -> m b
hyloM alg coalg = h
  where h = alg <=< traverse h <=< coalg
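
As a quick sanity check, ‘hyloM’ lets us fuse an unfold with a fold so that the intermediate ‘Expr’ never needs to be built. Here’s a sketch of my own, reusing the ‘expand’ coalgebra from above (inlined) together with an evaluating algebra, all under ‘Maybe’:

roundtrip :: Int -> Maybe Int
roundtrip = hyloM alg coalg where
  coalg n
    | n <= 0    = Nothing
    | n == 1    = Just (LitF 1)
    | otherwise = Just (AddF 1 (n - 1))
  alg = \case
    LitF j   -> Just j
    AddF i j -> Just (i + j)
    VarF _   -> Nothing

So ‘roundtrip 5’ comes back as ‘Just 5’, and ‘roundtrip 0’ as ‘Nothing’.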

These are straightforward extensions from the basic schemes. A good exercise is to try putting together the commutative diagrams corresponding to each scheme yourself, and then use them to derive the monadic versions. That’s pretty fun to do for para and apo in particular.

If you’re using these monadic versions in your own project, you may want to drop them into a module like ‘Data.Functor.Foldable.Extended’ as recommended by my colleague Jasper Van der Jeugt. Additionally, there is an old issue floating around on the recursion-schemes repo that proposes adding them to the library itself. So maybe they’ll turn up in there eventually.

Sorting Slower with Style

I previously wrote about implementing merge sort using recursion schemes. By using a hylomorphism we could express the algorithm concisely and true to its high-level description.

Insertion sort can be implemented in a similar way - this time by putting one recursion scheme inside of another.

(image: yo dawg, we heard you like morphisms)

Read on for details.

Apomorphisms

These guys don’t seem to get a lot of love in the recursion scheme tutorial du jour, probably because they might be the first scheme you encounter that looks truly weird on first glance. But apo is really not bad at all - I’d go so far as to call apomorphisms straightforward and practical.

So: if you remember your elementary recursion schemes, you can say that apo is to ana as para is to cata. A paramorphism gives you access to a value of the original input type at every point of the recursion; an apomorphism lets you terminate using a value of the original input type at any point of the recursion.

This is pretty useful - often when traversing some structure you just want to bail out and return some value on the spot, rather than continuing on recursing needlessly.
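
For reference, ‘apo’ itself can be written in the same inline-friendly style as the other schemes. This is just a sketch, modulo the exact class constraints recursion-schemes uses for ‘embed’: recurse on ‘Right’ seeds, and splice ‘Left’ values in verbatim.

apo :: Unfoldable t => (a -> Base t (Either t a)) -> a -> t
apo coalg = a where
  a = embed . fmap (either id a) . coalg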

A good introduction is the toy ‘mapHead’ function that maps some other function over the head of a list and leaves the rest of it unchanged. Let’s first rig up a hand-rolled list type to illustrate it on:

{-# LANGUAGE DeriveFunctor #-}
{-# LANGUAGE TypeFamilies #-}

import Data.Functor.Foldable

data ListF a r =
    ConsF a r
  | NilF
  deriving (Show, Functor)

type List a = Fix (ListF a)

fromList :: [a] -> List a
fromList = ana coalg . project where
  coalg Nil        = NilF
  coalg (Cons h t) = ConsF h (project t)

(I’ll come back to the implementation of ‘fromList’ later, but for now you can see it’s implemented via an anamorphism.)

Example One

Here’s ‘mapHead’ for our hand-rolled list type, implemented via apo:

mapHead :: (a -> a) -> List a -> List a
mapHead f = apo coalg . project where
  coalg NilF        = NilF
  coalg (ConsF h t) = ConsF (f h) (Left t)

Before I talk you through it, here’s a trivial usage example:

> fromList [1..3]
Fix (ConsF 1 (Fix (ConsF 2 (Fix (ConsF 3 (Fix NilF))))))
> mapHead succ (fromList [1..3])
Fix (ConsF 2 (Fix (ConsF 2 (Fix (ConsF 3 (Fix NilF))))))

Now. Take a look at the coalgebra involved in writing ‘mapHead’. It has the type ‘a -> Base t (Either t a)’, which for our hand-rolled list case simplifies to ‘a -> ListF a (Either (List a) a)’.

Just as a reminder, you can check this in GHCi using the ‘:kind!’ command:

> :set -XRankNTypes
> :kind! forall a. a -> Base (List a) (Either (List a) a)
forall a. a -> Base (List a) (Either (List a) a) :: *
= a -> ListF a (Either (List a) a)

So, inside any base functor on the right hand side we’re going to be dealing with some ‘Either’ values. The ‘Left’ branch indicates that we’re going to terminate the recursion using whatever value we pass, whereas the ‘Right’ branch means we’ll continue recursing as per normal.

In the case of ‘mapHead’, the coalgebra is saying:

  • deconstruct the list; if it has no elements just return an empty list
  • if the list has at least one element, return the list constructed by prepending ‘f h’ to the existing tail.

Here the ‘Left’ branch is used to terminate the recursion and just return the existing tail on the spot.

Example Two

That was pretty easy, so let’s take it up a notch and implement list concatenation:

cat :: List a -> List a -> List a
cat l0 l1 = apo coalg (project l0) where
  coalg NilF = case project l1 of
    NilF      -> NilF
    ConsF h t -> ConsF h (Left t)

  coalg (ConsF x l) = case project l of
    NilF      -> ConsF x (Left l1)
    ConsF h t -> ConsF x (Right (ConsF h t))

This one is slightly more involved, but the principles are almost entirely the same. If both lists are empty we just return an empty list, and if the first list has at most one element we return the list constructed by jamming the second list onto it. The ‘Left’ branch again just terminates the recursion and stops everything there.

If both lists are nonempty? Then we actually do some work and recurse, which is what the ‘Right’ branch indicates.

So hopefully you can see there’s nothing too weird going on - the coalgebras are really simple once you get used to the Either constructors floating around in there.

Paramorphisms involve an algebra that gives you access to a value of the original input type in a pair - a product of two values. Since apomorphisms are their dual, it’s no surprise that you can give them a value of the original input type using ‘Either’ - a sum of two values.

Insertion Sort

So yeah, my favourite example of an apomorphism is for implementing the ‘inner loop’ of insertion sort, a famous quadratic-time comparison-based sort. Granted that insertion sort itself is a bit of a toy algorithm, but the pattern used to implement its internals is pretty interesting and more broadly applicable.

This animation found on Wikipedia illustrates how insertion sort works:

(animation of insertion sort; CC-BY-SA 3.0, Swfung8)

We’ll actually be doing this thing in reverse - starting from the right-hand side and scanning left - but here’s the inner loop that we’ll be concerned with: if we’re looking at two elements that are out of sorted order, slide the offending element to where it belongs by pushing it to the right until it hits either a bigger element or the end of the list.

As an example, picture the following list:

[3, 1, 1, 2, 4, 3, 5, 1, 6, 2, 1]

The first two elements are out of sorted order, so we want to slide the 3 rightwards along the list until it bumps up against a larger element - here that’s the 4.

The following function describes how to do that in general for our hand-rolled list type:

coalg NilF        = NilF
coalg (ConsF x l) = case project l of
  NilF          -> ConsF x (Left l)
  ConsF h t
    | x <= h    -> ConsF x (Left l)
    | otherwise -> ConsF h (Right (ConsF x t))

It says:

  • deconstruct the list; if it has no elements just return an empty list
  • if the list has only one element, or has at least two elements that are in sorted order, terminate with the original list by passing the tail of the list in the ‘Left’ branch
  • if the list has at least two elements that are out of sorted order, swap them and recurse using the ‘Right’ branch

And with that in place, we can use an apomorphism to put it to work:

knockback :: Ord a => List a -> List a
knockback = apo coalg . project where
  coalg NilF        = NilF
  coalg (ConsF x l) = case project l of
    NilF          -> ConsF x (Left l)
    ConsF h t
      | x <= h    -> ConsF x (Left l)
      | otherwise -> ConsF h (Right (ConsF x t))

Check out how it works on our original list, slotting the leading 3 in front of the 4 as required. I’ll use a regular list here for readability, via the ‘knockbackL’ variant defined in the addendum below:

> let test = [3, 1, 1, 2, 4, 3, 5, 1, 6, 2, 1]
> knockbackL test
[1, 1, 2, 3, 4, 3, 5, 1, 6, 2, 1]

Now to implement insertion sort we just want to do this repeatedly like in the animation above.

This isn’t something you’d likely notice at first glance, but check out the type of ‘knockback . embed’:

> :t knockback . embed
knockback . embed :: Ord a => ListF a (List a) -> List a

That’s an algebra in the ‘ListF a’ base functor, so we can drop it into cata:

insertionSort :: Ord a => List a -> List a
insertionSort = cata (knockback . embed)

And voila, we have our sort.

If it’s not clear how the thing works, you can visualize the whole process as working from the back of the list, knocking back unsorted elements and recursing towards the front like so:

[]
[1]
[2, 1] -> [1, 2]
[6, 1, 2] -> [1, 2, 6]
[1, 1, 2, 6]
[5, 1, 1, 2, 6] -> [1, 1, 2, 5, 6]
[3, 1, 1, 2, 5, 6] -> [1, 1, 2, 3, 5, 6]
[4, 1, 1, 2, 3, 5, 6] -> [1, 1, 2, 3, 4, 5, 6]
[2, 1, 1, 2, 3, 4, 5, 6] -> [1, 1, 2, 2, 3, 4, 5, 6]
[1, 1, 1, 2, 2, 3, 4, 5, 6]
[1, 1, 1, 1, 2, 2, 3, 4, 5, 6]
[3, 1, 1, 1, 1, 2, 2, 3, 4, 5, 6] -> [1, 1, 1, 1, 2, 2, 3, 3, 4, 5, 6]
[1, 1, 1, 1, 2, 2, 3, 3, 4, 5, 6]

Wrapping Up

And that’s it! If you’re unlucky you may be sorting asymptotically worse than if you had used mergesort. But at least you’re doing it with style.

The ‘mapHead’ and ‘cat’ examples come from the unreadable Vene and Uustalu paper that first described apomorphisms. The insertion sort example comes from Tim Williams’s excellent recursion schemes talk.

As always, I’ve dumped the code for this article into a gist.

Addendum: Using Regular Lists

You’ll note that the ‘fromList’ and ‘knockbackL’ functions above operate on regular Haskell lists. The short of it is that it’s easy to do this; recursion-schemes defines a data family called ‘Prim’ that basically endows lists with base functor constructors of their own. You just need to use ‘Nil’ in place of ‘[]’ and ‘Cons’ in place of ‘(:)’.

Here’s insertion sort implemented in the same way, but for regular lists:

knockbackL :: Ord a => [a] -> [a]
knockbackL = apo coalg . project where
  coalg Nil        = Nil
  coalg (Cons x l) = case project l of
    Nil           -> Cons x (Left l)
    Cons h t
      | x <= h    -> Cons x (Left l)
      | otherwise -> Cons h (Right (Cons x t))

insertionSortL :: Ord a => [a] -> [a]
insertionSortL = cata (knockbackL . embed)
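
And a quick sanity check on the test list from earlier:

> insertionSortL [3, 1, 1, 2, 4, 3, 5, 1, 6, 2, 1]
[1,1,1,1,2,2,3,3,4,5,6]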

Yo Dawg We Heard You Like Derivatives

I noticed this article by Tom Ellis today that provides an excellent ‘demystified’ introduction to automatic differentiation. His exposition is exceptionally clear and simple.

Hopefully not in the spirit of re-mystifying things too much, I wanted to demonstrate that this kind of forward-mode automatic differentiation can be implemented using a catamorphism, which cleans up the various let statements found in Tom’s version (at the expense of slightly more pattern matching).

Let me first duplicate his setup using the standard recursion scheme machinery:

{-# LANGUAGE DeriveFunctor #-}
{-# LANGUAGE LambdaCase #-}

import Data.Functor.Foldable

data ExprF r =
    VarF
  | ZeroF
  | OneF
  | NegateF r
  | SumF r r
  | ProductF r r
  | ExpF r
  deriving (Show, Functor)

type Expr = Fix ExprF

Since my expression type uses a fixed-point wrapper I’ll define my own embedded language terms to get around it:

var :: Expr
var = Fix VarF

zero :: Expr
zero = Fix ZeroF

one :: Expr
one = Fix OneF

neg :: Expr -> Expr
neg x = Fix (NegateF x)

add :: Expr -> Expr -> Expr
add a b = Fix (SumF a b)

prod :: Expr -> Expr -> Expr
prod a b = Fix (ProductF a b)

e :: Expr -> Expr
e x = Fix (ExpF x)

To implement a corresponding eval function we can use a catamorphism:

eval :: Double -> Expr -> Double
eval x = cata $ \case
  VarF         -> x
  ZeroF        -> 0
  OneF         -> 1
  NegateF a    -> negate a
  SumF a b     -> a + b
  ProductF a b -> a * b
  ExpF a       -> exp a

Very clear. We just match things mechanically.

Now, symbolic differentiation. If you refer to the original diff function you’ll notice that in cases like Product or Exp there are uses of both an original expression and also its derivative. This can be captured simply by a paramorphism:

diff :: Expr -> Expr
diff = para $ \case
  VarF                     -> one
  ZeroF                    -> zero
  OneF                     -> zero
  NegateF (_, x')          -> neg x'
  SumF (_, x') (_, y')     -> add x' y'
  ProductF (x, x') (y, y') -> add (prod x y') (prod x' y)
  ExpF (x, x')             -> prod (e x) x'

Here the primes indicate derivatives in the usual fashion, and the standard rules of differentiation are self-explanatory.

For automatic differentiation we just do sort of the same thing, except we’re also going to lug around the evaluated function value itself at each point and evaluate to doubles instead of other expressions.

It’s worth noting here: why doubles? Because the expression type that we’ve defined has no notion of sharing, and thus the expressions will blow up à la diff (to see what I mean, try printing the analogue of diff bigExpression in GHCi). This could probably be mitigated by incorporating sharing into the embedded language in some way, but that’s a topic for another post.

Anyway, a catamorphism will do the trick:

ad :: Double -> Expr -> (Double, Double)
ad x = cata $ \case
  VarF                     -> (x, 1)
  ZeroF                    -> (0, 0)
  OneF                     -> (1, 0)
  NegateF (x, x')          -> (negate x, negate x')
  SumF (x, x') (y, y')     -> (x + y, x' + y')
  ProductF (x, x') (y, y') -> (x * y, x * y' + x' * y)
  ExpF (x, x')             -> (exp x, exp x * x')

Take a look at the pairs to the right of the pattern matches; the first element in each is just the corresponding term from eval, and the second is just the corresponding term from diff (made ‘Double’-friendly). The catamorphism gives us access to all the terms we need, and we can avoid a lot of work on the right-hand side by doing some more pattern matching on the left.

Some sanity checks to make sure that these functions match up with Tom’s:

*Main> map (snd . (`ad` testSmall)) [0.0009, 1.0, 1.0001]
[0.12254834896191881,1.0,1.0003000600100016]
*Main> map (snd . (`ad` testBig)) [0.00009, 1.0, 1.00001]
[3.2478565715996756e-6,1.0,1.0100754777229357]

UPDATE:

I had originally defined ad using a paramorphism but noticed that we can get by just fine with cata.

A Tour of Some Useful Recursive Types

(This article is also published at Medium)

I’m presently at NIPS and so felt like writing about some appropriate machine learning topic, but along the way I wound up talking about parameterized recursive types, and here we are. Enjoy!

One starts to see common ‘shapes’ in algebraic data types after working with them for a while. Take the natural numbers and a standard linked list, for example:

data Natural =
    One
  | Succ Natural

data List a =
    Empty
  | Cons a (List a)

These are similar in some sense. There are some differences - a list has an additional type parameter, and each recursive point in the list is tagged with a value of that type - but the nature of the recursion in each is the same. There is a single recursive point wrapped up in a single constructor, plus a single base case.

Consider a recursive type that is parameterized by a functor with kind ‘* -> *’, such that the kind of the resulting type is something like ‘(* -> *) -> *’ or ‘(* -> *) -> * -> *’ or so on. It’s interesting to look at the ‘shapes’ of some useful types like this and see what kind of similarities and differences in recursive structure that we can find.

In this article we’ll look at three such recursive types: ‘Fix’, ‘Free’, and ‘Cofree’. I’ll demonstrate that each can be viewed as a kind of program parameterized by some underlying instruction set.
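
Once all three are in scope, GHCi’s ‘:kind’ command confirms these shapes:

> :kind Fix
Fix :: (* -> *) -> *
> :kind Free
Free :: (* -> *) -> * -> *
> :kind Cofree
Cofree :: (* -> *) -> * -> *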

Fix

To start, let’s review the famous fixed-point type ‘Fix’. I’ve talked about it before, but will go into a bit more detail here.

{-# LANGUAGE DeriveFunctor #-}
{-# LANGUAGE FlexibleContexts #-}
{-# LANGUAGE StandaloneDeriving #-}
{-# LANGUAGE UndecidableInstances #-}

newtype Fix f = Fix (f (Fix f))

deriving instance (Show (f (Fix f))) => Show (Fix f)

Note: I’ll omit interpreter output for examples throughout this article, but feel free to try the code yourself in GHCi. I’ll post some gists at the bottom. The above code block also contains some pragmas that you can ignore; they’re just there to help GHC derive some instances for us.

Anyway. ‘Fix’ is in some sense a template recursive structure. It relies on some underlying functor ‘f’ to define the scope of recursion that you can expect a value with type ‘Fix f’ to support. There is the degenerate constant case, for example, which supports no recursion:

data DegenerateF r = DegenerateF
  deriving (Functor, Show)

type Degenerate = Fix DegenerateF

degenerate :: Degenerate
degenerate = Fix DegenerateF

Then you have the case like the one below, where only an infinitely recursive value exists:

newtype InfiniteF r = InfiniteF r
  deriving (Functor, Show)

type Infinite = Fix InfiniteF

infinite :: Infinite
infinite = Fix (InfiniteF infinite)

But in practice you’ll have something in between; a type with at least one recursive point or ‘running’ case and also at least one base or ‘terminating’ case. Take the natural numbers, for example:

data NatF r =
    OneF
  | SuccF r
  deriving (Functor, Show)

type Nat = Fix NatF

one :: Nat
one = Fix OneF

succ :: Nat -> Nat
succ = Fix . SuccF

Here ‘NatF’ provides both a ‘running’ case - ‘SuccF’ - and a ‘terminating’ case in - ‘OneF’. ‘Fix’ just lets ‘NatF’ do whatever it wants, having no say of its own about termination. In fact, we could have defined ‘Fix’ like this:

data Program f = Running (f (Program f))

Indeed, you can think of ‘Fix’ as defining a program that runs until ‘f’ decides to terminate. In turn, you can think of ‘f’ as an instruction set for the program. The whole shebang of ‘Fix f’ may only terminate if ‘f’ contains a terminating instruction.

Here’s a simple set of instructions, for example:

data Instruction r =
    Increment r
  | Decrement r
  | Terminate
  deriving (Functor, Show)

increment :: Program Instruction -> Program Instruction
increment = Running . Increment

decrement :: Program Instruction -> Program Instruction
decrement = Running . Decrement

terminate :: Program Instruction
terminate = Running Terminate

And we can write a sort of stack-based program like so:

program :: Program Instruction
program =
    increment
  . increment
  . decrement
  $ terminate
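
The ‘Program’ type doesn’t do anything on its own, of course. To make the semantics concrete, here’s a minimal interpreter sketch - my own addition, with ‘run’ threading a counter through the instructions:

run :: Int -> Program Instruction -> Int
run acc (Running instruction) = case instruction of
  Increment p -> run (acc + 1) p  -- bump the counter and continue
  Decrement p -> run (acc - 1) p  -- drop the counter and continue
  Terminate   -> acc              -- the instruction set says stop

Under this interpreter, ‘run 0 program’ evaluates to 1.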

Richness of ‘Fix’

It’s worthwhile to review two functions that are useful for working with ‘Fix’, unimaginatively named ‘fix’ and ‘unfix’:

fix :: f (Fix f) -> Fix f
fix = Fix

unfix :: Fix f -> f (Fix f)
unfix (Fix f) = f

You can think of them like so: ‘fix’ embeds a value of type ‘f (Fix f)’ into a recursive structure by adding a new layer of recursion, while ‘unfix’ projects a value of type ‘f (Fix f)’ out of a recursive structure by peeling back a layer of recursion.

This is a pretty rich recursive structure - we have a guarantee that we can always embed into or project out of something with type ‘Fix f’, no matter what ‘f’ is.

Free

Next up is ‘Free’, which is really just ‘Fix’ with some added structure. It is defined as follows:

data Free f a =
    Free (f (Free f a))
  | Pure a
  deriving Functor

deriving instance (Show a, Show (f (Free f a))) => Show (Free f a)

The ‘Free’ constructor has an analogous definition to the ‘Fix’ constructor, meaning we can use ‘Free’ to implement the same things we did previously. Here are the natural numbers redux, for example:

type NatFree = Free NatF

oneFree :: NatFree a
oneFree = Free OneF

succFree :: NatFree a -> NatFree a
succFree = Free . SuccF

There’s also another branch here called ‘Pure’, though, that just bluntly wraps a value of type ‘a’, and has nothing to do with the parameter ‘f’. This has an interesting consequence: it means that ‘Free’ can have an opinion of its own about termination, regardless about what ‘f’ might decree:

type NotSoInfinite = Free InfiniteF

notSoInfinite :: NotSoInfinite ()
notSoInfinite = Free (InfiniteF (Free (InfiniteF (Pure ()))))

(Note that here I’ve returned the value of type unit when terminating under the ‘Pure’ branch, but you could pick whatever else you’d like.)

You’ll recall that ‘InfiniteF’ provides no terminating instruction, and left to its own devices will just recurse endlessly.

So: instead of being forced to choose a branch of the underlying functor to recurse on, ‘Free’ can just bail out on a whim and return some value wrapped up in ‘Pure’. We could have defined the whole type like this:

data Program f a =
    Running (f (Program f a))
  | Terminated a
  deriving Functor

Again, it’s ‘Fix’ with more structure. It’s a program that runs until ‘f’ decides to terminate, or that terminates and returns a value of type ‘a’

As a quick illustration, take our simple stack-based instruction set again. We can define the following embedded language terms:

increment :: Program Instruction a -> Program Instruction a
increment = Running . Increment

decrement :: Program Instruction a -> Program Instruction a
decrement = Running . Decrement

terminate :: Program Instruction a
terminate = Running Terminate

sigkill :: Program f Int
sigkill = Terminated 1

So note that ‘sigkill’ is independent of whatever instruction set we’re working with. We can thus write another simple program like before, except this time have ‘sigkill’ terminate it:

program :: Program Instruction Int
program =
    increment
  . increment
  . decrement
  $ sigkill
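
An interpreter for this version has to handle both ways the program can stop - again, just a sketch of my own:

runFree :: Int -> Program Instruction Int -> Int
runFree _   (Terminated a)        = a  -- the program bailed out with a value
runFree acc (Running instruction) = case instruction of
  Increment p -> runFree (acc + 1) p
  Decrement p -> runFree (acc - 1) p
  Terminate   -> acc                   -- the instruction set says stop

Here ‘runFree 0 program’ evaluates to 1 - the value carried by ‘sigkill’ - without ever consulting the counter.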

Richness of ‘Free’

Try to define the equivalent versions of ‘fix’ and ‘unfix’ for ‘Free’. The equivalent to ‘fix’ is easy:

free :: f (Free f a) -> Free f a
free = Free

You’ll hit a wall, though, if you want to implement the (total) analogue to ‘unfix’. One wants a function of type ‘Free f a -> f (Free f a)’, but the existence of the ‘Pure’ branch makes this impossible to implement totally. In general there is not going to be an ‘f’ to pluck out:

unfree :: Free f a -> f (Free f a)
unfree (Free f) = f
unfree (Pure a) = error "kaboom"

The recursion provided by ‘Free’ is thus a little less rich than that provided by ‘Fix’. With ‘Fix’ one can always project a value out of its recursive structure - but that’s not the case with ‘Free’.

It’s well-known that ‘Free’ is monadic, and indeed it’s usually called the ‘free monad’. The namesake ‘free’ comes from an algebraic definition; roughly, a free ‘foo’ is a ‘foo’ that satisfies the minimum possible constraints to make it a ‘foo’, and nothing else. Check out the slides from Dan Piponi’s excellent talk from Bayhac a few years back for a deeper dive on algebraic freeness.

Cofree

‘Cofree’ is also like ‘Fix’, but again with some extra structure. It can be defined as follows:

data Cofree f a = Cofree a (f (Cofree f a))
  deriving Functor

deriving instance (Show a, Show (f (Cofree f a))) => Show (Cofree f a)

Again, part of the definition - the second field of the ‘Cofree’ constructor - looks just like ‘Fix’. So predictably we can do a redux-redux of the natural numbers using ‘Cofree’:

type NatCofree = Cofree NatF

oneCofree :: NatCofree ()
oneCofree = Cofree () OneF

succCofree :: NatCofree () -> NatCofree ()
succCofree f = Cofree () (SuccF f)

(Note that here I’ve again used unit to fill in the first field - you could of course choose whatever you’d like.)

This looks a lot like ‘Free’, and in fact it’s the categorical dual of ‘Free’. Whereas ‘Free’ is a sum type with two branches, ‘Cofree’ is a product type with two fields. In the case of ‘Free’, we could have a program that either runs an instruction from a set ‘f’, or terminates with a value having type ‘a’. In the case of ‘Cofree’, we have a program that runs an instruction from a set ‘f’ and returns a value of type ‘a’.

A ‘Free’ value thus contains at most one recursive point wrapping the value with type ‘a’, while a ‘Cofree’ value contains potentially infinite recursive points - each one of which is tagged with a value of type ‘a’.

Rolling with the ‘Program’ analogy, we could have written this alternate definition for ‘Cofree’:

data Program f a = Program {
    annotation :: a
  , running    :: f (Program f a)
  } deriving Show

A ‘Cofree’ value is thus a program in which every instruction is annotated with a value of type ‘a’. This means that, unlike ‘Free’, it can’t have its own opinion on termination. Like ‘Fix’, it has to let ‘f’ decide how to do that.

We’ll use the stack-based instruction set example to wrap up. Here we can annotate instructions with progress about how many instructions remain to execute. First our new embedded language terms:

increment :: Program Instruction Int -> Program Instruction Int
increment p = Program (remaining p) (Increment p)

decrement :: Program Instruction Int -> Program Instruction Int
decrement p = Program (remaining p) (Decrement p)

terminate :: Program Instruction Int
terminate = Program 0 Terminate

Notice that two of these terms use a helper function ‘remaining’ that counts the number of instructions left in the program. It’s defined as follows:

remaining :: Program Instruction Int -> Int
remaining = loop where
  loop (Program a f) = case f of
    Increment p -> succ (loop p)
    Decrement p -> succ (loop p)
    Terminate   -> succ a

And we can write our toy program like so:

program :: Program Instruction Int
program =
    increment
  . increment
  . decrement
  $ terminate

Evaluate it in GHCi to see what the resulting value looks like.
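
Modulo the record syntax in the derived ‘Show’ output, the annotations should count down as instructions are consumed, i.e. something like:

Program 3 (Increment (Program 2 (Increment (Program 1 (Decrement (Program 0 Terminate))))))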

Richness of ‘Cofree’

If you try and implement the ‘fix’ and ‘unfix’ analogues for ‘Cofree’ you’ll rapidly infer that we have the opposite situation to ‘Free’ here. Implementing the ‘unfix’ analogue is easy:

uncofree :: Cofree f a -> f (Cofree f a)
uncofree (Cofree _ f) = f

But implementing a total function corresponding to ‘fix’ is impossible - we can’t just come up with something of arbitrary type ‘a’ to tag an instruction ‘f’ with, so, like before, we can’t do any better than define something partially:

cofree :: f (Cofree f a) -> Cofree f a
cofree f = Cofree (error "kaboom") f

Just as how ‘Free’ forms a monad, ‘Cofree’ forms a comonad. It’s thus known as the ‘cofree comonad’, though I can’t claim to really have any idea what the algebraic notion of ‘cofreeness’ captures, exactly.

Wrapping Up

So: ‘Fix’, ‘Free’, and ‘Cofree’ all share a similar sort of recursive structure that make them useful for encoding programs, given some instruction set. And while their definitions are similar, ‘Fix’ supports the richest recursion of the three in some sense - it can both ‘embed’ things into and ‘project’ things out of its recursive structure, while ‘Free’ supports only embedding and ‘Cofree’ supports only projecting.

This has a practical implication: it means one can’t make use of certain recursion schemes for ‘Free’ and ‘Cofree’ in the same way that one can for ‘Fix’. There do exist analogues, but they’re sort of out-of-scope for this post.

I haven’t actually mentioned any truly practical uses of ‘Free’ and ‘Cofree’ here, but they’re wonderful things to keep in your toolkit if you’re doing any work with embedded languages, and I’ll likely write more about them in the future. In the meantime, Dave Laing wrote an excellent series of posts on ‘Free’ and ‘Cofree’ that are more than worth reading. They go into much more interesting detail than I’ve done here - in particular he details a nice pairing that exists between ‘Free’ and ‘Cofree’ (also discussed by Dan Piponi), plus a whack of examples.

You can also find industrial-strength infrastructure for both ‘Free’ and ‘Cofree’ in Edward Kmett’s excellent free library, and for ‘Fix’ in recursion-schemes.

I’ve dumped the code for this article into a few gists. Here’s one of everything excluding the running ‘Program’ examples, and here are the corresponding ‘Program’ examples for the Fix, Free, and Cofree cases respectively.

Thanks to Fredrik Olsen for review and great feedback.

Sorting with Style

Merge sort is a famous comparison-based sorting algorithm that starts by first recursively dividing a collection of orderable elements into smaller subcollections, and then finishes by recursively sorting and merging the smaller subcollections together to reconstruct the (now sorted) original.

A clear implementation of mergesort should by definition be as faithful to that high-level description as possible. We can get pretty close to that using the whole recursion schemes business that I’ve talked about in the past. Near the end of that article I briefly mentioned the idea of implementing mergesort via a hylomorphism, and here I just want to elaborate on that a little.
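
As a refresher, a hylomorphism is an unfold followed by a fold, fused so that the intermediate structure never exists all at once. recursion-schemes defines ‘hylo’ essentially as:

hylo :: Functor f => (f b -> b) -> (a -> f a) -> a -> b
hylo alg coalg = h where h = alg . fmap h . coalg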

Start with a collection of orderable elements. We can divide the collection into a bunch of smaller collections by using a binary tree:

{-# LANGUAGE DeriveFunctor #-}

import Data.Functor.Foldable (hylo)
import Data.List.Ordered (merge)

data Tree a r =
    Empty
  | Leaf a
  | Node r r
  deriving Functor

The idea is that each node in the tree holds two subtrees, each of which contains half of the remaining elements. We can build a tree like this from a collection - say, a basic Haskell list. The following unfolder function defines what part of a tree to build for any corresponding part of a list:

unfolder :: [a] -> Tree a [a]
unfolder []  = Empty
unfolder [x] = Leaf x
unfolder xs  = Node l r where
  (l, r) = splitAt (length xs `div` 2) xs

On the other hand, we can also collapse an existing tree back into a list. The following folder function defines how to collapse any given part of a tree into the corresponding part of a list; again we just pattern match on whatever part of the tree we’re looking at, and construct the complementary list:

folder :: Ord a => Tree a [a] -> [a]
folder Empty      = []
folder (Leaf x)   = [x]
folder (Node l r) = merge l r

Now to sort a list we can just glue these instructions together using a hylomorphism:

mergesort :: Ord a => [a] -> [a]
mergesort = hylo folder unfolder

And it works just like you’d expect:

> mergesort [1,10,3,4,5]
[1,3,4,5,10]
> mergesort "aloha"
"aahlo"
> mergesort [True, False, False, True, False]
[False, False, False, True, True]

Pretty concise!

The code is eminently clean and faithful to the high-level algorithm description: first recursively divide a collection into smaller subcollections - via a binary tree and unfolder - and then recursively sort and merge the subcollections to reconstruct the (now sorted) original one - via folder.

A version of this post originally appeared on the Fugue blog.