Higher Order

Philosophy and functional programming.

A Scala Comonad Tutorial, Part 1

In chapter 11 of our book, we talk about monads in Scala. This finally names a pattern that the reader has seen throughout the book and gives it a formal structure. We also give some intuition for what it means for something to be a monad. Once you have this concept, you start recognizing it everywhere in the daily business of programming.

Today I want to talk about comonads, which are the dual of monads. The utility of comonads in everyday life is not quite as immediately obvious as that of monads, but they definitely come in handy sometimes, particularly in applications like image processing and scientific computation.

A monad, upside-down

Let’s remind ourselves of what a monad is. A monad is a functor, which just means it has a map method:

trait Functor[F[_]] {
  def map[A,B](x: F[A])(f: A => B): F[B]
}

This has to satisfy the law that map(x)(a => a) == x, i.e. that mapping the identity function over our functor is a no-op.

Monads

A monad is a functor M equipped with two additional polymorphic functions: one from A to M[A] and one from M[M[A]] to M[A].

trait Monad[M[_]] extends Functor[M] {
  def unit[A](a: A): M[A]
  def join[A](mma: M[M[A]]): M[A]
}

Recall that join has to satisfy associativity, and unit has to be an identity for join.

In Scala a monad is often stated in terms of flatMap, which is map followed by join. But I find this formulation easier to explain.
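
For reference, here's a minimal sketch of how the two formulations relate, using the Functor and Monad traits above:

def flatMap[M[_],A,B](ma: M[A])(f: A => M[B])(implicit M: Monad[M]): M[B] =
  M.join(M.map(ma)(f))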

Every monad has the above operations, the so-called proper morphisms of a monad, and may also bring to the table some nonproper morphisms which give the specific monad some additional capabilities.

Reader monad

For example, the Reader monad brings the ability to ask for a value:

case class Reader[R,A](run: R => A)

def ask[R]: Reader[R,R] = Reader(r => r)

The meaning of join in the reader monad is to pass the context of type R from the outer scope to the inner scope:

def join[R,A](r: Reader[R,Reader[R,A]]) =
  Reader((c:R) => r.run(c).run(c))

Writer monad

The Writer monad has the ability to write a value on the side:

case class Writer[W,A](value: A, log: W)

def tell[W,A](w: W): Writer[W,Unit] = Writer((), w)

The meaning of join in the writer monad is to concatenate the “log” of written values using the monoid for W (this is using the Monoid class from Scalaz):

def join[W:Monoid,A](w: Writer[W,Writer[W,A]]) =
  Writer(w.value.value, Monoid[W].append(w.log, w.value.log))

And the meaning of unit is to write the “empty” log:

def unit[W:Monoid,A](a: A) = Writer(a, Monoid[W].zero)

State monad

The State monad can both get and set the state:

def get[S]: State[S,S] = State(s => (s, s))
def put[S](s: S): State[S,Unit] = State(_ => ((), s))

The meaning of join in the state monad is to give the outer action an opportunity to get and put the state, then do the same for the inner action, making sure any subsequent actions see the changes made by previous ones.

case class State[S,A](run: S => (A, S))

def join[S,A](v1: State[S,State[S,A]]): State[S,A] =
  State(s1 => {
    val (v2, s2) = v1.run(s1)
    v2.run(s2)
  })

Option monad

The Option monad can terminate without an answer:

def none[A]: Option[A] = None

That’s enough examples of monads. Let’s now turn to comonads.

Comonads

A comonad is the same thing as a monad, only backwards:

trait Comonad[W[_]] extends Functor[W] {
  def counit[A](w: W[A]): A
  def duplicate[A](wa: W[A]): W[W[A]]
}

Note that counit is pronounced “co-unit”, not “cow-knit”. It’s also sometimes called extract because it allows you to get a value of type A out of a W[A]. While with monads you can generally only put values in and not get them out, with comonads you can generally only get them out and not put them in.

And instead of being able to join two levels of a monad into one, we can duplicate one level of a comonad into two.

Kind of weird, right? This also has to obey some laws. We’ll get to those later on, but let’s first look at some actual comonads.

The identity comonad

A simple and obvious comonad is the dumb wrapper (the identity comonad):

case class Id[A](a: A) {
  def map[B](f: A => B): Id[B] = Id(f(a))
  def counit: A = a
  def duplicate: Id[Id[A]] = Id(this)
}

This one is also the identity monad. Id doesn’t have any functionality other than the proper morphisms of the (co)monad and is therefore not terribly interesting. We can get the value out with our counit, and we can vacuously duplicate by decorating our existing Id with another layer.

The reader comonad

There’s a comonad with the same capabilities as the reader monad, namely that it allows us to ask for a value:

case class Coreader[R,A](extract: A, ask: R) {
  def map[B](f: A => B): Coreader[R,B] = Coreader(f(extract), ask)
  def duplicate: Coreader[R, Coreader[R, A]] =
    Coreader(this, ask)
}

It should be obvious how we can give a Comonad instance for this (I’m using the Kind Projector compiler plugin to make the syntax look a little nicer than vanilla Scala):

def coreaderComonad[R]: Comonad[Coreader[R,?]] =
  new Comonad[Coreader[R,?]] {
    def map[A,B](c: Coreader[R,A])(f: A => B) = c map f
    def counit[A](c: Coreader[R,A]) = c.extract
    def duplicate[A](c: Coreader[R,A]) = c.duplicate
  }
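
For comparison, here's the same instance written with a bare type lambda, which is what the Coreader[R,?] syntax expands to:

def coreaderComonad[R]: Comonad[({type l[a] = Coreader[R,a]})#l] =
  new Comonad[({type l[a] = Coreader[R,a]})#l] {
    def map[A,B](c: Coreader[R,A])(f: A => B) = c map f
    def counit[A](c: Coreader[R,A]) = c.extract
    def duplicate[A](c: Coreader[R,A]) = c.duplicate
  }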

Arguably, this is much more straightforward in Scala than the reader monad. In the reader monad, the ask function is the identity function. That’s saying “once the R value is available, return it to me”, making it available to subsequent map and flatMap operations. But in Coreader, we don’t have to pretend to have an R value. It’s just right there and we can look at it.

So Coreader just wraps up some value of type A together with some additional context of type R. Why is it important that this is a comonad? What is the meaning of duplicate here?

To see the meaning of duplicate, notice that it puts the whole Coreader in the value slot (in the extract portion). So any subsequent extract or map operation will be able to observe both the value of type A and the context of type R. We can think of this as passing the context along to those subsequent operations, which is analogous to what the reader monad does.

In fact, just like map followed by join is usually expressed as flatMap, by the same token duplicate followed by map is usually expressed as a single operation, extend:

case class Coreader[R,A](extract: A, ask: R) {
  ...
  def extend[B](f: Coreader[R,A] => B): Coreader[R,B] =
    duplicate map f
}

Notice that the type signature of extend looks like flatMap with the direction of f reversed. And just like we can chain operations in a monad using flatMap, we can chain operations in a comonad using extend. In Coreader, extend is making sure that f can use the context of type R to produce its B.
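
To make that concrete, here's a small hypothetical chain of context-aware steps (the values 10 and 2 are arbitrary):

val c = Coreader[Int,Int](10, 2)                  // value 10, context 2
val step1 = c.extend(w => w.extract * w.ask)      // Coreader(20, 2)
val step2 = step1.extend(w => w.extract + w.ask)  // Coreader(22, 2)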

Chaining operations this way using flatMap in a monad is sometimes called Kleisli composition, and chaining operations using extend in a comonad is called coKleisli composition (or just Kleisli composition in a comonad).

The writer comonad

Just like the writer monad, the writer comonad can append to a log or running tally using a monoid. But instead of keeping the log always available to be appended to, it uses the same trick as the reader monad by building up an operation that gets executed once a log becomes available:

case class Cowriter[W:Monoid,A](tell: W => A) {
  def map[B](f: A => B): Cowriter[W,B] = Cowriter(tell andThen f)
  def extract = tell(Monoid[W].zero)
  def duplicate: Cowriter[W, Cowriter[W, A]] =
    Cowriter(w1 => Cowriter(w2 => tell(Monoid[W].append(w1, w2))))
  def extend[B](f: Cowriter[W,A] => B): Cowriter[W,B] =
    duplicate map f
}

Note that duplicate returns a whole Cowriter from its constructed tell function, so the meaning is that subsequent operations (composed via map or extend) have access to exactly one tell function, which appends to the existing log or tally.
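
As a quick sanity check, here's a hypothetical use of Cowriter with the String monoid (assuming Scalaz's instance from scalaz.std.string._ is in scope):

val cw = Cowriter[String,Int](log => log.length)
val cw2 = cw.extend(w => w.tell("abc") + 1)
cw2.extract  // tell("" |+| "abc") + 1 == 4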

Comonad laws

The comonad laws are analogous to the monad laws:

  1. Left identity: wa.duplicate.extract == wa
  2. Right identity: wa.extend(extract) == wa
  3. Associativity: wa.duplicate.duplicate == wa.extend(duplicate)

It can be hard to get an intuition for what these laws mean, but in short they mean that (co)Kleisli composition in a comonad should be associative and that extract (a.k.a. counit) should be an identity for it.

Intuitively, both the monad and comonad laws mean that we should be able to write our programs top-down or bottom-up, or any combination thereof, and have that mean the same thing regardless.

Next time…

In part 2 we’ll look at some more examples of comonads and follow some of the deeper connections. Like what’s the relationship between the reader monad and the reader comonad, or the writer monad and the writer comonad? They’re not identical, but they seem to do all the same things. Are they equivalent? Isomorphic? Something else?

Easy Performance Wins With Scalaz

I’ve found that if I’m using scala.concurrent.Future in my code, I can get some really easy performance gains by just switching to scalaz.concurrent.Task instead, particularly if I’m chaining them with map or flatMap calls, or with for comprehensions.

Jumping into thread pools

Every Future is basically some work that needs to be submitted to a thread pool. When you call futureA.flatMap(a => futureB), both Future[A] and Future[B] need to be submitted to the thread pool, even though they are not running concurrently and could theoretically run on the same thread. This context switching takes a bit of time.

Jumping on trampolines

With scalaz.concurrent.Task you have a bit more control over when you submit work to a thread pool and when you actually want to continue on the thread that is already executing a Task. When you say taskA.flatMap(a => taskB), the taskB will by default just continue running on the same thread that was already executing taskA. If you explicitly want to dip into the thread pool, you have to say so with Task.fork.
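
For example, here's a sketch of the difference (Task.now and Task.fork live on the scalaz.concurrent.Task companion object):

import scalaz.concurrent.Task

val taskA = Task.now(1)
val sameThread = taskA.flatMap(a => Task.now(a + 1))            // continues on the current thread
val forked     = taskA.flatMap(a => Task.fork(Task.now(a + 1))) // explicitly submits to a pool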

This works since a Task is not a concurrently running computation. It’s a description of a computation—a sequential list of instructions that may include instructions to submit some of the work to thread pools. The work is actually executed by a tight loop in Task’s run method. This loop is called a trampoline since every step in the Task (that is, every subtask) returns control to this loop.

Jumping on a trampoline is a lot faster than jumping into a thread pool, so whenever we’re composing Futures with map and flatMap, we can just switch to Task and make our code faster.

Making fewer jumps

But sometimes we know that we want to continue on the same thread and we don’t want to spend the time jumping on a trampoline at every step. To demonstrate this, I’ll use the Ackermann function. This is not necessarily a good use case for Future but it shows the difference well.

// Only defined for non-negative `m` and `n`
def ackermann(m: Int, n: Int): Int = (m, n) match {
  case (0, _) => n + 1
  case (m, 0) => ackermann(m - 1, 1)
  case (m, n) => ackermann(m - 1, ackermann(m, n - 1))
}

This function is supposed to terminate for all non-negative m and n, but if they are modestly large, this recursive definition overflows the stack. We could use futures to alleviate this, jumping into a thread pool instead of making a stack frame at each step:

import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

def ackermannF(m: Int, n: Int): Future[Int] = {
  (m, n) match {
    case (0, _) => Future(n + 1)
    case (m, 0) => Future(ackermannF(m - 1, 1)).flatMap(identity)
    case (m, n) => for {
      x <- Future(ackermannF(m, n - 1))
      y <- x
      z <- Future(ackermannF(m - 1, y))
      r <- z
    } yield r
  }
}

Since there’s no actual concurrency going on here, we can make this instantly faster by switching to Task instead, using a trampoline instead of a thread pool:

import scalaz.concurrent.Task, Task._

def ackermannT(m: Int, n: Int): Task[Int] = {
  (m, n) match {
    case (0, _) => now(n + 1)
    case (m, 0) => suspend(ackermannT(m - 1, 1))
    case (m, n) =>
      suspend(ackermannT(m, n - 1)).flatMap { x =>
      suspend(ackermannT(m - 1, x)) }
  }
}

But even here, we’re making too many jumps back to the trampoline with suspend. We don’t actually need to suspend and return control to the trampoline at each step. We only need to do it enough times to avoid overflowing the stack. Let’s say we know how large our stack can grow:

val maxStack = 512

We can then keep track of how many recursive calls we’ve made, and jump on the trampoline only when we need to:

def ackermannO(m: Int, n: Int): Task[Int] = {
  def step(m: Int, n: Int, stack: Int): Task[Int] =
    if (stack >= maxStack)
      suspend(ackermannO(m, n))
    else go(m, n, stack + 1)
  def go(m: Int, n: Int, stack: Int): Task[Int] =
    (m, n) match {
      case (0, _) => now(n + 1)
      case (m, 0) => step(m - 1, 1, stack)
      case (m, n) => for {
        internalRec <- step(m, n - 1, stack)
        result      <- step(m - 1, internalRec, stack)
      } yield result
    }
  go(m, n, 0)
}

How fast is it?

I did some comparisons using Caliper and made this pretty graph for you:

The horizontal axis is the number of steps, and the vertical axis is the mean time that number of steps took over a few thousand runs.

This graph shows that Task is slightly faster than Future for submitting to thread pools (blue and yellow lines marked Future and Task, respectively), but only for very small tasks: up to around 50 steps, when (on my MacBook) both futures and tasks cross the 30 μs threshold. This difference is probably due to the fact that a Future is a running computation while a Task is partially constructed up front and explicitly run later, so with Future the threads might just be waiting for more work. The overhead of Task.run seems to catch up with us at around 50 steps.

But honestly the difference between these two lines is not something I would care about in a real application, because if we jump on the trampoline instead of submitting to a thread pool (green line marked Trampoline), things are between one and two orders of magnitude faster.

If we only jump on the trampoline when we really need it (red line marked Optimized), we can gain another order of magnitude. Compared to the original naïve version that always goes to the thread pool, this is now the difference between running your program on a 10 MHz machine and running it on a 1 GHz machine.

If we measure without using any Task/Future at all, the line tracks the Optimized red line pretty closely then shoots to infinity around 1000 (or however many frames fit in your stack space) because the program crashes at that point.

In summary, if we’re smart about trampolines vs thread pools, Future vs Task, and optimize for our stack size, we can go from milliseconds to microseconds with not very much effort. Or seconds to milliseconds, or weeks to hours, as the case may be.

Pulling Out of Functional Programming in Java

After giving it a lot of thought I have come to the conclusion that I won’t be involved in “Functional Programming in Java”. There are many reasons, including that I just don’t think I can spend the time to make this a good book. Looking at all the things I have scheduled for the rest of the year, I can’t find the time to work on it.

More depressingly, the thought of spending a year or more writing another book makes me anxious. I know from experience that making a book (at least a good one) is really hard and takes up a lot of mental energy. Maybe one day there will be a book that I will want to forego a year of evenings and weekends for, but today is not that day.

Originally, the content of FPiJ was going to be based on “Functional Programming in Scala”, but after some discussion with the publisher I think we were all beginning to see that this book deserved its own original content specifically on an FP style in Java.

I really do think such a thing deserves its own original book. Since Java is strictly less suitable for functional programming than Scala is, a book on FP in Java will have to lay a lot of groundwork that we didn’t have to do with FPiS, and it will have to forego a lot of the more advanced topics.

I wish the author of that book, and the publisher, all the best and I hope they do well. I’m sorry to let you all down, but I’m sure this is for the best.

A Companion Booklet to FPiS

Our book, Functional Programming in Scala, relies heavily on exercises. Hints and answers for those exercises are not actually in the book, but are freely available on GitHub under a permissive MIT license. Likewise, we have written chapter notes that we reference throughout the book and made them available as a community-editable wiki.

Naturally, readers get the most out of this book by downloading the source code from GitHub and doing the exercises as they read. But a number of readers have made the comment that they wish they could have the hints and answers with them when they read the book on the train to and from work, on a long flight, or wherever there is no internet connection or it’s not convenient to use a computer.

It is of course entirely possible to print out the chapter notes, hints, and exercises, and take them with you either as a hardcopy or as a PDF to use on a phone or tablet. Well, I’ve taken the liberty of doing that work for you. I wrote a little script to concatenate all the chapter notes, errata, hints, and answers into Markdown files and then just printed them all to a single document, tweaking a few things here and there. I’m calling this A companion booklet to “Functional Programming in Scala”. It is released under the same MIT license as the content it aggregates. This means you’re free to copy it, distribute or sell it, or basically do whatever you want with it. The Markdown source of the manuscript is available on my GitHub.

I have made an electronic version of this booklet available on Leanpub as a PDF, ePub, and Kindle file on a pay-what-you-want basis (minimum of $0.99). It has full color syntax highlighting throughout and a few little tweaks to make it format nicely. The paper size is standard US Letter which makes it easy to print on most color printers. If you choose to buy the booklet from Leanpub, they get a small fee, a small portion of the proceeds goes to support Liberty in North Korea, and the rest goes to yours truly. You’ll also get updates when those inevitably happen.

If you don’t care about any of that, you can grab the PDF from here with my compliments.

The booklet is also available from CreateSpace or Amazon as a full color printed paperback. This comes in a nicely bound glossy cover for just a little more than the price of printing (they print it on demand for you). I’ve ordered one and I’m really happy with the quality of this print:

The print version is of course under the same permissive license, so you can make copies of it, make derivative works, or do whatever you want. It’s important to note that with this booklet I’ve not done anything other than design a little cover and then literally print out this freely available content and upload it to Amazon, which anybody could have done (and you still can if you want).

I hope this makes Functional Programming in Scala more useful and more enjoyable for more people.

A Better Reading List With Mathematica

Like a lot of people, I keep a list of books I want to read. And because there are a great many more books that interest me than I can possibly read in my lifetime, this list has become quite long.

In the olden days of brick-and-mortar bookstores and libraries, I would discover books to read by browsing shelves and picking up what looked interesting at the time. I might even find something that I knew was on my list. “Oh, I’ve been meaning to read that!”

The Internet changes this dynamic dramatically. It makes it much easier for me to discover books that interest me, and also to access any book that I might want to read, instantly, anywhere. At any given time, I have a couple of books that I’m “currently reading”, and when I finish one I can start another immediately. I use Goodreads to manage my to-read list, and it’s easy for me to scroll through the list and pick out my next book.

But again, this list is very long. So I wanted a good way to filter out books I will really never read, and sort it such that the most “important” books in some sense show up first. Then every time I need a new book I could take the first one from the list and make a binary decision: either “I will read this right now”, or “I am never reading this”. In the latter case, if a book interests me enough at a later time, I’m sure it will find its way back onto my list.

The problem then is to find a good metric by which to rank books. Goodreads lets users rank books with a star-rating from 1 to 5, and presents you with an average rating by which you can sort the list. The problem is that a lot of books that interest me have only one rating and it’s 5 stars, giving the book an “average” of 5.0. So if I go with that method I will be perpetually reading obscure books that one other person has read and loved. This is not necessarily a bad thing, but I do want to branch out a bit.

Another possibility is to use the number of ratings to calculate a confidence interval for the average rating. For example, using the Wilson score I could find an upper and lower bound s1 and s2 (higher and lower than the average rating, respectively) that will let me say “I am 95% sure that any random sample of readers of an equal size would give an average rating between s1 and s2.” I could then sort the list by the lower bound s1.

But this method is dissatisfactory for a number of reasons. First, it’s not clear how to fit star ratings to such a measure. If we do the naive thing and count a 1-star rating as 1/5 and a 5-star rating as 5/5, that counts a 1-star rating as a “partial success” in some sense. We could discard 1-stars as 0, and count 2, 3, 4, and 5 stars as 25%, 50%, 75%, and 100%, respectively.

But even if we did make it fit somehow, it turns out that if you take any moderately popular book on Goodreads at random, it will have an average rating somewhere close to 4. I could manufacture a prior based on this knowledge and use that instead of the normal distribution or the Jeffreys prior in the confidence interval, but that would still not be a very good ranking because reader review metascores are meaningless.

In the article “Reader review metascores are meaningless”, Stephanie Shun suggests using the percentage of 5-star ratings as the relevant metric rather than the average rating. This is a good suggestion, since even a single 5-star rating carries a lot of actionable information whereas an average rating close to 4.0 carries very little.

I can then use the Wilson score directly, counting a 5-star rating as a successful trial and any other rating as a failed one. I can then just use the normal distribution instead of working with an artisanally curated prior.

Mathematica makes it easy to generate the Wilson score. Here, pos is the number of positive trials (number of 5-star ratings), n is the number of total ratings, and confidence is the desired confidence percentage. I’m taking the lower bound of the confidence interval to get my score.

Wilson[pos_, n_, confidence_] := 
  If[n == 0 || n < pos, 0, 
    With[{
      z = Quantile[NormalDistribution[], 1 - (1 - confidence)/2], 
      p = pos/n
    },
    (-(z Sqrt[(z^2/(4 n) + (1 - p) p)/n]) + z^2/(2 n) + p) /
      (z^2/n + 1)
    ]
  ]

Now I just need to get the book data from Goodreads. Fortunately, it has a pretty rich API. I just need a developer key, which anyone can get for free.

For example, to get the ratings for a given book id, we can use their XML api for books and pattern match on the result to get the ratings by score:

Ratings[id_] := Ratings[id] = 
  With[{
    raw = Cases[Import[ToString[StringForm[
        "http://www.goodreads.com/book/show.xml?key=``&id=``", key, id]]],
      XMLElement["rating_dist", _, {d_}] -> d, Infinity]
  }, 
  With[{
    data = StringSplit[#, ":"] & /@ StringSplit[raw, "|"]
  }, Block[{}, Pause[1];
  FromDigits /@ Association[Rule @@@ First[data]]]]];

Here, key is my Goodreads developer API key, defined elsewhere. I put a Pause[1] in the call since Goodreads throttles API calls so you can’t make more than one call per second to each API endpoint. I’m also memoizing the result, by assigning to Ratings[id] in the global environment.

Ratings will give us an association list with the number of ratings for each score from 1 to 5, together with the total. For example, for the first book in their catalogue, Harry Potter and the Half-Blood Prince, here are the scores:

In[1]:= Ratings[1]

Out[1]= <|"5"     ->  782740,
          "4"     ->  352725,
          "3"     ->  111068,
          "2"     ->   17693,
          "1"     ->    5289,
          "total" -> 1269515|>

Sweet. Let’s see how Harry Potter #6 would score with our rating:

In[2]:= Wilson[Out[1]["5"], Out[1]["total"], 0.95]

Out[2]= 0.61572

So Wilson is 95% confident that in any random sample of about 1.2 million Harry Potter readers, at least 61.572% of them would give The Half-Blood Prince a 5-star rating. That turns out to be a pretty high score, so if this book were on my list (which it isn’t), it would feature pretty close to the very top.

But now the score for a relatively obscure title is too low. For example, the lower bound of the 95% confidence interval for a single-rating 5-star book will be 0.206549, which will be towards the bottom of any list. This means I would never get to any of the obscure books on my reading list, since they would be edged out by moderately popular books with an average rating close to 4.0.

See, if I’ve picked a book that I want to read, I’d consider five ratings that are all five stars a much stronger signal than the fact that people who like Harry Potter enough to read 5 previous books loved the 6th one. Currently the 5*5 book will score 57%, a bit weaker than the Potter book’s 62%.

I can fix this by lowering the confidence level. Because honestly, I don’t need a high confidence in the ranking. I’d rather err on the side of picking up a deservedly obscure book than to miss out on a rare gem. Experimenting with this a bit, I find that a confidence around 80% raises the obscure books enough to give me an interesting mix. For example, a 5*5 book gets a 75% rank, while the Harry Potter one stays at 62%.

I’m going to call that the Rúnar rank of a given book. The Rúnar rank is defined as the lower bound of the 1-1/q Wilson confidence interval for scoring in the qth q-quantile. In the special case of Goodreads ratings, it’s the 80% confidence for a 5-star rating.

RunarRank[id_] := With[{ratings = Ratings[id]},
  Wilson[ratings["5"], ratings["total"], .8]]

Unfortunately, there’s no way to get the rank of all the books in my reading list in one call to the Goodreads API. And when I asked them about it they basically said “you can’t do that”, so I’m assuming that feature will not be added any time soon. So I’ll have to get the reading list first, then call RunarRank for each book’s id. In Goodreads, books are managed by “shelves”, and the API allows getting the contents of a given shelf, 200 books at a time:

GetShelf[user_, shelf_] :=
  With[{raw = 
    Cases[Import[
      ToString[
       StringForm[
        "http://www.goodreads.com/review/list.xml?v=2&key=``&id=``&\
shelf=``&sort=avg_rating&order=d&per_page=200", key, user, shelf]]], 
     XMLElement[
       "book", _, {___, XMLElement["id", ___, {id_}], ___, 
        XMLElement["title", _, {title_}], ___, 
        XMLElement["average_rating", _, {avg_}], ___, 
        XMLElement[
         "authors", _, {XMLElement[
           "author", _, {___, 
            XMLElement[
             "name", _, {author_}], ___}], ___}], ___}] -> {"id" -> 
        id, "title" -> title, "author" -> author, "avg" -> avg}, 
     Infinity]}, Association /@ raw]

I’m doing a bunch of XML pattern matching here to get the id, title, average_rating, and first author of each book. Then I put that in an association list. I’m getting only the top-200 books on the list by average rating (which currently is about half my list).

With that in hand, I can get the contents of my “to-read” shelf with GetShelf[runar, "to-read"], where runar is my Goodreads user id. And given that, I can call RunarRank on each book on the shelf, then sort the result by that rank:

RunarSort[shelf_] :=
 Sort[Map[Function[x, Append[x, "rank" -> RunarRank[x["id"]]]], 
   shelf], #1["rank"] > #2["rank"] &]

To get the ranked reading list of any user:

ReadingList[user_] := RunarSort[GetShelf[user, "to-read"]]

And to print it out nicely:

Gridify[books_] := 
 Grid[Flatten[
   Cases[books, 
    b_ -> {
      {b["id"], b["title"], UnitConvert[b["rank"], "Percent"]},
      {"", b["author"], b["avg"]}, {"", "", ""} }], 1],
   Alignment -> Left]

Now I can get, say, the first 10 books on my improved reading list:

Gridify[ReadingList[runar][[1 ;; 10]]]
9934419 Kvæðasafn 75.2743%
Snorri Hjartarson 5.00
 
17278 The Feynman Lectures on Physics Vol 1 67.2231%
Richard P. Feynman 4.58
 
640909 The Knowing Animal: A Philosophical Inquiry Into Knowledge and Truth 64.6221%
Raymond Tallis 5.00
 
640913 The Hand: A Philosophical Inquiry Into Human Being 64.6221%
Raymond Tallis 5.00
 
4050770 Volition As Cognitive Self Regulation 62.231%
Harry Binswanger 4.86
 
8664353 Unbroken: A World War II Story of Survival, Resilience, and Redemption 60.9849%
Laura Hillenbrand 4.45
 
13413455 Software Foundations 60.1596%
Benjamin C. Pierce 4.80
 
77523 Harry Potter and the Sorcerer’s Stone (Harry Potter #1) 59.1459%
J.K. Rowling 4.39
 
13539024 Free Market Revolution: How Ayn Rand’s Ideas Can End Big Government 59.1102%
Yaron Brook 4.48
 
1609224 The Law 58.767%
Frédéric Bastiat 4.40
 

I’m quite happy with that. Some very popular and well-loved books interspersed with obscure ones with exclusively (or almost exclusively) positive reviews. The most satisfying thing is that the rating carries a real meaning. It’s basically the relative likelihood that I will enjoy the book enough to rate it five stars.

I can test this ranking against books I’ve already read. Here’s the top of my “read” shelf, according to their Rúnar Rank:

17930467 The Fourth Phase of Water 68.0406%
Gerald H. Pollack 4.85
 
7687279 Nothing Less Than Victory: Decisive Wars and the Lessons of History 64.9297%
John David Lewis 4.67
 
43713 Structure and Interpretation of Computer Programs 62.0211%
Harold Abelson 4.47
 
7543507 Capitalism Unbound: The Incontestable Moral Case for Individual Rights 57.6085%
Andrew Bernstein 4.67
 
13542387 The DIM Hypothesis: Why the Lights of the West Are Going Out 55.3296%
Leonard Peikoff 4.37
 
5932 Twenty Love Poems and a Song of Despair 54.7205%
Pablo Neruda 4.36
 
18007564 The Martian 53.9136%
Andy Weir 4.36
 
24113 Gödel, Escher, Bach: An Eternal Golden Braid 53.5588%
Douglas R. Hofstadter 4.29
 
19312 The Brothers Lionheart 53.0952%
Astrid Lindgren 4.33
 
13541678 Functional Programming in Scala 52.6902%
Rúnar Bjarnason 4.54
 

That’s perfect. Those are definitely books I thoroughly enjoyed and would heartily recommend. Especially that last one.

Maximally Powerful, Minimally Useful

It’s well known that there is a trade-off in language and systems design between expressiveness and analyzability. That is, the more expressive a language or system is, the less we can reason about it, and vice versa. The more capable the system, the less comprehensible it is.

This principle is very widely applicable, and it’s a useful thing to keep in mind when designing languages and libraries. A practical implication of being aware of this principle is that we always make components exactly as expressive as necessary, but no more. This maximizes the ability of any downstream systems to reason about our components. And dually, for things that we receive or consume, we should require exactly as much analytic power as necessary, and no more. That maximizes the expressive freedom of the upstream components.

I find myself thinking about this principle a lot lately, and seeing it more or less everywhere I look. So I’m seeking a more general statement of it, if such a thing is possible. It seems that more generally than issues of expressivity/analyzability, a restriction at one semantic level translates to freedom and power at another semantic level.

What I want to do here is give a whole bunch of examples. Then we’ll see if we can come up with an integration for them all. This is all written as an exercise in thinking out loud and is not to be taken very seriously.

Examples from computer science

Context-free and regular grammars

In formal language theory, context-free grammars are more expressive than regular grammars. The former can describe strictly more sets of strings than the latter. On the other hand, it’s harder to reason about context-free grammars than regular ones. For example, we can decide whether two regular expressions are equal (they describe the same set of strings), but this is undecidable in general for context-free grammars.

Monads and applicative functors

If we know that an applicative functor is a monad, we gain some expressive power that we don’t get with just an applicative functor. Namely, a monad is an applicative functor with an additional capability: monadic join (or “bind”, or “flatMap”). That is, context-sensitivity, or the ability to bind variables in monadic expressions.

This power comes at a cost. Whereas we can always compose any two applicatives to form a composite applicative, two monads do not in general compose to form a monad. It may be the case that a given monad composes with any other monad, but we need some additional information about it in order to be able to conclude that it does.
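
Here's a sketch of why applicatives always compose, written against a minimal Applicative trait (the names are mine, not Scalaz's):

trait Applicative[F[_]] {
  def pure[A](a: A): F[A]
  def map2[A,B,C](fa: F[A], fb: F[B])(f: (A,B) => C): F[C]
}

// The composite of any two applicatives is again an applicative:
def compose[F[_],G[_]](F: Applicative[F], G: Applicative[G]) =
  new Applicative[({type l[a] = F[G[a]]})#l] {
    def pure[A](a: A) = F.pure(G.pure(a))
    def map2[A,B,C](fga: F[G[A]], fgb: F[G[B]])(f: (A,B) => C) =
      F.map2(fga, fgb)((ga, gb) => G.map2(ga, gb)(f))
  }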

Actors and futures

Futures have an algebraic theory, so we can reason about them algebraically. Namely, they form an applicative functor which means that two futures x and y make a composite future that does x and y in parallel. They also compose sequentially since they form a monad.
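
Concretely, with scala.concurrent.Future the applicative composition is (roughly) zip, and the monadic one is flatMap:

import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

val x = Future(1)
val y = Future(2)
val both: Future[(Int, Int)] = x zip y                       // x and y run independently
val seq: Future[Int]         = x.flatMap(a => Future(a + 1)) // the second step waits for the first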

Actors on the other hand have no algebraic theory and afford no algebraic reasoning of this sort. They are “fire and forget”, so they could potentially do anything at all. This means that actor systems can do strictly more things in more ways than systems composed of futures, but our ability to reason about such systems is drastically diminished.

Typed and untyped programming

When we have an untyped function, it could receive any type of argument and produce any type of output. The implementation is totally unrestricted, so that gives us a great deal of expressive freedom. Such a function can potentially participate in a lot of different expressions that use the function in different ways.

A function of type Bool -> Bool however is highly restricted. Its argument can only be one of two things, and the result can only be one of two things as well. So there are 4 different implementations such a function could possibly have. Therefore this restriction gives us a great deal of analyzability.

For example, since the argument is of type Bool and not Any, the implementation mostly writes itself. We need to consider only two possibilities. Bool (a type of size 2) is fundamentally easier to reason about than Any (a type of potentially infinite size). Similarly, any usage of the function is easy to reason about. A caller can be sure not to call it with arguments other than True or False, and enlist the help of a type system to guarantee that expressions involving the function are meaningful.
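
For the record, here are the four pure, total implementations:

val id:         Boolean => Boolean = b => b
val complement: Boolean => Boolean = b => !b
val constTrue:  Boolean => Boolean = _ => true
val constFalse: Boolean => Boolean = _ => false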

Total functional programming

Programming in non-total languages affords us the power of general recursion and “fast and loose reasoning” where we can transition between valid states through potentially invalid ones. The cost is, of course, the halting problem. But more than that, we can no longer be certain that our programs are meaningful, and we lose some algebraic reasoning. For example, consider the following:

map (- n) (map (+ n) xs) == xs

This states that adding n to every number in a list and then subtracting n again should be the identity. But what if n actually throws an exception or never halts? In a non-total language, we need some additional information. Namely, we need to know that n is total.

Referential transparency and side effects

The example above also serves to illustrate the trade-off between purely functional and impure programming. If n could have arbitrary side effects, algebraic reasoning of this sort involving n is totally annihilated. But if we know that n is referentially transparent, algebraic reasoning is preserved. The power of side effects comes at the cost of algebraic reasoning. This price includes loss of compositionality, modularity, parallelizability, and parametricity. Our programs can do strictly more things, but we can conclude strictly fewer things about our programs.

Example from infosec

There is a principle in computer security called The Principle of Least Privilege. It says that a user or program should have exactly as much authority as necessary but no more. This constrains the power of the entity, but greatly enhances the power of others to predict and reason about what the entity is going to do, resulting in the following benefits:

  • Compositionality – The fewer privileges a component requires, the easier it is to deploy inside a larger environment. For the purposes of safety, higher privileges are a barrier to composition since a composite system requires the highest privileges of any of its components.
  • Modularity – A component with restricted privileges is easier to reason about in the sense that its interaction with other components will be limited. We can reason mechanically about where this limit actually is, which gives us better guarantees about the security and stability of the overall system. A restricted component is also easier to test in isolation, since it can be run inside an overall restricted environment.

Example from politics

Some might notice an analogy between the Principle of Least Privilege and the idea of a constitutionally limited government. An absolute dictatorship or pure democracy will have absolute power to enact whatever whim strikes the ruler or majority at the moment. But the overall stability, security, and freedom of the people is greatly enhanced by the presence of legal limits on the power of the government. A limited constitutional republic also makes for a better neighbor to other states.

More generally, a ban on the initiation of physical force by one citizen against another, or by the government against citizens, or against other states, makes for a peaceful and prosperous society. The “cost” of such a system is the inability of one person (or even a great number of people) to impose their preferences on others by force.

An example from mathematics

The framework of two-dimensional Euclidean geometry is simply an empty page on which we can construct lines and curves using tools like a compass and straightedge. When we go from that framework to a Cartesian one, we constrain ourselves to reasoning on a grid of pairs of numbers. This is a tradeoff between expressivity and analyzability. When we move from Euclidean to Cartesian geometry, we lose the ability to assume isotropy of space, intersection of curves, and compatibility between dimensions. But we gain much more powerful things through the restriction: the ability to precisely define geometric objects, to do arithmetic with them, to generalize to higher dimensions, and to reason with higher abstractions like linear algebra and category theory.

Examples from everyday life

Driving on roads

Roads constrain the routes we can take when we drive or walk. We give up moving in a straight line to wherever we want to go. But the benefit is huge. Roads let us get to where we’re going much faster and more safely than we would otherwise.

Commodity components

Let’s say you make a decision to have only one kind of outfit that you wear on a daily basis. You just go out and buy multiple identical outfits. Whereas you have lost the ability to express yourself by the things you wear, you have gained a certain ability to reason about your clothing. The system is also fault-tolerant and compositional!

Summary

What is this principle? Here are some ways of saying it:

  • Things that are maximally general for first-order applications are minimally useful for higher-order applications, and vice versa.
  • A language that is maximally expressive is minimally analyzable.
  • A simplifying assumption at one semantic level paves the way to a richer structure at a higher semantic level.

What do you think? Can you think of a way to integrate these examples into a general principle? Do you have other favorite examples of this principle in action? Is this something everyone already knows about and I’m just late to the party?

Taking a Break From Twitter

Today I disabled my Twitter account. Probably only temporarily, but we’ll see. This is just a quick note to let everyone know that I’m not leaving Twitter because of anything specific. It’s not that you said or did anything wrong. I just find that it’s becoming an energy sink for me. I’m putting Twitter away as a measure to control my focus, and to better control what information I consume and produce.

Hopefully this means that I will post more here when I have something interesting to share. If you need to reach me, I’m available by email. My address is runar at this blog’s domain.

(UPDATE 2014-12-21): I’ve recreated my Twitter account, but I’m now using the service in a different way. The biggest change is that I don’t follow anyone. It’s strictly a broadcasting device for me, and not an information consumption device.

At Long Last

In early September 2014, we published Functional Programming in Scala. It is now available from all major booksellers, and from the publisher at manning.com/bjarnason. It’s available as a beautiful paper book, on Kindle and other e-book readers, and as a PDF file.

I just want to share my personal story of how this book came to exist. A much shorter version of this story became the preface for the finished book, but here is the long version.

Around 2006 I was working in Austin and coming up on my 8th anniversary as an enterprise Java programmer. I had started to notice that I was making a lot of the same mistakes over and over again. I had a copy of the Gang of Four’s Design Patterns on my desk that I referred to frequently, and I built what I thought were elegant object-oriented designs. Every new project started out well enough, but became a big ball of mud after a while. My once-elegant class hierarchies gathered bugs, technical debt, and unimplemented features. Making changes started to feel like trudging through a swamp. I was never confident that I wasn’t introducing defects as I went. My code was difficult to test or reuse, and impossible to reason about. My productivity plummeted, and a complete rewrite became inevitable. It was a vicious cycle.

In looking for a more disciplined approach, I came across Haskell and functional programming. Here was a community of people with a sound methodology for reasoning about their programs. In other words, they actually knew what they were doing. I found a lot of good ideas and proceeded to import them to Java. A little later I met Tony Morris, who had been doing the same, on IRC. He told me about this new JVM language, Scala. Tony had a library called Scalaz (scala-zed) that made FP in Scala more pleasant, and I started contributing to that library. One of the other people contributing to Scalaz was Paul Chiusano, who was working for a company in Boston. In 2008 he invited me to come work with him, doing Scala full time. I sold my house and everything in it, and moved to Boston.

Paul co-organized the Boston Area Scala Enthusiasts, a group that met monthly at Google’s office in Cambridge. It was a popular group, mainly among Java programmers who were looking for something better. But there was a clear gap between those who had come to Scala from an FP perspective and those who saw Scala as just a better way to write Java. In April 2010 another of the co-organizers, Nermin Serifovic, said he thought there was “tremendous demand” for a book that would bridge that gap, on the topic of functional programming in Scala. He suggested that Paul and I write that book. We had a very clear idea of the kind of book we wanted to write, and we thought it would be quick and easy. More than four years later, I think we have made a good book.

Paul and I hope to convey in this book some of the excitement that we felt when we were first discovering FP. It’s encouraging and empowering to finally feel like we’re writing comprehensible software that we can reuse and confidently build upon. We want to invite you to the world of programming as it could be and ought to be.

– Rúnar Óli Bjarnason

Boston, August 2014

Monoid Morphisms, Products, and Coproducts

Today I want to talk about relationships between monoids. These can be useful to think about when we’re developing libraries involving monoids, and we want to express some algebraic laws among them. We can then check these with automated tests, or indeed prove them with algebraic reasoning.

This post kind of fell together when writing notes on chapter 10, “Monoids”, of Functional Programming in Scala. I am putting it here so I can reference it from the chapter notes at the end of the book.

Monoid homomorphisms

Let’s take String concatenation and Int addition as example monoids that have a relationship. Note that if we take the length of two strings and add them up, this is the same as concatenating those two strings and taking the length of the combined string:

(a.length + b.length) == (a + b).length

So every String maps to a corresponding Int (its length), and every concatenation of strings maps to the addition of corresponding lengths.

The length function maps from String to Int while preserving the monoid structure. Such a function, that maps from one monoid to another in such a preserving way, is called a monoid homomorphism. In general, for monoids M and N, a homomorphism f: M => N, and all values x:M, y:M, the following law holds:

f(x |+| y) == (f(x) |+| f(y))

The |+| syntax is from Scalaz and is obtained by importing scalaz.syntax.monoid._. It just references the append method on the Monoid[T] instance, where T is the type of the arguments.

This law can have real practical benefits. Imagine for example a “result set” monoid that tracks the locations of a particular set of records in a database or file. This could be as simple as a Set of locations. Concatenating several thousand files and then proceeding to search through them is going to be much slower than searching through the files individually and then concatenating the result sets. Particularly since we can potentially search the files in parallel. A good automated test for our result set monoid would be that it admits a homomorphism from the data file monoid.
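
Such a test is easy to state as a property. Here's a sketch, using ScalaCheck, of the homomorphism law for the length example above:

import org.scalacheck.Prop.forAll

val lengthIsAHomomorphism = forAll { (a: String, b: String) =>
  (a + b).length == a.length + b.length
}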

Monoid isomorphisms

Sometimes there will be a homomorphism in both directions between two monoids. If these are inverses of one another, then this kind of relationship is called a monoid isomorphism and we say that the two monoids are isomorphic. More precisely, we will have two monoids A and B, and homomorphisms f: A => B and g: B => A. If f(g(b)) == b and g(f(a)) == a, for all a:A and b:B then f and g form an isomorphism.

For example, the String and List[Char] monoids with concatenation are isomorphic. We can convert a String to a List[Char], preserving the monoid structure, and go back again to the exact same String we started with. This is also true in the inverse direction, so the isomorphism holds.
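
In code, the two directions of that isomorphism are just toList and mkString:

val f: String => List[Char] = _.toList
val g: List[Char] => String = _.mkString
// g(f(s)) == s, f(g(cs)) == cs, and both preserve concatenation,
// e.g. f(s1 + s2) == f(s1) ++ f(s2)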

Other examples include (Boolean, &&) and (Boolean, ||) which are isomorphic via not.

Note that there are monoids with homomorphisms in both directions between them that nevertheless are not isomorphic. For example, (Int, *) and (Int, +). These are homomorphic to one another, but not isomorphic (thanks, Robbie Gates).

Monoid products and coproducts

If A and B are monoids, then (A,B) is certainly a monoid, called their product:

def product[A:Monoid,B:Monoid]: Monoid[(A, B)] =
  new Monoid[(A, B)] {
    def append(x: (A, B), y: => (A, B)) =
      (x._1 |+| y._1, x._2 |+| y._2)
    val zero = (mzero[A], mzero[B])
  }

But is there such a thing as a monoid coproduct? Could we just use Either[A,B] for monoids A and B? What would be the zero of such a monoid? And what would be the value of Left(a) |+| Right(b)? We could certainly choose an arbitrary rule, and we may even be able to satisfy the monoid laws, but would that mean we have a monoid coproduct?

To answer this, we need to know the precise meaning of product and coproduct. These come straight from Wikipedia, with a little help from Cale Gibbard.

A product M of two monoids A and B is a monoid such that there exist homomorphisms fst: M => A, snd: M => B, and for any monoid Z and morphisms f: Z => A and g: Z => B there has to be a unique homomorphism h: Z => M such that fst(h(z)) == f(z) and snd(h(z)) == g(z) for all z:Z. In other words, the following diagram must commute:

A coproduct W of two monoids A and B is the same except the arrows are reversed. It’s a monoid such that there exist homomorphisms left: A => W, right: B => W, and for any monoid Z and morphisms f: A => Z and g: B => Z there has to be a unique homomorphism h: W => Z such that h(left(a)) == f(a) and h(right(b)) == g(b) for all a:A and all b:B. In other words, the following diagram must commute:

We can easily show that our productMonoid above really is a monoid product. The homomorphisms are the methods _1 and _2 on Tuple2. They simply map every element of (A,B) to a corresponding element in A and B. The monoid structure is preserved because:

(p |+| q)._1 == (p._1 |+| q._1)
(p |+| q)._2 == (p._2 |+| q._2)

And for any other monoid Z, and morphisms f: Z => A and g: Z => B, we can construct a unique morphism from Z to (A,B):

def factor[A:Monoid,B:Monoid,Z:Monoid](z: Z)(f: Z => A, g: Z => B): (A, B) =
  (f(z), g(z))

And this really is a homomorphism because we just inherit the homomorphism law from f and g.

What does a coproduct then look like? Well, it’s going to be a type C[A,B] together with an instance coproduct[A:Monoid,B:Monoid]:Monoid[C[A,B]]. It will be equipped with two monoid homomorphisms, left: A => C[A,B] and right: B => C[A,B] that satisfy the following (according to the monoid homomorphism law):

(left(a1) |+| left(a2)) == left(a1 |+| a2)
(right(b1) |+| right(b2)) == right(b1 |+| b2)

And additionally, for any other monoid Z and homomorphisms f: A => Z and g: B => Z we must be able to construct a unique homomorphism from C[A,B] to Z:

def fold[A:Monoid,B:Monoid,Z:Monoid](c: C[A,B])(f: A => Z, g: B => Z): Z

The simplest thing that could possibly work

Let’s try the simplest data structure that could possibly satisfy the requirements of a coproduct. We will start with two constructors, one for the left side, and another for the right side:

sealed trait These[+A,+B]
case class This[A](a: A) extends These[A,Nothing]
case class That[B](b: B) extends These[Nothing,B]

This certainly allows an embedding of both monoids A and B. But These is now basically Either, which we know doesn’t quite form a monoid. What’s needed is a zero, and a way of appending an A to a B. The simplest way to do that is to add a product constructor to These:

case class Both[A,B](a: A, b: B) extends These[A,B]

Now These[A,B] is a monoid as long as A and B are monoids:

def coproduct[A:Monoid,B:Monoid]: Monoid[These[A, B]] =
  new Monoid[These[A, B]] {
    def append(x: These[A, B], y: These[A, B]) = (x, y) match {
      case (This(a1), This(a2)) => This(a1 |+| a2)
      case (That(b1), That(b2)) => That(b1 |+| b2)
      case (That(b), This(a)) => Both(a, b)
      case (This(a), That(b)) => Both(a, b)
      case (Both(a1, b), This(a)) => Both(a1 |+| a, b)
      case (Both(a, b1), That(b)) => Both(a, b1 |+| b)
      case (This(a1), Both(a, b)) => Both(a1 |+| a, b)
      case (That(b), Both(a, b1)) => Both(a, b1 |+| b)
      case (Both(a1, b1), Both(a2, b2)) => Both(a1 |+| a2, b1 |+| b2)
    }
    val zero = Both(mzero[A], mzero[B])
  }

These[A,B] is the smallest monoid that contains both A and B as submonoids (the This and That constructors, respectively) and admits a homomorphism from both A and B. And notice that we simply added the least amount of structure possible to make These[A,B] a monoid (the Both constructor). But is it really a coproduct?

First we must prove that This and That really are homomorphisms. We need to prove the following two properties:

(This(a1) |+| This(a2)) == This(a1 |+| a2)
(That(b1) |+| That(b2)) == That(b1 |+| b2)

That’s easy. The first two cases of the append method on the coproduct monoid prove these properties.

But can we define fold? Yes we can:

def fold[A:Monoid,B:Monoid,Z:Monoid](
  these: These[A,B])(f: A => Z, g: B => Z): Z =
    these match {
      case This(a) => f(a)
      case That(b) => g(b)
      case Both(a, b) => f(a) |+| g(b)
    }

But is fold really a homomorphism? Let’s not assume that it is, but test it out. Here’s the homomorphism law:

fold(f,g)(t1 |+| t2) == fold(f,g)(t1) |+| fold(f,g)(t2)

What happens if t1 and t2 are both This, or both That?

fold(f,g)(This(a1) |+| This(a2)) ==
fold(f,g)(This(a1)) |+| fold(f, g)(This(a2))

f(a1) |+| f(a2) == f(a1) |+| f(a2)

That holds. But what if we introduce a Both on one side?

fold(f,g)(This(a1) |+| Both(a2, b)) ==
fold(f,g)(This(a1)) |+| fold(f,g)(Both(a2,b))

f(a1 |+| a2) |+| g(b) == f(a1) |+| f(a2) |+| g(b)

So far so good. That holds because of associativity. What about the other side?

fold(f,g)(That(b1) |+| Both(a, b2)) ==
fold(f,g)(That(b1)) |+| fold(f,g)(Both(a,b2))

f(a) |+| g(b1 |+| b2) == g(b1) |+| f(a) |+| g(b2)

No! Something has gone wrong. This will only hold if the Z monoid is commutative. So in general, These[A,B] is not the coproduct of A and B. My error was in the Both constructor, which commutes all B values to the right and all A values to the left.

So what kind of thing would work? It would have to solve this case:

  case (Both(a1, b1), Both(a2, b2)) => ???

We need to preserve that a1, b1, a2, and b2 appear in that order. So clearly the coproduct will be some kind of list!

We could modify the Both constructor this way:

case class Both[A,B](a: These[A,B], b: These[A,B]) extends These[A,B]

And let’s add an empty case for the combined zero:

case class Neither[A,B]() extends These[A,B]

In which case These[A,B] has become a kind of tree, or an unbalanced list of This[A] and That[B] values: a free product of the two monoids A and B. The implementation of append for the coproduct monoid could be:

def coproduct[A:Monoid,B:Monoid]: Monoid[These[A, B]] =
  new Monoid[These[A, B]] {
    def append(x: These[A, B], y: These[A, B]) = (x, y) match {
      case (Neither(), _) => y
      case (_, Neither()) => x
      case (This(a1), This(a2)) => This(a1 |+| a2)
      case (That(b1), That(b2)) => That(b1 |+| b2)
      case (This(a1), Both(This(a2), z)) => Both(This(a1 |+| a2), z)
      case (That(b1), Both(That(b2), z)) => Both(That(b1 |+| b2), z)
      case (Both(z, This(a1)), This(a2)) => Both(z, This(a1 |+| a2))
      case (Both(z, That(b1)), That(b2)) => Both(z, That(b1 |+| b2))
      case _ => Both(x, y)
    }
    val zero = Neither[A,B]()
  }

This append normalizes the list so that consecutive values of the same type are added together.

And we would modify fold to recurse over the tree:

def fold[A,B,Z:Monoid](these: These[A,B])(f: A => Z, g: B => Z): Z =
  these match {
    case Neither() => mzero[Z]
    case This(a) => f(a)
    case That(b) => g(b)
    case Both(a, b) => fold(a)(f, g) |+| fold(b)(f, g)
  }

This is now a homomorphism! We already know that this is so for the This and That cases. And now that the Both case simply appeals to the inductive hypothesis, we know that it holds for Both as well.

Free monoids on coproducts

To better understand what’s going on, let’s try going the other way. What if we start with the coproduct of the underlying sets and get a free monoid from there?

The underlying set of a monoid A is just the type A without the monoid structure. The coproduct of types A and B is the type Either[A,B]. Having “forgotten” the monoid structure of both A and B, we can recover it by generating a free monoid on Either[A,B], which is just List[Either[A,B]]. The append operation of this monoid is list concatenation, and the identity for it is the empty list.
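
As a sketch, this free monoid can be written out directly (freeMonoid is just my name for it; I’m assuming Scalaz’s Monoid, whose append takes its second argument by name):

import scalaz.Monoid

def freeMonoid[A, B]: Monoid[List[Either[A, B]]] =
  new Monoid[List[Either[A, B]]] {
    // Concatenation imposes no equations beyond associativity.
    def append(x: List[Either[A, B]], y: => List[Either[A, B]]) = x ++ y
    val zero = List.empty[Either[A, B]]
  }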

Clearly List[Either[A,B]] is a monoid, but does it permit a homomorphism from both monoids A and B? If so, then the following properties should hold:

(List(Left(a1)) ++ List(Left(a2))) == List(Left(a1 |+| a2))
(List(Right(b1)) ++ List(Right(b2))) == List(Right(b1 |+| b2))

They clearly do not hold! The lists on the left of == will have two elements and the lists on the right will have one element. Can we do something about this?

Well, the fact is that List[Either[A,B]] is not exactly the monoid coproduct of A and B. It’s “too big” in a sense. But if we reduce the list to a normal form that approximates a “free product”, we get a coproduct that matches our definition above. What we need is a new monoid:

sealed class Eithers[A,B](val toList: List[Either[A,B]]) {
  def ++(p: Eithers[A,B])(implicit A: Monoid[A], B: Monoid[B]): Eithers[A,B] =
    Eithers(toList ++ p.toList)
}

object Eithers {
  def apply[A:Monoid,B:Monoid](xs: List[Either[A,B]]): Eithers[A,B] =
    new Eithers(xs.foldRight(List[Either[A,B]]()) {
      case (Left(a1), Left(a2) :: t) => Left(a1 |+| a2) :: t
      case (Right(b1), Right(b2) :: t) => Right(b1 |+| b2) :: t
      case (e, t) => e :: t
    })
  def empty[A,B]: Eithers[A,B] = new Eithers(Nil)
}

Eithers[A,B] is a kind of List[Either[A,B]] that has been normalized so that consecutive As and consecutive Bs have been collapsed using their respective monoids. So it will contain alternating A and B values.
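
For example (a sketch, assuming Scalaz’s Monoid instances for Int and String):

import scalaz.std.anyVal._, scalaz.std.string._

Eithers[Int, String](List(Left(1), Left(2), Right("a"), Right("b"), Left(3))).toList
// => List(Left(3), Right("ab"), Left(3))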

This is now a monoid coproduct because it permits monoid homomorphisms from A and B:

(Eithers(List(Left(a1))) ++ Eithers(List(Left(a2)))) ==
  Eithers(List(Left(a1 |+| a2)))
(Eithers(List(Right(b1))) ++ Eithers(List(Right(b2)))) ==
  Eithers(List(Right(b1 |+| b2)))

And we can implement the fold homomorphism:

def fold[A:Monoid,B:Monoid,Z:Monoid](
  es: Eithers[A,B])(f: A => Z, g: B => Z): Z =
    es.toList.foldRight(mzero[Z]) {
      case (Left(a), z) => f(a) |+| z
      case (Right(b), z) => g(b) |+| z
    }

And this time fold really is a homomorphism, and we can prove it by case analysis. Here’s the law again:

(fold(e1)(f,g) |+| fold(e2)(f,g)) == fold(e1 ++ e2)(f,g)

If either of e1 or e2 is empty then the result is the fold of the other, so those cases are trivial. If they are both nonempty, then they will have one of these forms:

e1 = [..., Left(a1)]
e2 = [Left(a2), ...]

e1 = [..., Right(b1)]
e2 = [Right(b2), ...]

e1 = [..., Left(a)]
e2 = [Right(b), ...]

e1 = [..., Right(b)]
e2 = [Left(a), ...]

In the first two cases, on the right of the == sign in the law, the ++ method on Eithers performs a1 |+| a2 and b1 |+| b2 respectively as it concatenates, keeping the list in normal form. In the other two cases it simply concatenates the lists. On the left of the == sign we fold the lists individually, and the folds will be alternating applications of f and g. So in the first case the law amounts to the fact that f(a1 |+| a2) == f(a1) |+| f(a2), and in the second case the same for g. In the latter two cases it amounts to the fact that folding a concatenation of lists is the same as folding the lists separately and combining the results, which holds by the associativity of the Z monoid. So as long as f and g are homomorphisms, so is fold(_)(f,g). Therefore, Eithers[A,B] really is a coproduct of A and B.
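
As a quick spot-check of the law (a sketch; the identity on Int and String length are both monoid homomorphisms into (Int, +)):

import scalaz.std.anyVal._, scalaz.std.string._, scalaz.syntax.monoid._

val e1 = Eithers[Int, String](List(Left(1)))
val e2 = Eithers[Int, String](List(Left(2), Right("a")))
val f = (n: Int) => n            // identity is trivially a homomorphism
val g = (s: String) => s.length  // length is a homomorphism from (String, ++) to (Int, +)

fold(e1 ++ e2)(f, g) == (fold(e1)(f, g) |+| fold(e2)(f, g))
// => true; both sides evaluate to 4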

Since we have already convinced ourselves that These is a monoid coproduct, we could also simply show that there is a homomorphism from Eithers to These:

def toThese[A:Monoid,B:Monoid](es: Eithers[A,B]): These[A,B] = {
  val m = coproduct[A,B]
  es.toList match {
    case Nil => Neither()
    case Left(a) :: t => m.append(This(a), toThese(Eithers(t)))
    case Right(b) :: t => m.append(That(b), toThese(Eithers(t)))
  }
}

Which would amount to proving that (toThese(xs) |+| toThese(ys)) == toThese(xs ++ ys).

The lesson learned here is to check assumptions and test against laws. Things are not always as straightforward as they seem.

Free Monads and the Yoneda Lemma

Last week I gave a talk on Purely Functional I/O at Scala.io in Paris. The slides for the talk are available here. In it I presented a data type for IO that is supposedly a “free monad”. But the monad I presented is not exactly the same as scalaz.Free, and some people have been asking me why there is a difference and what that difference means.

IO as an application of Free

The Free monad in Scalaz is given a bit like this:

sealed trait Free[F[_],A]
case class Return[F[_],A](a: A) extends Free[F,A]
case class Suspend[F[_],A](s: F[Free[F, A]]) extends Free[F,A]

And throughout the methods on Free, it is required that F is a functor because in order to get at the recursive step inside a Suspend, we need to map over the F somehow.

But the IO monad I gave in the talk looks more like this:

sealed trait IO[F[_],A]
case class Return[F[_],A](a: A) extends IO[F,A]
case class Req[F[_],I,A](req: F[I], k: I => IO[F,A]) extends IO[F,A]

And it could actually be stated as an application of Free:

type IO[F[_],A] = Free[({type λ[α] = (F[I], I => α) forSome {type I}})#λ, A]

So in a very superficial sense, this is how the IO monad relates to Free. The monad IO[F,_] for a given F is the free monad generated by the functor (F[I], I => _) for some type I. And do note that this is a functor no matter what F is.
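
To see why, here is a sketch where a hypothetical Step case class stands in for the existential pair (F[I], I => α):

// Step is a hypothetical stand-in for (F[I], I => α) forSome { type I }.
case class Step[F[_], I, A](fi: F[I], k: I => A)

// map just post-composes with k; we never have to map over F itself,
// which is why F need not be a functor.
def mapStep[F[_], I, A, B](s: Step[F, I, A])(f: A => B): Step[F, I, B] =
  Step(s.fi, s.k andThen f)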

IO as equivalent to Free

There is a deeper sense in which IO and Free are actually equivalent (more precisely, isomorphic). That is, there exists a transformation from one to the other and back again. Since the only difference between IO and Free is in the functors F[_] vs ∃I. (F[I], I => _), we just have to show that these two are isomorphic for any F.

The Yoneda lemma

There is an important result in category theory known as the Yoneda lemma. What it says is that if you have a function defined like this…

def map[B](f: A => B): F[B]

…then you certainly have a value of type F[A]. All you need is to pass the identity function to map in order to get the value of type F[A] out of this function. In fact, a function like this is in practice probably defined as a method on a value of type F[A] anyway. This also means that F is definitely a functor.

The Yoneda lemma says that this goes the other way around as well. If you have a value of type F[A] for any functor F and any type A, then you certainly have a map function with the signature above.

In Scala terms, we can capture this in a type:

trait Yoneda[F[_], A] {
  def map[B](f: A => B): F[B]
}

And the Yoneda lemma says that there is an isomorphism between Yoneda[F,A] and F[A], for any functor F and any type A. Here is the proof:

import scalaz._

def toYo[F[_]:Functor,A](fa: F[A]) = new Yoneda[F,A] {
  def map[B](f: A => B) = Functor[F].map(fa)(f)
}

def froYo[F[_],A](yo: Yoneda[F,A]) = yo.map(a => a)

CoYoneda

Of course, this also means that if we have a type B, a function of type (B => A) for some type A, and a value of type F[B] for some functor F, then we certainly have a value of type F[A]. This is kind of obvious, since we can just pass the B => A and the F[B] to the map function for the functor and get our F[A].

But the opposite is also true, and that is the really interesting part. If we have a value of type F[A], for any F and A, then we can always destructure it into a value of type F[B] and a function of type B => A, at least for some type B. And it turns out that we can do this even if F is not a functor.

This is the permutation of the Yoneda lemma that we were using in IO above. That is, IO[F, A] is really Free[({type λ[α] = CoYoneda[F,α]})#λ, A], given:

trait CoYoneda[F[_], A] {
  type I
  def f: I => A
  def fi: F[I]
}

And the lemma says that CoYoneda[F,A] is isomorphic to F[A]. Here is the proof:

def toCoYo[F[_],A](fa: F[A]) = new CoYoneda[F,A] {
  type I = A
  val f = (a: A) => a
  val fi = fa
}

def froCoYo[F[_]:Functor,A](yo: CoYoneda[F,A]) = Functor[F].map(yo.fi)(yo.f)

Of course, this destructuring into CoYoneda using the identity function is the simplest and most general, but there may be others for specific F and A depending on what we know about them.
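
For example, here is a hypothetical non-identity destructuring, with F = List and A = Int:

import scalaz.std.list._  // Scalaz’s Functor[List], needed by froCoYo

val co = new CoYoneda[List, Int] {
  type I = String
  val f = (s: String) => s.length
  val fi = List("one", "two", "three")
}

froCoYo[List, Int](co)
// => List(3, 3, 5)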

So there you have it. The scalaz.Free monad with its Suspend(F[Free[F,A]]) constructor and the IO monad with its Req(F[I], I => IO[F,A]) constructor are actually equivalent. The latter is simply making use of CoYoneda to say the same thing.

Why bother? The useful part is that CoYoneda[F,_] is a functor for any F, so it’s handy to use in a free monad since we can then drop the requirement that F is a functor. What’s more, it gives us map fusion for free, since map over CoYoneda is literally just function composition on its f component, although in the absence of tail call elimination this last benefit is not as useful in Scala as it could be.
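
Concretely, map on CoYoneda can be sketched like this (mapCoYo is a hypothetical helper; Scalaz’s actual encoding differs in the details):

def mapCoYo[F[_], A, B](c: CoYoneda[F, A])(g: A => B): CoYoneda[F, B] =
  new CoYoneda[F, B] {
    type I = c.I
    val f = c.f andThen g  // fusion: successive maps become function composition
    val fi = c.fi          // the underlying F value is never touched
  }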

I hope that sheds a little bit of light on the Yoneda lemma as well as the different embeddings of free monads.