Sunday, January 20, 2019

Syntax for composition of functions

Whenever I see function composition in a new context, I make sure to look up its precise definition.  There is a choice that must be made in the definition that has always seemed arbitrary to me...at least until now.  I no longer think the choice is completely arbitrary, and I have an opinion about which choice is better.

The choice that I am referring to is about the order in which the two functions being composed appear next to the composition operator (which I find is always an infix operator).  Typically when defining function composition, the names given to the two functions being composed are f and g.  Since I am trying to contrast two possible orderings, I don't want the implied alphabetic ordering between f and g to introduce bias.  So instead, I will use the symbols # and & to denote the two functions being composed.

For a set S, suppose # : S -> S and & : S -> S are two unary functions from S to S (i.e. they are endofunctions).  Furthermore, let's have the function composition operator be implicit, which just means that #& is a composition of these two functions.  The ambiguity is this: does #& mean that # is evaluated first and then &, or is it the other way around?  More specifically, is the composition operator defined so that #&(s) = &(#(s)) or is it defined so that #&(s) = #(&(s))?
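
To make the ambiguity concrete, here is a minimal C# sketch (the functions increment and double, standing in for # and &, are examples of my own choosing).  The two candidate definitions give different results on the same input:

```csharp
using System;

public static class CompositionDemo
{
    public static void Main()
    {
        // increment plays the role of # and double plays the role of &.
        Func<int, int> increment = s => s + 1;
        Func<int, int> @double = s => s * 2;

        // Definition 1: #&(s) = &(#(s)), so # is evaluated first.
        Func<int, int> hashThenAmp = s => @double(increment(s));

        // Definition 2: #&(s) = #(&(s)), so & is evaluated first.
        Func<int, int> ampThenHash = s => increment(@double(s));

        Console.WriteLine(hashThenAmp(3)); // (3 + 1) * 2 = 8
        Console.WriteLine(ampThenHash(3)); // (3 * 2) + 1 = 7
    }
}
```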

In my experience with mathematics, I have seen it defined both ways.  If I try to mentally convert my experiences to statistics, the anecdotal conclusion I reach is that I have seen it defined each way about half the time.  That is one reason why the choice seemed arbitrary to me.  If there were an overall advantage to one definition, I would expect to see more of a consensus.

The other reason that this choice seemed arbitrary to me is that I could think of exactly one argument in favor of each definition, and these two arguments seemed about equally strong to me.

The advantage of #&(s) = &(#(s)) is that the order in which the functions are evaluated is the order in which they appear in the composition syntax.  Technically, this advantage depends on the implicit assumption that we are encountering this notation in the context of a natural language like English, where the implied reading order is left-to-right (and then top-to-bottom).

On the other hand, the advantage of #&(s) = #(&(s)) is that when expanding the left side into the right side, the order in which the functions appear does not need to change.

So until recently, I assumed that the first definition was more likely to be used in situations that involve composing many functions, while the second definition was more likely to be used in situations that involve decomposing a composition in order to evaluate its functions one at a time.

But now I think the first choice of #&(s) = &(#(s)) is superior.

To explain why, let's start with just a single function f.  It is very common to see the evaluation of f on an input x written as f(x).  Sometimes the input to f is not a single value like x but an expression like a + b, as in f(a + b).  In this case, a + b is computed first, and then f is evaluated on the resulting value.  But this means that the computation is not occurring in the order implied by English: the computation happens from right-to-left, but we read the expression from left-to-right.

The placement of the function (or operator) f before its input x, as in f(x), is called prefix notation.  If f appears after its input x, as in (x)f, then it is called postfix notation.  If we were using postfix notation for function application (and kept using an implicit infix notation for function composition), then the two potential definitions for function composition would be (s)#& = ((s)#)& and (s)#& = ((s)&)#.  Now (s)#& = ((s)#)& is the obvious winner.  It has both of the advantages that we pointed out before (namely, the functions appear in the "English" left-to-right order in which they are evaluated, and the order of the functions is unchanged when decomposing the composition), leaving no advantages for the other definition.

With this in mind, I can rephrase the advantage of preserving the syntactic ordering when decomposing a composition.  A single function is written with its input using prefix notation.  So to define function composition as #&(s) = #(&(s)) is to be more consistent with the prefix notation for function evaluation.  I have to admit, that is a very convincing reason to go with this definition.

When I design a software application, I try to break up the problem into several loosely coupled pieces.  One reason for this is to help isolate any bad choices.  In this case, I am suggesting that using prefix notation for function evaluation was a bad choice.  As such, I don't want to feel obligated to perpetuate that bad choice into the definition of function composition as well.  Instead, I want to reconsider the advantages and disadvantages and give #&(s) = &(#(s)) a reasonable chance at being the chosen definition.

C# doesn't have a function composition operator.  It is easy to define an extension method to do this, but I haven't found it to be that useful.  Instead, I often use an extension method called Apply that acts as an infix operator taking a unary function on its right and the input to the unary function on its left.  If you squint so that Apply disappears, then this is a way to express function application in C# using postfix notation.  Then instead of first composing two functions f and g and then evaluating the result on an input x, I use Apply twice by writing x.Apply(f).Apply(g).  Now the computation occurs in essentially the same ordering implicit in English, namely left-to-right.
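
For reference, here is a minimal sketch of what such an Apply extension method might look like (the generic signature below is one reasonable choice, not the only one):

```csharp
using System;

public static class FunctionExtensions
{
    // The input appears on the left and the unary function on the right,
    // which reads as postfix function application.
    public static TResult Apply<T, TResult>(this T input, Func<T, TResult> f) =>
        f(input);
}

public static class Program
{
    public static void Main()
    {
        Func<int, int> f = n => n + 1;
        Func<int, string> g = n => $"value: {n}";

        // Reads left-to-right: start with 3, apply f, then apply g.
        Console.WriteLine(3.Apply(f).Apply(g)); // value: 4
    }
}
```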

This is very similar to the way pipelines are created in Unix.  As such, I find piped Unix commands natural to read (well...except for the abundant use of acronyms).

Recall that Apply is a binary function and f and g are unary functions.  We can improve readability (by reducing the noise) with a bit more work.  If we turn f and g into extension methods (and capitalize them, as is convention), then we can write x.F().G().  And now we can increase the expressiveness by passing additional values to F and G.  This type of syntax is called method chaining and is used to create fluent interfaces.  In his book Functional Programming in C#, Enrico Buonanno recommends this style, saying
The method chaining syntax...provides a more readable way of achieving function composition in C#.
[Section 5.1.2 on Page 104]
Notice that sometimes the method chaining is written with x, F, and G each on their own lines.  This is still consistent with the implicit ordering in English since we read top-to-bottom.
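
As a rough sketch (assuming, purely for illustration, that f and g operate on int), turning them into extension methods might look like this; the optional parameters show the added expressiveness, and the second chain shows the one-call-per-line layout:

```csharp
using System;

public static class IntExtensions
{
    // f as an extension method; the extra parameter adds expressiveness.
    public static int F(this int x, int amount = 1) => x + amount;

    // g as an extension method.
    public static int G(this int x, int factor = 2) => x * factor;
}

public static class Program
{
    public static void Main()
    {
        // The chain reads left-to-right, like English.
        Console.WriteLine(3.F().G()); // (3 + 1) * 2 = 8

        // The same chain with each call on its own line reads top-to-bottom.
        int result = 3
            .F(10)  // 3 + 10 = 13
            .G(3);  // 13 * 3 = 39
        Console.WriteLine(result); // 39
    }
}
```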

As a final example, consider the operators $ and . in Haskell.  (Strictly speaking, . is the composition operator, while $ is low-precedence function application, but both are used to chain functions together.)  When writing code in Haskell that uses these operators, you will tend to write code that effectively executes in the "wrong" order, i.e. from right-to-left.  As an alternative, you can compose the functions in the opposite order using the >>> operator.
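
The same left-to-right composition can be approximated in C# with an extension method.  Here is a hypothetical sketch (the name Then is my own; it is not a standard API):

```csharp
using System;

public static class ComposeExtensions
{
    // Left-to-right composition in the spirit of Haskell's >>>:
    // first.Then(second) applies first, then second.
    public static Func<T, TResult> Then<T, TMiddle, TResult>(
        this Func<T, TMiddle> first,
        Func<TMiddle, TResult> second) =>
        x => second(first(x));
}

public static class Program
{
    public static void Main()
    {
        Func<int, int> increment = n => n + 1;
        Func<int, string> show = n => $"got {n}";

        // increment is evaluated first, then show: left-to-right order.
        Func<int, string> pipeline = increment.Then(show);
        Console.WriteLine(pipeline(3)); // got 4
    }
}
```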

The ultimate goal here is to write readable code.  We often say things like "this code is readable" or "that code is not readable" as though readability is a binary condition.  In fact, it is a continuum for each person and also varies between people.  What we all (well, many of us) have in common is fluency with a natural language like English that reads left-to-right and top-to-bottom.  By tapping into that shared experience, we can design programming languages and write code that is more readable for all of us.

2 comments:

  1. Tyson,

    Insightful post.

    It appears you assume that the language uses strict evaluation. Does your opinion on function composition change for languages that use non-strict evaluation semantics? For instance, with call-by-name or call-by-need (lazy), in your example of f(a + b), f is always evaluated first before a + b.

    Replies
    1. Hello Kyle,

      Thanks for commenting :)

      I agree that my language implies that I am assuming strict (or eager) evaluation [1]. This assumption was not a conscious choice. However, I think my opinion is independent of the evaluation strategy.

      On its face, the source code does not specify a particular evaluation strategy. Technically, each language specifies an evaluation strategy, but an alternative compiler could be written for each language that uses a different evaluation strategy.

      Anyway, when reading source code, I would prefer not to have to think about such low-level details as the evaluation strategy. Furthermore, if the code is pure [2], then all evaluation strategies will have the same externally visible behavior.

      I think it is true that eager evaluation is the simplest evaluation strategy. As such, I want to take advantage of that simplicity by assuming that this is the evaluation strategy when reading source code.

      [1] https://en.wikipedia.org/wiki/Eager_evaluation
      [2] https://en.wikipedia.org/wiki/Pure_(programming_language)
