### Introduction

What software developer likes surprises, especially in production? Not me. At least not when I am on call. ;)

Wouldn't it be great if we were able to **reason** about our code to make our lives as simple as possible when debugging or eliminate debugging altogether?

The big idea of parametricity is that from the type of a parametric (or generic) *"pure"* function we can determine more interesting properties (and more of them) than we can from the type of an equivalent function fixed to specific types.

### Goals

The following article is an attempt to:

- introduce ideas and terms informally, targeting industry practitioners used to mainstream languages and methods
- motivate the utility of various kinds of reasoning applied to software development in practice through familiar examples
- highlight (again through examples) the utility of defining functions to be as generic as possible
- describe tradeoffs in theory and ways to mitigate them using GHC pragmas
- provide links to further reading on the topic for the reader to investigate more formally if desired

### Prerequisites & Terminology

The idea of parametricity only works when three things hold:

1. We reason about **"pure"** functions only, i.e., parametricity cannot hold if a function depends on anything outside of its explicit inputs (or static definitions). Otherwise, we are unable to **reason** about the code in this delightful way. Core business logic (in my experience) can be implemented using "pure" functions in most high-level languages today. However, some languages and ecosystems provide better feedback loops to help you define "pure" functions.
2. Our *"pure"* functions should be **total** for correctness. A total function is one that produces a result for all inputs to the function, even if it takes a while to compute. In practice, this means we can't have exceptions raised or infinite recursion for any input the function takes.
3. Your source language supports **parametric polymorphism**. Parametric polymorphism is when a function or a data type can be written generically so that it handles values without depending on their type in any way. If we have a generic type signature but use runtime reflection to determine the implementation, then all bets are off, and we cannot use parametricity to reason about it.
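As a small illustration of parametric polymorphism, consider a function that swaps the elements of a pair. The `swap` name here is just illustrative (a function of the same name happens to live in `Data.Tuple`); the point is that it handles values of any two types without inspecting them:

```
-- A parametrically polymorphic function: it cannot inspect x or y,
-- so all it can do is rearrange them.
swap :: (a, b) -> (b, a)
swap (x, y) = (y, x)
```

The caller picks the concrete types; the definition never depends on them.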

I will use the terms *properties* and *theorems* interchangeably in this article, preferring the former given my target audience. The core material in this article is loosely based on Philip Wadler's *Theorems for Free!* paper, which uses the term *theorems* exclusively.

### A Familiar Form of Reasoning

Programmers of every stripe already use thinking tools from one form
of reasoning no matter their language of choice: **logical reasoning**.

With logical reasoning, we have three primary thinking tools available to us, which we can use in isolation or combination with each other. They are:

- abduction
- induction
- deduction

#### Abduction

Abduction (in logical reasoning) is the act of attempting to infer the
best explanation for a given set of observations. As programmers
working in industry, we employ this logical reasoning tool, to varying
levels of effectiveness, every time we tackle a bug report. A bug
report usually describes a set of observations by users who have
witnessed our software doing something unexpected. Assuming this
observed behavior is a defect, we then try to formulate hypotheses
about where in the code this could be going wrong based on our
understanding of the codebase and our understanding of how the data
flows through the software. This specific act is *abduction*!

#### Induction

When given a rule, we can sometimes generalize the rule to check if it holds for larger structures. When we have a way to generate the next value, and there is a strict total ordering (i.e., for all distinct x, y ∈ S, either x < y or x > y), then we can use the thinking tool of induction to prove or disprove whether the rule holds.

As an example, given the rule that the sum of all natural numbers
incrementing by one from zero to a number `n` is

```
sumOfNats n = n * (n + 1) / 2
```

we can use induction to show whether this holds with the
following work outline:

1. First, we start with the *base case*. For natural numbers the *base case* is zero, 0: `sumOfNats 0 = 0 * (0 + 1) / 2`. Because anything multiplied by zero is zero, the result of the rule is zero using zero as the input. Our expected value is zero, too (i.e., the sum of all natural numbers incrementing by one starting at zero and ending at zero is zero, right?).
2. Now we assume that the rule holds for some natural number, let's call it `k`, which means we have `sumOfNats k = k * (k + 1) / 2`. All we have done here is alpha conversion in the lambda calculus (i.e., renaming our parameter through the definition consistently). The next step substitutes the next, (k + 1)th, natural number into the rule equation.
3. We expect our sum of natural numbers starting at zero and incrementing by one ending at `(k + 1)` to be equal to the expression `1 + 2 + ... + k + (k + 1)`. Since we assume (from the second step above) that the sum of naturals up to `k` is `k * (k + 1) / 2`, we expect `k * (k + 1) / 2 + (k + 1)` to equal `(k + 1) * (k + 2) / 2`. Now we can use algebraic rearrangement to show whether these are equivalent expressions:
   - Multiply out the terms in the numerator of the expected result: `(k * (k + 1) / 2) + (k + 1) = ((k^2 + k) / 2) + (k + 1)`
   - Expand terms and factor out the constant 1/2: `1/2*k^2 + 3/2*k + 1 = 1/2 * (k^2 + 3k + 2) = 1/2 * (k + 1) * (k + 2)`
   - This factored form, `(k + 1) * (k + 2) / 2`, is exactly the rule-based result of substituting `(k + 1)` for `k` in the rule equation.

You might think there is circular logic involved because, in step 2, we assume the rule holds for the kth number. However, if we tried to *prove* the rule `sumOfNats2 k = k * (k - 1) / 2` held in step 3, you would not be able to reconcile the expected result with the result computed by the (k + 1)th substitution of the rule. You should try it on your own to convince yourself.
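As a quick sanity check (not a proof), we can compare the closed-form rule against a direct sum in Haskell. The `sumOfNatsDirect` helper is a name introduced here just for illustration:

```
-- The closed-form rule from above. Integer division is exact here
-- because n * (n + 1) is always even.
sumOfNats :: Integer -> Integer
sumOfNats n = n * (n + 1) `div` 2

-- The sum spelled out directly: 0 + 1 + ... + n.
sumOfNatsDirect :: Integer -> Integer
sumOfNatsDirect n = sum [0 .. n]
```

Checking that the two agree on, say, `[0 .. 100]` exercises the base case and many inductive steps, though only induction shows they agree for *all* `n`.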

While mathematicians sometimes use this form of logical reasoning, especially in number theory proofs, a growing number of programmers are using induction in proof assistant languages to prove properties of functions acting on data inputs (as opposed to codata).

#### Deduction

When you have a logical rule and the inputs to that rule, you can often use the thinking tool of deduction to arrive at the conclusion.

When we read code to understand why the system worked the way it did
(like when continuing the effort on fixing the defect from a bug
report from the *abduction* section earlier), we might use deduction
to execute the code in our minds before we run it in a debugger.

Deduction is the thinking tool in logical reasoning that no programmer needs an introduction to since we will typically use this at least once a day.

### Give Me More (forms of reasoning)!

As an industry developer, I like being able to take advantage of other people's reusable work, whether it is a reusable library, our automated testing suites, or reasoning thinking tools developed and tested by mathematicians and scientists over many decades or centuries.

Reasoning provides us confidence, helping us know how our code *can* behave. By understanding the properties of units of a system that can be composed, we can build functionality on top of a solid foundation with fewer surprises in later phases of development.

### Getting Started with Parametricity

Let's look at some examples of what I mean by deriving *properties* about a function from its generic type signature.

Below is a type signature of a function. Assuming it is a pure function and that `a` can be any type, how many possible definitions can we give the function such that it typechecks and satisfies our prerequisites above?

```
f1 :: a -> a
f1 = _
```

Let's back up a little and have a quick refresher on how to read Haskell:

- `f1` is the name of the function
- `::` translates to *"has type"*
- the first `a` after `::` is the generic type of the argument to `f1`
- `->` can be read as *"to"* or *"producing"* depending on how you prefer to read function types
- the `a` after the `->` is the generic type of the result of `f1`

So reading the above code snippet out loud might sound like, *"f1 is a function that has type generic a to generic a"*.

Note: lower case types in Haskell (like `a` above) are parametric types. When lower case names or letters appear in a type signature, the only restriction is that when the same name, say `x`, is mentioned twice, both mentions must refer to the same type at the callsite.

The caller of a function determines the specific type used for a generic type.

Now that we can read the Haskell type above, let's fix `a` to a few different types to explore possibilities. We will start by fixing `a` to `String`.

```
f1str :: String -> String
f1str = _
```

If the function `f1str` can depend only on its input, then it could be any transformation. In this case, for the type `(String -> String)`, there are many kinds of transformations from a given `String` to another `String`. For example, we could apply a hash function to the input value, or reverse the given string, or some combination of the two. There are many possibilities.

Now let's try `(Int -> Int)`; there are not that many coinciding "transforms" that are valid for both `String`s and `Int`s. Extending this to any type from input to output, we realize there is only one possible definition, which is the identity function:

```
identity :: a -> a
identity a = a
```

What the above says is that given a value of type `a`, the function `identity` produces that value back as the result.

Yes, ok, that example was trivial, yet do you see how, when we made the types more generic and parametric, we could make more assertions about the definition of the function?

That's a critical piece of the idea behind parametricity.

In this case, we could have compiler machinery generate that function definition for us (the thing on the right-hand side, RHS, of the `=` on the second line is the definition) given just the type we wanted, in a suitably "pure" and total-capable language with parametric polymorphism. What if our compiler machinery could generate our programs for us given just a type signature for less trivial examples? Wouldn't that be mind-boggling?

### Applying Parametricity (in Haskell)

Now let's take a look at a less trivial example:

```
f2 :: (a -> b) -> [a] -> [b]
f2 f as = _
```

We are given a function from type `a` to type `b`. Let's say this is something like `String -> Int`, which could take the string argument and produce its length as the result value. After taking that function, let's say `length`, we would take a list of strings and return a list of ints representing their lengths.

What can we say about this function, even the concrete one defined above? Well, the most likely definition of that is:

```
f2 :: (String -> Int) -> [String] -> [Int]
f2 f ss = map f ss
```
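To make this tangible, here is a small sketch using the generic `f2` with `length` (recall that `String` is `[Char]`, so `length` can be used at type `String -> Int`); the `lengths` name is just for illustration:

```
f2 :: (a -> b) -> [a] -> [b]
f2 f as = map f as

-- Instantiating the generic function at (String -> Int):
lengths :: [String] -> [Int]
lengths = f2 length
```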

Assuming our functions are "pure" and total, by using equational reasoning, we can deduce that `f2` is `map`. The more generic we make the types, the surer we are that there can be no other possible definitions, which could eventually lead to type-directed programming or program derivation by the compiler.

To continue exploring parametricity, we look at a new function `f3`:

```
f3 :: (a -> b) -> Maybe a -> Maybe b
f3 f ma = _
```

First, we need to understand what the `Maybe` type constructor is.

Imagine you live in a world where `null`, `nil`, `None`, `NULL`, or other such representations of the null pointer do not exist. In this world, if you are told there will be a value of type `PandaBear` somewhere, there will be an actual `PandaBear` value provided. Same for `GrizzlyBear`. No ifs, buts, or `null`s.

Sometimes you aren't sure if there will be exactly one value produced. We might only be able to produce zero or one value of a particular type. Therefore, we have to be able to tell this to the caller of the code in some way. We do that with the type `Maybe a`.

#### A `Maybe` Excursion

The following is what the type definition of `Maybe a` looks like in Haskell:

`data Maybe a = Just a | Nothing`

This definition means that there are precisely two ways to construct a value of type `Maybe a`:

- using the `Just` data constructor (as seen on the right-hand side of the equals sign of the definition) and passing a value of type `a`
- using the `Nothing` data constructor (which takes no arguments)
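A tiny sketch of both constructors in action; the bear-flavored names are just illustrative:

```
-- Constructing a Maybe value with each of its two data constructors.
foundBear :: Maybe String
foundBear = Just "PandaBear"

noBear :: Maybe String
noBear = Nothing
```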

To explore how we could use this type we show two functions below:

```
import Prelude hiding (sqrt)
import qualified Prelude

-- We can use our @sqrt@ function like so:
-- >>> sqrt 1
-- Just 1.0
-- >>> sqrt 0
-- Just 0.0
-- >>> sqrt (-1)
-- Nothing
-- >>> sqrt 9
-- Just 3.0
sqrt :: Double -> Maybe Double
sqrt i
  | i >= 0    = Just $ Prelude.sqrt i
  | otherwise = Nothing

-- This function can be used like so:
-- >>> let withDefault = defaulting 60
-- >>> withDefault (Just 30)
-- 30
-- >>> withDefault Nothing
-- 60
defaulting :: a -> Maybe a -> a
defaulting a1 (Just a2) = a2
defaulting a1 Nothing   = a1
```

There are two functions above returning `Maybe a`. The `sqrt` function takes a `Double` value, and if it is negative it will return a `Nothing` value, since our return type is a `Double` (rather than `Complex`). If we had made the type signature of `sqrt` be `Double -> Double`, and we know that no such value as `null`, `nil`, `None`, etc. can exist, then we can deduce a design error in the type signature if we wish to strive for *total* functions. The definition of the `Double -> Double` variety of `sqrt` is problematic due to its ambiguity when given a negative `Double` input value. This process has identified a problem with the design of our function type.

Another way we could have resolved this is by ensuring the only values that can be passed in as inputs are non-negative by restricting the input type, or by expanding the result type to the `Complex` codomain. There are merits to these ideas, but consideration of them is outside the scope of this post.

Note that because the `sqrt` function is concrete in its type - assuming we couldn't see its definition - we could not be sure it is giving us back the square root of the given value. We could instead have defined the function to divide an unknown constant by the given number.

The second function, `defaulting`, takes a value as its first argument and a "maybe" value of the same underlying type, `a` in this case. In the case that the `Maybe a` contains a value of `a`, it will return that value; otherwise, it returns the first input value.

Both `sqrt` and `defaulting` demonstrate the utility of the `Maybe a` type.
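The two also compose naturally. This sketch (with an illustrative `safeSqrt` name, and `Prelude.sqrt` standing in for the qualified `Math.sqrt` used above) falls back to zero for negative inputs:

```
import Prelude hiding (sqrt)
import qualified Prelude

sqrt :: Double -> Maybe Double
sqrt i
  | i >= 0    = Just (Prelude.sqrt i)
  | otherwise = Nothing

defaulting :: a -> Maybe a -> a
defaulting _  (Just a2) = a2
defaulting a1 Nothing   = a1

-- Composition: always produce a Double, defaulting to 0 when the
-- square root is undefined over the reals.
safeSqrt :: Double -> Double
safeSqrt = defaulting 0 . sqrt
```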

#### Back to properties

Let's get back to the original function we were going to investigate: `(a -> b) -> Maybe a -> Maybe b`. Again, assuming it is a pure function, what can we assert from the type signature?

It seems obvious that if we can assume nothing about type `a` or `b`, because they can be ANYTHING AT ALL, then there is only one definition of the function that can exist that uses all arguments. Let's see what this might look like with the definition expanded for `f3 :: (a -> b) -> Maybe a -> Maybe b`:

```
f3 :: (a -> b) -> Maybe a -> Maybe b
f3 f (Just x) = Just $ f x
f3 f Nothing = Nothing
```
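A quick usage sketch (repeating the definition so the snippet stands alone, with illustrative example names):

```
f3 :: (a -> b) -> Maybe a -> Maybe b
f3 f (Just x) = Just (f x)
f3 _ Nothing  = Nothing

-- Applying the function inside the Just case:
example1 :: Maybe Int
example1 = f3 (+ 1) (Just 2)   -- Just 3

-- Nothing passes through untouched:
example2 :: Maybe Int
example2 = f3 (+ 1) Nothing    -- Nothing
```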

Here we are using pattern matching. This allows us to *branch* the code path between the case when a `Just x` value is provided and the case when we are provided `Nothing` as input. It yields (in my opinion) more readable code than typical `if/else` or switch-statement branching.

When we program in a language that doesn't have null pointers as a way of life, the compiler can warn us when we haven't covered all data constructors for a type we are matching on, eliminating one common cause of bugs in software.

We can reduce `f3` to the following definition, where `fmap` can be found in Haskell's Prelude (a Prelude is an automatically imported standard library of functions and types).

```
f3 :: (a -> b) -> Maybe a -> Maybe b
f3 f ma = fmap f ma
```

The basic idea here is that we apply the function of type `(a -> b)` to the value inside the `Maybe a` in the case that it is a `Just a` value. If it is a `Nothing` value, the result of the function is a `Nothing` in the type of `Maybe b`.

By *equational* reasoning we can see that `f3` must be equal to our friend `fmap` in the `Data.Functor` module, because `f` and `ma` are the same values in the same positions on both the left-hand side (LHS) and the right-hand side (RHS). Therefore, `f3 = fmap`.

By analysing the type of `f3` we determined how the function must be defined in order to produce a "pure" and total function without ignoring arguments. As a consequence, we identified that it must be the same function as another already available to us in the Prelude, simply by looking at its type.

### Conclusion

Hopefully, I have been able to illustrate the value of the basic idea of parametricity (even in a very informal way), both as a way to reason about other people's code and as an example of how inspecting type signatures with a basic understanding of parametricity can reveal logical design-level errors in your API design.

To recap, in this informal high-level review of **parametricity**, we have seen that it enables us to:

- analyze third-party code which we may not have the source for
- identify design flaws in our types
- generate code for us when there is only one possible definition, or narrow down the universe of possibilities to a small number

We should beware that writing general code comes at a cost (potentially performance); however, with compilers like GHC we can provide hints to the code generator, such as the `SPECIALIZE` pragma, to specialize for commonly used types and allow for more inlining.
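For example, a `SPECIALIZE` pragma can be attached to an overloaded (class-constrained) function; this is a sketch, with `genericSum` as an illustrative name:

```
-- An overloaded function that works for any Num instance.
genericSum :: Num a => [a] -> a
genericSum = foldr (+) 0

-- Ask GHC to also generate a monomorphic copy optimized for Int,
-- avoiding dictionary passing at that type.
{-# SPECIALIZE genericSum :: [Int] -> Int #-}
```

The generic definition remains available at every other `Num` type; the pragma only adds a specialized copy.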

### Original Source

This article was adapted (for English writing style) from my original Gist in 2014: Original Gist