Casual invocations of Curry-Howard

December 22nd, 2012

The Curry-Howard isomorphism is an important discovery that seems (to me) to be underappreciated in the “working world” of most programming groups I’ve encountered.

Recently, on a client project, I fielded this challenging statement:
“If it really is true that types are propositions and programs are proofs, can you explain what (say) the disjunctive syllogism means as a statement about programs?”

In this case, I had an immediate answer, and I think that others who are coming to terms with Curry-Howard for the first time might also find this example instructive because it touches on some important aspects of reasoning about programs generally.

First, consider what the disjunctive syllogism says as a logical statement:

p + q ≡ ¬p → q

In plain English, this means that saying “either p OR q is true” is exactly equivalent to saying “if p is not true, then q is true”.

Is this equivalence correct? If so, then the two truth tables must agree for every assignment of p and q, given the definitions of disjunction, negation and implication. Let’s check.

p | q | p + q
-------------
F | F | F
F | T | T
T | F | T
T | T | T

p | q | ¬p → q
---------------
F | F | F
F | T | T
T | F | T
T | T | T

The tables are the same, QED, this equivalence is true.
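(If you’d rather let the machine grind through the rows, here is a throwaway Haskell check; implies is a local helper I define here, not anything from the Prelude.)

-- a local helper encoding material implication
implies :: Bool -> Bool -> Bool
implies a b = not a || b

-- True exactly when the two columns agree on every row
check :: Bool
check = and [ (p || q) == (not p `implies` q)
            | p <- [False, True], q <- [False, True] ]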

Now, what does this equivalence mean as a statement about programs? It means that the two types are interchangeable: a term of type p + q can always be translated into a term of type ¬p → q and vice versa, for any types p and q. Before defining such a translation, let’s consider what each side of the equivalence means when interpreted as a type.

The left side is easy enough: it describes the type of variants which can be made from either a p OR a q, for any types p and q.

As typically presented, we can construct a value of type p + q using the left constructor (making the variant with a p) or the right constructor (making the variant with a q):

left  42          :: Int + String
right "forty-two" :: Int + String

Finally, we can destruct a value V of type p + q using the case construct, which will evaluate one or another branch of code depending on whether the variant is a left or a right:

case V of
   left  x . "int("    ++ show x ++ ")"
   right y . "string(" ++ show y ++ ")"
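For a concrete reference point, this pseudocode maps directly onto Haskell’s Either type, if we let Either stand in for + (the names exLeft, exRight and describe here are mine):

-- construction: the constructor records which alternative we have
exLeft, exRight :: Either Int String
exLeft  = Left 42
exRight = Right "forty-two"

-- destruction: case analysis branches on the constructor
describe :: Either Int String -> String
describe v = case v of
  Left  x -> "int("    ++ show x ++ ")"
  Right y -> "string(" ++ y      ++ ")"   -- y is already a String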

So whatever ¬p → q is, it has to allow construction and destruction exactly as with variants above. But what does ¬p → q mean as a type? We should consider this in two parts: the meaning of implication as a type, and the meaning of negation as a type.

Implication is simple enough: it represents the type of functions of one argument. If we were considering p → q, then we would just have to make a function that takes a p as input and returns a q as output. If you wrote a function to read floating-point values from strings, its type would probably be String → Float. According to Curry-Howard, this function can be constructed (equivalently, this proposition is true) if, whenever the argument type can be constructed, the result type can be constructed too. (Of course, if the argument type can’t be constructed, then the function body can do any kind of crazy thing it wants; the function can still be constructed, it just can never be invoked.)
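For instance, a deliberately naive Haskell witness of that type might look like the following; parseFloat is a name I’ve made up, and read will simply crash on malformed input, a detail this sketch ignores:

-- a witness for String → Float, ignoring parse failure entirely
parseFloat :: String -> Float
parseFloat = read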

Whatever ¬p → q is, it’s going to have to allow construction and destruction like a p + q variant, and it’s also going to be a function taking a ¬p (whatever that is) and returning a q. It’s interesting that this function will always return a q, since a p + q could yield either a p OR a q. So whatever a ¬p is, it’s going to have to let us get at a p and not a q. Weird.

Now we’re left having to explain what negation means on types: what does a ¬p term look like? Negation plays a pivotal role in the divergence between classical logic and intuitionistic logic, and that break can inform our interpretation of negation as well. In fact, negated types are just a special kind of function type, the type of continuations, and so we can read ¬p as p → F (where F is the “false type”: the type which has no values, usually written ⊥).
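In Haskell terms, assuming the empty type Void and its eliminator absurd from Data.Void (shipped in the void package, and in base on later GHCs), this reading is a one-liner:

import Data.Void (Void, absurd)

-- ¬p as "a continuation expecting a p": a function into the empty
-- type, which therefore can never actually return a value
type Not p = p -> Void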

Let’s consider for a moment whether or not this interpretation of p → F meets with the intuition that we (may or may not) have about continuations. A continuation can be treated like a function — that is, you can invoke it with a value — and yes, this is true of p → F. Also, invoking a continuation should never return which is another way of saying that you should never get a value out of it, and in fact that’s exactly the case with p → F as well, since the type F has no values!

With all of that in mind, how does the type ¬p → q act like the variant p + q? It gives us a way to destruct the variant in terms of normal control flow! That is, you can destruct this “variant” by giving it a continuation for the case where it represents a p (which it can invoke in that case), else it must be a q and that’s what will be returned (if the continuation is invoked, a q will never be returned of course).

So, just as before let’s consider how to construct a ¬p → q to act like the variant p + q.

First the left case:

λk:¬p . k 42 :: ¬p → q    -- same as 'left 42'

This value is equivalent to left 42 because, when destructed (as we will see shortly), it will force the evaluation of just the left code-path (i.e. the continuation supplied). Strictly speaking, the body k 42 has type F rather than q, but a term of the empty type can be promoted to any type at all (ex falso quodlibet), and operationally the point is moot: invoking the continuation never returns.

Now the right case:

λk:¬p . "forty-two" :: ¬p → q    -- same as 'right "forty-two"'

This value is equivalent to right "forty-two" because it destructs to the right case.
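Using the Not alias and absurd from the sketch above, both constructors can be written in ordinary Haskell (leftC and rightC are my names for them):

-- 'left 42': hand the p to the continuation; the call never returns,
-- and absurd promotes its empty result type to q (ex falso again)
leftC :: p -> (Not p -> q)
leftC x = \k -> absurd (k x)

-- 'right "forty-two"': ignore the continuation, return the q directly
rightC :: q -> (Not p -> q)
rightC y = \_ -> y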

Now let’s examine that destruction, the equivalent of case-analysis on variants, assuming a variable V that represents our “variant” of type ¬p → q:

"string(" ++ show (V (λx:Int. "int(" ++ show x ++ ")")) ++ ")"

Written this way, the case-analysis appears somewhat awkward: you write it as if the right case will be matched, but when you examine the “variant” you supply a continuation to handle the possibility that the left case was actually matched.
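It’s worth noting which half of the equivalence is doing the heavy lifting here. The direction from p + q into ¬p → q can be written in plain Haskell (again with Not and absurd from above); the reverse direction, recovering an honest variant from a ¬p → q, needs first-class control operators such as callCC, and that is precisely where the classical/intuitionistic break mentioned earlier shows up.

-- p + q  ==>  ¬p → q : definable with no control tricks at all
toCPS :: Either p q -> (Not p -> q)
toCPS (Left  x) k = absurd (k x)
toCPS (Right y) _ = y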

Now we’ve witnessed this type-level equivalence at the level of programs, and we can draw a number of conclusions from it. First, if you happen to work in a language with continuations but no variants, you can simulate variants according to the scheme described by the disjunctive syllogism. Second, if we’re writing a compiler, the disjunctive syllogism gives us a plan for implementing variants (in fact, GCC and other compilers perform exactly this translation when compiling if/then/else terms, which are just the destruction side of the popular variant bool, aka 1 + 1). Finally, and maybe most importantly, the disjunctive syllogism gives us a sound principle for reasoning about the behavior of our programs. I intend to elaborate on this last point in a future post, because the fact that we can use existing systems of logic to analyze our programs offers our industry a way out of the poorly specified and overly broad tools of reasoning that pervade it today.
