The (PLT) future is here

James Swaine has just announced the availability of futures for mzscheme. Still a work in progress, this is the first step towards native-thread support in PLT Scheme, in the form of a parallelisation library. More concretely, we learn from the documentation that the futures API “represents a best-effort attempt to execute an arbitrary segment of code in parallel”. One creates a future by passing it a thunk, which starts running immediately, in parallel. Non-parallelisable procedures are detected, and cause the independent thread to block until the main thread touches the future: one still needs to write some code to orchestrate the parallel gig. A bit rough, but this is just a pre-alpha release: i’m sure things will only get better and better!

As mentioned, this new functionality is only available for mzscheme, so you’ll need to check out the code from svn and configure the tree with an incantation along the lines of:

$ mkdir build
$ cd build
$ ../src/configure --enable-futures --disable-mred
$ make && make install

That worked for me. Afterwards, i wrote the CPU burner suggested by James:

#lang scheme

(require scheme/future)

(define (loop) (loop))
(define (run)
  (for-each
   touch
   (for/list ([i (in-range 0 (processor-count))])
     (future loop))))
(run)

and, lo and behold, running mzscheme cpu-burner.ss made my two little cores beat at top speed.

Of course, haskellers will hardly be moved: runtime support for multicore execution using sparks is already available in ghc, with a more robust implementation.
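For the curious, here’s what a similar two-core burner looks like with sparks; a minimal sketch of my own (not from the PLT announcement), assuming GHC’s Control.Parallel module:

import Control.Parallel (par, pseq)

-- a deliberately expensive pure computation to keep a core busy
fib :: Integer -> Integer
fib n = if n < 2 then n else fib (n - 1) + fib (n - 2)

-- spark the evaluation of a in parallel, compute b on the main thread
main :: IO ()
main = a `par` (b `pseq` print (a + b))
  where a = fib 34
        b = fib 33

Compile with ghc --make -threaded and run with +RTS -N2 so that the runtime can schedule the spark on a second core.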

Happy parallel hacking!

Enjoying Haskell

I’ve been reading about Haskell quite a bit during the last few months, writing some actual code, and liking the language more and more. After many years favouring dynamically typed languages, i’m beginning to really appreciate Haskell’s type system and the benefits it brings to the table.

A common argument from the static typing camp is that the compiler catches a whole class of bugs for you, to which the dynamic camp answers that a good test suite (which you need anyway for any serious development) will catch those relatively trivial bugs for you. I tend to agree with the dynamic faction on this issue, but then i think that the strength of static typing (coupled with good type inference) is not at all about the compiler catching typing bugs but, rather, about enforcing useful constraints. When you write Haskell, you have to think hard about your data types and the functions using them; the compiler will keep complaining and, most importantly, the code will feel awkward and somehow ad hoc until you find a good set of types to solve your problem.

The limits the type system imposes on your freedom entail, in my experience, a boost in the amount of thought and imagination that i put into my design and implementation, in much the same way as the constraints imposed by metre and rhythm on poetic writing boost creativity and actually help to produce a beautiful text. Or, in fact, in the same way as any kind of constraint in any creative endeavour helps (paradoxically, at first sight) in attaining beauty, or, at least, in having fun during the process.

In my experience, the process of writing a program or library in any language is always a struggle to find the right way of expressing the solution to a problem, the culmination of a series of approximations. The code feels better, more expressive and easy-flowing with each rewrite, until something just clicks and i get this feeling that i’ve finally captured the essence of the problem (a litmus test being that it then becomes natural to extend the solution to cases i hadn’t thought of when writing it, as if, somehow, the new solutions were already there, waiting for you to discover them). And i’m finding that the powerful type system offered by Haskell (think not only of vanilla Hindley-Milner, but also of extensions such as GADTs or type families) is helping me reach the (local) optimum quicker, and that satisfying its constraints means i’m closer to the final solution when my code compiles for the first time. You often hear Haskell programmers saying something similar (“once my code compiles, it works”), and i think it’s mostly true, except that the real reason is not that the compiler is catching trivial typing bugs but, rather, that the constraints imposed by the type system are making you think harder and find the right solution. The same goes for monads, and the clean separation they provide for stateful computations: again, you must think carefully about the right combination of monads and pure code to solve the problem, and most of the time your code will simply not type-check if you don’t get the solution right.

There are two more ways in which Haskell’s type system is helping me write better programs. Two ways that become especially noticeable when the code grows sizeable enough. The first one is self-documentation: seeing the type of my functions (or asking the interpreter for them) instantly tells me almost everything i need to know to use them; in fact, when writing in dynamic languages i keep annotating function signatures with this same information, only that there i’m all by myself to ensure that this information is right. PLT’s contract system is but a recognition of the usefulness of typing in this regard, although i much prefer the terseness and notational elegance of Haskell’s type signatures over the much more verbose and, to my eyes, somewhat clunky notation used by PLT (which is not really PLT’s fault, being as it is a very schemish notation). Let me stress here that having a REPL such as ghci is a godsend (and, to me, a necessity for really enjoying the language): it will tell me the type of an expression in much the same way as decent Lisp or Scheme environments will report a function’s signature.
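For instance, a quick ghci session (these are the standard Prelude signatures):

Prelude> :type map
map :: (a -> b) -> [a] -> [b]
Prelude> :type foldr
foldr :: (a -> b -> b) -> b -> [a] -> b

One glance at those arrows tells you almost everything you need in order to use the functions.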

The second way Haskell lends a helping hand with non-trivial code bases is refactoring. As i mentioned above, i rewrite my programs several times as a rule, and rewrites almost always involve modifying data structures or adding new ones. As i grow older, i find it more and more difficult to keep in my head all the places and ways a given data structure is used in my programs, and with dynamic languages i often fall back to grepping the source code to find them. And again, their plasticity often works against me, in that they let me use those data structures in crooked ways, or forget to take into account new fields or constructors for a modified data type. Haskell’s compiler has proved an invaluable ally in my refactorings and, by comparison, modifying and maintaining my bigger dynamic programs is not as fun as it used to be.
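A toy illustration of the kind of help i mean (hypothetical types, of course):

data Shape = Circle Double | Rect Double Double

area :: Shape -> Double
area (Circle r) = pi * r * r
area (Rect w h) = w * h

-- if Shape later grows a constructor, say Triangle Double Double,
-- recompiling with -Wall (or -fwarn-incomplete-patterns) points at
-- area and every other function that forgot to handle the new case

No grepping required: the compiler walks the code base for you.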

As an aside, types are not the only thing i’m finding enjoyable about Haskell. Its astonishing capability to express very abstract problems with a remarkable economy of expression (due, in part, to its highly tuned syntax) is extremely useful. To my mind, it mimics the process by which in maths we solve harder and harder problems by abstracting more and more, cramming more relevant information into less space (some cognitive science writers will tell you that thought, and even consciousness, consists in our ability to compress information). That means that i can express my solutions by capturing them in very high-level descriptions: initially, that makes them harder to understand, but once i feel comfortable with the basic concepts and operations, they scale up much, much better than more verbose, less sophisticated ones. Using these new hard-earned concepts, i can solve much harder problems without adding to the complexity of the code in a significant way (one could say, using a loose analogy, that the solutions grow logarithmically with the problem’s complexity instead of polynomially or exponentially). A direct consequence of this expressiveness is that some well-written Haskell programs are, hands down, the most beautiful pieces of code i’ve ever seen (just pick a random post at, say, A Neighborhood of Infinity and you’ll see what i mean; or read Richard Bird’s Sudoku solver and compare his solution with one written in your favourite programming language).

Finally, let me say that i find programming in Haskell more difficult than programming in any other language i’ve used, with perhaps the only exception of Prolog. Sometimes, considerably so. And that’s perfectly fine with me. For one thing, it makes it more interesting and rewarding. In addition, i’m convinced that that’s the price to pay for being able to solve harder problems. I take issue with the frequent pleas to the effect that programming should be effortless or trivial: writing good programs is hard, and mastering the tools for doing it well takes, as with any other engineering or scientific discipline, hard work (why, i don’t hear anyone complaining that building bridges or computing the effects of gravitational lensing is too difficult). There’s no silver bullet.

All that said, please don’t read the above as an apostasy letter announcing the embrace of a new religion. There’s still much to be said in favour of dynamic languages, especially those in the Lisp family, whose malleability (fostered by their macro systems) is also a strength, in that it allows you to replicate some of the virtues i’ve been extolling in this post. Haskell lacks the power of homoiconicity, and its template mechanisms feel clunky at best, which is a serious drawback in some contexts (i have yet to decide how serious, as i have yet to decide how much i miss those reflection capabilities). As always, it is a matter of trade-offs and, fortunately, nobody will charge you with high treason for using the language best fitted to the problem at hand, or so i hope.

flib

Update: We’ve moved the date of our first meeting to June 17th, so you’re still in time to join us! If you want to follow our adventures, you can also ask for an invitation to our mailing list.

The other day, Andy and I met Jos, an experienced schemer who lives near Barcelona, with the idea of having lunch, talking about Scheme, and creating a Scheme users group. After a bit of discussion, we agreed to widen the group’s scope, and to start what we’re calling Fringe Languages In Barcelona (FLIB). The plan is to conduct periodic meetings with a main presentation followed by some lightning talks (the latter were a complete success at ILC, and we’d like to try and see how they work for us), with as much discussion interleaved as we see fit. We’ll have some refreshments available and, since we’re meeting in the very center of the old city, visits to pubs or a restaurant for dinner and further socializing are to be expected.

As i said, we’re expecting much discussion about Scheme and Lisp, but we’re not ruling out other fine languages by any means. For instance, the talk for the inaugural session (scheduled for June 17th, 7:30 pm) is entitled The implementation of FUEL, Factor’s Ultimate Emacs Library, and it will include a short introduction to Factor (yes, i am the victim, er, speaker). Jos will come next, the same day, with a lightning talk about PLT Redex. We have free slots for more lightning talks: you are invited not only to come, but to give one if you’re so inclined. This being our first meeting, there will also be some time for logistics and organisation.

So, if you’re near here by then, by all means, come in and join the fun:

Calle del Pi 3 Principal Interior (first floor)
Barcelona

Not really needed, but if you’re thinking about coming, sending me a mail beforehand will help us make sure that we’ve got enough food and drinks.

We’re looking forward to getting FLIB started, and we’re sure that at least six more fringers are coming! Don’t miss it!

A Taste of Haskell

I finally found time to watch Simon Peyton-Jones’ recent OSCON 2007 tutorial, A Taste of Haskell (split into Parts I and II, around 90 minutes each). From the many comments around and a quick look at the slides, i knew it was a basic intro for non-functional programmers and kept wondering whether it would be worth the time. From this you can easily infer that either i am a moron or i hadn’t seen one of Simon’s talks before. Not that it totally rules out the first option, but actually i hadn’t! As it turns out, Simon is a delight to hear and see in action. He’s full of energy and enthusiasm and knows how to transmit his passion for what he does. In this regard, this talk reminds me of Abelson and Sussman’s SICP videos. These guys are not your ivory-tower academic type. They have such a great time doing what they do that they cannot help showing it off. Perhaps it’s because they got their hands dirty implementing the systems they theorize about. At any rate, Simon’s tutorial is absolutely worth watching. If you’re new to Haskell and functional programming, you belong to the perfect audience for this crash course. It is also remarkable that the attendees were quite participative and asked lots of questions, which made the lecture quite lively. That, and Simon’s refreshing sense of humour, makes these videos great fun even for those of you who know everything about monads. Enjoy!

Quantum hype

One of the things i would really, really like to see some day is a working quantum computer. Quantum mechanics is deep magic that nobody really understands, but we have learnt a lot about how to use it during the last century, including its application to some kinds of computation. As you surely know, the most outstanding quantum algorithm is Shor’s prime factorization, which allows factoring a number N with a time complexity of O((log N)^3). That means that we go from exponential to polynomial time when going from classical to quantum computing for this particular problem (and related ones: the Wikipedia article on QC gives a pretty good survey; see also David Deutsch’s introductory lectures). I’m stressing the last point because there’s a widespread misconception that quantum computers will be able to solve NP-complete problems in polynomial time. Not so. On the contrary, experts are almost sure by now that this won’t be the case (note, by the way, that factoring is not known to be NP-complete).

The most recent examples of such bogus claims are the reports on D-Wave’s demos of their ‘quantum computer’, which are surrounded by piles of hype. So please, before taking them at face value, see Scott Aaronson’s The Orion Quantum Computer Anti-Hype FAQ (with follow-ups here, here and here). Scott Aaronson is an expert in the field and the author of a PhD thesis under the title Limits on Efficient Computation in the Physical World (for a less technical introduction to quantum computing, see his nice Quantum Computing Since Democritus lectures). For an executive summary, here’s the first entry in the FAQ:

  • Q: Thanks to D-Wave Systems — a startup company that’s been in the news lately for its soon-to-be-unveiled “Orion” quantum computer — is humanity now on the verge of being able to solve NP-complete problems in polynomial time?
  • A: No. We’re also not on the verge of being able to build perpetual-motion machines or travel faster than light.

The old rule applies: no silver bullet. But, of course, their limitations notwithstanding, quantum computers would (will?) be an interesting challenge for us programmers, and we do not have to wait for the hardware to play with them: see this Brief survey of quantum programming languages, or a more in-depth description of what an imperative quantum programming language looks like, although, if you ask me, functional quantum languages like QML are nicer. Simon Gay has also put together a comprehensive Bibliography of Quantum Programming Languages.

Finally, if you’d rather write some code, there’s André van Tonder’s Scheme simulator (which will work with any R5RS scheme), and a QML simulator written in Haskell. Haskellers will also enjoy Jerzy Karczmarczuk’s Structure and Interpretation of Quantum Mechanics: a functional framework.
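In case you’re wondering what the core of such a simulator looks like, here’s a minimal single-qubit sketch of my own (unrelated to the simulators above): a qubit is a pair of complex amplitudes, and a gate is a linear map on them:

import Data.Complex

-- a qubit as its amplitudes for the basis states |0> and |1>
type Qubit = (Complex Double, Complex Double)

-- the Hadamard gate: H|0> = (|0> + |1>)/sqrt 2, H|1> = (|0> - |1>)/sqrt 2
hadamard :: Qubit -> Qubit
hadamard (a, b) = ((a + b) / s, (a - b) / s)
  where s = sqrt 2 :+ 0

-- probabilities of measuring |0> and |1>
probabilities :: Qubit -> (Double, Double)
probabilities (a, b) = (magnitude a ^ 2, magnitude b ^ 2)

Evaluating probabilities (hadamard (1 :+ 0, 0 :+ 0)) yields roughly (0.5, 0.5): the equal superposition you’d expect. Real simulators, of course, deal with multi-qubit states and entanglement, which is where the fun (and the exponential blow-up) begins.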

Happy quantum hacking!

Writing A Lisp Interpreter In Haskell

Chances are you’ve already seen mentioned somewhere defmacro’s Writing A Lisp Interpreter In Haskell, but just in case:

A while ago, after what now seems like an eternity of flirting with Haskell articles and papers, I finally crossed the boundary between theory and practice and downloaded a Haskell compiler. I decided to do a field evaluation of the language by two means. I was going to solve a problem in a domain that Haskell is known to excel at, followed by a real world problem that hasn’t had much exploration in Haskell. Picking the problems was easy. There’s a lot of folklore that suggests Haskell is great for building compilers and interpreters so I didn’t have to think long to pick a problem that would be self contained, reasonably short, and fun – writing an interpreter of a Lisp dialect.

Also interesting at defmacro.org: these ramblings on The Nature of Lisp or on Functional Programming for the rest of us.

I’ve not read these articles in full, but a first skim through them left a pretty good impression.

Becoming a Haskell developer

If you like programming languages and compilers and have some free time on your hands, please take a look at this roadmap to becoming a YHC developer. The York Haskell Compiler is a relatively new effort to write a, well, Haskell compiler. Its development team looks decidedly newbie-friendly, and the job is one of the most interesting (if a bit underpaid) i’ve found lately. Right now, YHC seems to be under (heavy, one would say) development. It’s based on nhc98, but already outperforms it (as well as the Hugs interpreter). Another interesting trait: it compiles to cross-platform bytecode, runnable from a small runtime system (and there’s even a little surprise for all you pythonistas). This nice presentation (PDF) gives some further details on the system’s innards.

Definitely, worth a closer look.

Haskell goes mainstream

Well, if this LtU news item is to be believed, as of a couple of weeks ago Linspire is using Haskell as its main OS development language:

The OS team at Linspire, Inc. would like to announce that we are standardizing on Haskell as our preferred language for core OS development.

We are redoing a bunch of our infrastructure using Haskell as our common standard language. Our first task is redoing our Debian package builder (aka autobuilder) in Haskell. Other tools such as ISO builders, package dependency checkers are in progress. The goal is to make a really tight simple set of tools that will let developers contribute to Freespire, based on Debian tools whenever possible. Our hardware detector, currently in OCaml, is on the block to be rewritten as well.

This surprising announcement was made on the Debian Haskell mailing list, since Linspire (and its free variant, Freespire) is based on Debian. This is what these brave developers have to say about the usual worries raised against such a move:

I mention Freespire because some of our colleagues were concerned that using Haskell would isolate us from the larger community of developers and make it hard to find new employees skilled in Haskell, should we need to. From our perspective, functional programming makes us more effective and we think that getting even a few people who know Haskell hacking with us is a better combination than lots of Perl and bash.

If it is a late April Fools’ joke, please someone tell me!

A Haskell bookshelf

My favourite scheme implementation is scheme48, which takes its name from its being initially implemented in 48 hours by Richard Kelsey and Jonathan Rees in August 1986. They used Common Lisp on a Symbolics 3600 and Maclisp on a PDP-10.

Now, thanks to Jonathan Tang’s tutorial, you can write yourself a Scheme in 48 hours, but using Haskell instead of Lisp. The tutorial is intended as an introduction to Haskell for Schemers (and (brave) programmers in general) wanting to get started in the purest member of the functional family. And it is extremely fun: instead of following the boring language-features-overview route, it plunges right into interesting, hands-on stuff like parser combinators (one of the most beautiful applications of Haskell, if you ask me) or monads. The lessons include exercises, so you really have no excuse for not learning Haskell once and for all. Believe me, it’s good for your (mental) health.
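Just to whet your appetite, here’s a tiny parser in the spirit of the tutorial’s opening lessons; my own sketch, using the Parsec library the tutorial builds upon:

import Text.ParserCombinators.Parsec

-- characters allowed in Scheme-ish symbols, besides letters and digits
symbolChar :: Parser Char
symbolChar = oneOf "!#$%&|*+-/:<=>?@^_~"

-- an atom: a letter or symbol character, followed by any mix of them
atom :: Parser String
atom = do first <- letter <|> symbolChar
          rest <- many (letter <|> digit <|> symbolChar)
          return (first : rest)

so that parse atom "test" "hello-world" evaluates to Right "hello-world". Small combinators composing into bigger ones: that’s the whole game.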

If you’re totally new to Haskell and find the pace of Jonathan’s tutorial a bit hard to follow, you may start with a tour of the Haskell syntax and the Hitchhikers’ guide to Haskell, after making sure, of course, that you have your towel at hand. As for me, well, i’ve been planning to learn Haskell for some years now and, since i happen to love books, i have accumulated a little (but select) Haskell library, together with a sort of learning roadmap that, unfortunately, i have yet to complete. Just in case the above links whetted your appetite and you happen to have more time than i do, here you have my bookworm guide to Haskell enlightenment.

The little Haskeller

There are many introductory Haskell books, but i was (at the time i started learning) keen on those also providing a good basis in functional programming (which was a totally new world for me back then). Therefore, my first Haskell book, and still a wholehearted recommendation, was Richard Bird and Philip Wadler‘s Introduction to Functional Programming Using Haskell, one of the most elegant books on programming i’ve ever read.

One of the nicest things about writing Haskell code is that it’s the closest one can probably get to writing pure maths while programming. Thus, learning Haskell is an excellent way to learn more about maths and logic. If that sounds to you like doubling the fun (and you’re not yet a maths wizard), The Haskell Road to Logic, Maths and Programming (see also this extensive review (PDF)) by Jan van Eijck and Kees Doets is definitely the book for you. Its purpose is to teach logic and mathematical reasoning in practice, and to connect logical reasoning with Haskell programming. And, in my opinion, it does a pretty good job at that. Although it begins with the very basics, it includes chapters on far from trivial (i.e., fun!) stuff like, for instance, corecursion or Cantor sets. And i found it amazing how natural it is to express logical and mathematical ideas in Haskell.
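To give you a taste of the corecursion chapter’s subject matter (my example, not the book’s): thanks to lazy evaluation, Haskell lets you define infinite structures in terms of themselves and consume just the prefix you need:

-- infinite, self-referential lists, usable thanks to laziness
ones :: [Integer]
ones = 1 : ones

nats :: [Integer]
nats = 0 : map (+ 1) nats

-- take 5 nats evaluates to [0,1,2,3,4]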

But maybe you prefer to learn a bit of applied computer science, instead of abstract maths, with Haskell. No problem: just grab a copy of Fethi Rabhi and Guy Lapalme‘s Algorithms: a functional programming approach, which challenges the academic establishment by teaching algorithms using Haskell, that is, in a purely functional context. Revisiting classical (and apparently imperative) algorithms like sorting, tree and graph traversal or dynamic programming methods under a functional light is a refreshing experience, and an eye-opener: the fact that all the typical algorithms are covered in just 256 pages is a testament to the authors’ claim that functional programming leads to smaller, clearer and more elegant programs. Besides, as you probably know, Haskell features lazy evaluation, which poses entirely new (and pretty instructive) challenges when it comes to evaluating the efficiency of an algorithm.

The seasoned haskeller

Once you’re comfortable with the language and have learnt all about monads, the best thing to do is to study non-trivial Haskell applications. To that end, you can hardly find a better book than The Fun of Programming, a festschrift celebrating Richard Bird’s sixtieth birthday. With thirteen chapters by luminaries in the field, this book describes fun applications in areas such as musical composition and graphical design, as well as covering advanced programming techniques such as data structure design, interpreters or optimization. Amazing.

I also own a copy of Paul Hudak‘s The Haskell School of Expression, which is often recommended as a Haskell primer. In my opinion, it goes too quickly into the nitty-gritty details of (music and multimedia) implementations to be a good first book, but it is probably a good second book on Haskell programming. But that’s just me: as you can read in the review above, some people have diverging opinions.

The fun never ends…

After all these readings, and a bit more about monads, you’ll begin to feel like an expert, and may be interested in more advanced stuff. I’ve got a couple of books reserved for that moment. The first one is The Algebra of Programming, by Richard Bird and Oege de Moor, whose purpose, simply stated, is to show how to calculate programs algebraically using an old friend of ours: category theory, including the bananas that will be the main theme of my second post on these matters. There you’ll find an in-depth, abstract treatment of algorithmic strategies with an eye on correctness proofs. Although heavy on theory, there are interesting applications of this algebra of programming, like the conversion between binary and decimal numbers in TeX, the best way of drawing a cylinder in LaTeX, or good data compression algorithms. A slight nuisance is that the book uses an old Haskell dialect (a precursor, actually) called Gofer, but the conversion to modern notation is pretty straightforward.

Finally, anyone serious about algorithms and data structures should read Chris Okasaki’s Purely Functional Data Structures, an amazing tour of advanced algorithmics using functional programming languages, with source code in SML and Haskell. The exposition is extremely elegant and concise, and i’ve spent many, many hours immersed in its 220 pages (i can’t think of any other programming book with a better signal-to-noise ratio, really).

That’s it. I just hope 48-hour days are invented some time soon. Happy reading!

Programmers go bananas

Introduction: lists galore

I learned programming backwards, plunging right on into C and, shortly after, C++ and Java from the very beginning. I was knee deep in complex data structures, pointers and abstruse template syntax in no time. And the more complex it all felt, the more i thought i was learning. Of course, i was clueless.

Reading SICP and learning about functional programming changed it all. There were many occasions for revelation and awe, but one of my most vivid recollections of that time is my gradual discovery of the power of simplicity. About halfway into SICP, i realised in wonder that that beautiful and powerful world was apparently being constructed out of extremely simple pieces. Basically, everything was a list. Of course there were other important ingredients, like procedures as first-class objects, but the humble list was about the only data structure to be seen. After mulling it over for a little while, i saw where lists draw their power from: recursion. As you know, lists are recursively defined data types: a list is either the empty list or an element (its head) followed by another list (its tail):

list = []
list = a : list

where i’m borrowing Haskell’s notation for the empty list ([]) and the list constructor (:), known to lispers as () and cons. So that was the trick, i thought: lists have recursion built in, so to speak, and once you’ve read a little bit about functional programming you don’t need to be sold on the power and beauty of recursive programs.
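A recursive function on lists mirrors the two cases of the type’s definition, one equation per constructor; the canonical example, in Haskell again:

-- length by structural recursion: one equation for [], one for (:)
len :: [a] -> Integer
len [] = 0
len (_ : xs) = 1 + len xs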

It is often the case that powerful and beautiful yet simple constructs have a solid mathematical foundation, and only when you grasp it do you really realize how powerful, beautiful and amazingly simple that innocent-looking construct is. Lists, and recursive operations on them, are an excellent case in point. But the path connecting them to their mathematical underpinnings is a long and winding one, which lies in the realm of Category Theory.

I first became acquainted with the relationship between categories and recursive programming by reading Functional Programming with Bananas, Lenses, Envelopes and Barbed Wire, by Erik Meijer, Maarten Fokkinga and Ross Paterson. Albeit very enjoyable, this paper presupposes a high degree of mathematical sophistication on the reader’s side. I will try in this article to give you a simplified overview of the concepts involved, including Category Theory, its application to programming languages and what funny names like catamorphism, anamorphism or lambda-lifting have to do with your everyday list manipulations. Of course, i’ll be only scratching the surface: interspersed links and the Further reading section provide pointers to more in-depth explorations of this wonderland.

Categorical interlude

Categories are (relatively) simple constructs. A category consists of a set O of objects, and a set A of arrows between elements of O. Arrows are composable: if there’s an arrow from a to b, and another one from b to c, there must be an arrow from a to c (where a, b and c are elements of O). Besides, composition is associative: if you have arrows from a to b, b to c, and c to d, you can go from a to d via two different paths (composing a to c first and then following the arrow to d, or following the arrow to b first and then composing b to d), and both paths must yield the same arrow. Finally, for each element a in O there’s an identity arrow which goes from a to itself, such that following this arrow changes nothing. These properties are better visualized with a diagram (or a bit of mathematical notation), as shown in the image on the right.

A category captures a mathematical world of objects and their relationships. The canonical example of a category is Set, which contains, as objects, (finite) sets and, as arrows, (total) functions between them. But categories go far beyond modeling sets. For instance, one can define a category whose objects are natural numbers, and whose ‘arrows’ are provided by the relation “less or equal” (that is, we say that there is an arrow joining two numbers a and b if a is less or equal than b). What we are trying to do with such a definition is to somehow capture the essence of ordered sets: not only are integers ordered, but so are dates, lemmings in a row, a rock’s trajectory or the types of the Smalltalk class hierarchy. In order to abstract what all those categories have in common we need a way to go from one category to another preserving the shared structure in the process. We need what the mathematicians call an isomorphism, which is the technically precise manner of stating that two systems are, in a deep sense, analogous; this searching for commonality amounts to looking for concepts or abstractions, which is what mathematics and (good) programming is all about (and, arguably, intelligence itself, if you are to believe, for instance, Douglas Hofstadter‘s ideas).

To boot, our definition of a category already contains the concept of isomorphic objects. Think of an arrow from a to b as an operation that transforms a into b. An arrow from b to a will make the inverse transformation. If composing both transformations gives you the identity, you are back to the very same object a, and we say that a and b are isomorphic: you can transform one into the other and back at will. In a deep sense, this concept captures a generic way of expressing equality that pervades all maths: if you’re not afraid of a little bit of maths, Barry Mazur‘s essay When is a thing equal to some other thing? is an excellent introduction to Category Theory with an emphasis on the concept of equality. Among many other things, you will learn how the familiar natural numbers can be understood as a category, or how an object is completely defined by the set of its transformations (and, therefore, how to actually get rid of objects and talk only of transformations; i know this is stretching and mixing metaphors (if not plain silly), but this stress on arrows, as opposed to objects, reminded me of Alan Kay’s insistence on focusing on messages rather than objects). Another introductory article with an emphasis on categories as a means to capture sameness is R. Brown and T. Porter’s Category Theory: an abstract setting for analogy and comparison.

Not only can objects inside a category be transformed into one another. We can also reveal the common structure of two disjoint categories by means of a functor mapping one category into another. A functor consists of two functions: one that maps each object of the first category to an object of the second, and another one putting the arrows of one category in correspondence with the arrows of the second. Besides, these functions must preserve arrow composition. Let me spell this out mathematically. Consider two categories, C and C’ with object sets O and O’ and arrow sets A and A’. A functor F mapping C to C’ will then consist of two functions (Fo, Fa); the first one taking elements of O to elements of O’:

Fo: O -> O’

Fo(a) in O’ for every a in O

and the second one taking arrows from A to arrows in A’:

Fa: A -> A’

Fa(f) in A’ for every f in A

and such that, if f is an arrow from a to b in C, Fa(f) is an arrow from Fo(a) to Fo(b) in C’. Moreover, we want following arrows in C to be ‘analogous’ to following them in C’, i.e., we demand that

Fa(fg) = Fa(f)Fa(g)

On the left-hand side above, we are composing two arrows in C and then going to C’, while on the right-hand side we first take each arrow to C’ and, afterwards, compose them there. If C and C’ have the same structure, these two operations must be equivalent. Finally, F must preserve identities: if i is the identity arrow for an element a in O, Fa(i) must be the identity arrow for Fo(a) in O’. The diagram on the left shows a partial graph (i’m not drawing the identity arrows and their mappings) of a simple functor between two categories, and ways of going from an object a in the first category to an object x in the second one which are equivalent thanks to the functor’s properties.

As you can see in this simple example, the functor gives us the ability to see the first category as a part of the second one. You get a category isomorphism in the same way as with objects, i.e., by demanding the existence of a second functor from C’ to C (you can convince yourself that such a functor does not exist in our example, and, therefore, that the two categories in the diagram are not isomorphic).

You have probably guessed by now one nifty property of functors: they let us go meta and define a category whose objects are categories and whose arrows are functors. Actually, Eilenberg and MacLane‘s seminal paper General theory of natural equivalences used functors and categories of categories to introduce categories themselves for the first time (natural transformations are structure-preserving maps between functors: this Wikipedia article gives an excellent overview of them).

But enough maths for now: it is high time to show you how these rather abstract concepts find their place in our main interest, programming languages.

Categories and programming languages

About the only similarity between C and Haskell programming is that one spends a lot of time typing ASCII arrows. But of course, Haskell’s are much more interesting: you use them to declare the type of a function, as in

floor:: Real -> Int

The above stanza declares a function that takes an argument of type Real and returns an Int. In general, a function taking a single argument is declared in Haskell following the pattern

fun:: a -> b

where a and b are types. Does this ring a bell? Sure it does: if we identify Haskell’s arrows with categorical ones, the language types could be the objects of a category. As we have seen, we need identities

id:: a -> a
id x = x

and arrow composition, which in Haskell is denoted by a dot

f:: b -> c
g:: a -> b
fg:: a -> c
fg = f . g

Besides, associativity of arrow composition is ensured by Haskell’s referential transparency (no side effects: if you preserve referential transparency by writing side-effect-free functions, the grouping of your compositions won’t matter): we’ve got our category. Of course, you don’t need Haskell, or a statically typed language for that matter: any strongly typed programming language can be modelled as a category, using its types as objects and its functions of arity one as arrows. It just happens that Haskell’s syntax is particularly convenient, but one can define function composition easily in any decent language; for instance, in Scheme one would have

(define (compose f g) (lambda (x) (f (g x))))

Functions with more than one argument can be brought into the picture by means of currying: instead of writing a function of, say, 2 arguments:

(define (add x y) (+ x y))
(add 3 4)

you define a function which takes one argument (x) and returns a function which, in turn, takes one argument (y) and returns the final result:

(define (add x) (lambda (y) (+ x y)))
((add 3) 4)

Again, Haskell offers a pretty convenient syntax. In Haskell, you can define add as:

add x y = x + y

which gets assigned, when applied to integers, the following type:

add:: Int -> (Int -> Int)

that is, add is not a function from pairs of integers to integers, but a function that takes an integer and returns a function of type Int -> Int. Finally, we can also deal with functions taking no arguments and with constant values by introducing a special type, called unit or 1 (or void in C-ish), which has a unique value (spelled () in Haskell). Constants of our language (as, e.g., True or 43.23) are then represented by arrows from 1 to the constant’s type; for instance, True is a 1 -> Boolean arrow. The unit type is an example of what in category theory is known as a terminal object.
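If you want to see this view of constants spelled out in Haskell syntax (an illustrative sketch, mind you, not how Haskell actually defines its literals):

-- constants as arrows from the terminal object 1, i.e., Haskell's ()
true :: () -> Bool
true () = True

fortyThree :: () -> Double
fortyThree () = 43.23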

Now that we have successfully modelled our (functional) programming language as a category (call it C), we can use the tools of the theory to explore and reason about the language constructs and properties. For instance, functors will let me recover the original motivation of this post and explore lists and functions on them from the point of view of category theory. If our language provides the ability to create lists, its category will contain objects (types) of the ‘list of’ kind; e.g. [Int] for lists of integers, [Boolean] for lists of Booleans and so on. In fact, we can construct a new sub-category CL by considering list types as its objects and functions taking and returning lists as its arrows. For each type a we have a way of constructing a type, [a] in the sub-category, i.e., we have a map from objects in C to objects in CL. That’s already half a functor: to complete it we need a map from functions in C to functions in CL. In other words, we need a way to transform a function acting on values of a given type to a function acting on lists of values of the same type. Using the notation of the previous section:

Fo(a) = [a]
Fa(f: a -> b) = f': [a] -> [b]

Fa is better known as map in most programming languages. We call the process of going from f to f' lifting (not to be confused with a related, but not identical, process known as lambda lifting), and it’s usually pretty easy to write an operator that lifts a function to a new one in CL; for instance, in Scheme we would write:

(define (lift f) (lambda (lst) (map f lst)))

and for lift to truly define a functor, it must behave well with respect to function composition:

(lift (compose f g)) = (compose (lift f) (lift g))

We can convince ourselves that this property actually holds by means of a simple example. Consider the function next, which takes an integer to its successor; its lifting (lift next) will map a list of integers to a list of their successors. We can also define prev and (lift prev), mapping (lists of) integers to (lists of) their predecessors. (compose next prev) is just the identity and, therefore, (lift (compose next prev)) is the identity too (with lifted signature). But we obtain the same function if we compose (lift next) and (lift prev) in CL, right? As before, there’s nothing specific to Scheme in this discussion. Haskell even has a Functor type class capturing these ideas. The class defines a generic lift operation, called fmap, that actually generalizes our list lifting to arbitrary type constructors:

fmap :: (a -> b) -> (f a -> f b)

where f a is the new type constructed from a. In our previous discussion, f a = [a], but if your language gives you a way of constructing, say, tuples, you can lift functions on given types to functions on tuples of those types, and repeat the process with any other type constructor at your disposal. The only condition for calling it a functor is that identities are mapped to identities and composition is preserved:

fmap id = id
fmap (p . q) = (fmap p) . (fmap q)
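For instance, here’s a sketch of how one would make a homemade binary tree an instance of Functor, lifting functions on a to functions on Tree a:

-- a binary tree with values at the leaves
data Tree a = Leaf a | Node (Tree a) (Tree a)

-- fmap preserves the tree's shape and applies f at the leaves
instance Functor Tree where
  fmap f (Leaf x) = Leaf (f x)
  fmap f (Node l r) = Node (fmap f l) (fmap f r)

You can convince yourself that both laws hold by structural induction on the tree.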

I won’t cover the usage of type constructors (and their associated functors) other than lists, but let me just mention a couple of examples: monads, another paradigmatic case, beautifully (and categorically) discussed by Stefan Klinger in his Programmer’s Guide to the IO Monad – Don’t Panic (also discussed at LtU), and the creation of a dance and music library, for those of you looking for practical applications.

To be continued…

Returning to lists, what the lifting and categorical description above buy us is a way to formalize our intuitions about list operations, and to transfer procedures and patterns on simple types to lists. In SICP’s section on Sequences as conventional interfaces, you will find a hands-on, non-mathematical dissection of the basic building blocks into which any list operation can be decomposed: enumerations, accumulators, filters and maps. Our next step, following the bananas article i mentioned at the beginning, will be to use the language of category theory to provide a similar decomposition, but this time we will talk about catamorphisms, anamorphisms and similarly funny-named mathematical beasts. What the new approach will buy us is the ability to generalize our findings beyond the list domain to the so-called algebraic types. But that will be the theme of a forthcoming post. Stay tuned.
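As an appetizer, here’s SICP’s flagship example of that decomposition transcribed to Haskell (my sketch, not SICP’s code): the sum of the squares of the odd numbers in an interval, assembled from an enumeration, a filter, a map and an accumulator:

-- enumeration: [a .. b]; filter: odd; map: (^ 2); accumulator: foldr (+) 0
sumOddSquares :: Integer -> Integer -> Integer
sumOddSquares a b = foldr (+) 0 (map (^ 2) (filter odd [a .. b]))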

Further reading

The best introductory text on Category Theory i’ve read is Conceptual Mathematics: A First Introduction to Categories by F. William Lawvere (one of the fathers of Category Theory) and Stephen Hoel Schanuel. It assumes no sophisticated mathematical background, yet it covers lots of ground. If you feel at home with maths, the best option is to learn from the horse’s mouth and get a copy of Categories for the Working Mathematician, by Mac Lane.

The books above do not deal with applications to Computer Science, though. For that, the canonical reference is Benjamin Pierce’s Basic Category Theory for Computer Scientists, but i find it too short and boring: a far better choice is, in my opinion, Barr and Wells’ Category Theory and Computer Science. A reduced version of Barr and Wells’ book is available online on the site for their course Introduction to Category Theory. They are also the authors of the freely available Toposes, Triples and Theories, which will teach you everything about monads, and then some. Maarten Fokkinga is the author of this 80-page Gentle Introduction to Category Theory, with a stress on the calculational and algorithmical aspects of the theory. Unless you have a good mathematical background, you should probably take gentle with a grain of salt.

Let me close by mentioning a couple of fun applications of Category Theory, for those of you that know a bit about it. Haskell programmers will like this implementation of fold and unfold (as a literate program) using F-(co)algebras and applied to automata creation, while those of you with a soft spot for Physics may be interested in John Baez’s musings on Quantum Mechanics and Categories.
