Enjoying Haskell

I’ve been reading about Haskell quite a bit during the last few months, writing some actual code, and liking the language more and more. After many years favouring dynamically typed languages, i’m beginning to really appreciate Haskell’s type system and the benefits it brings to the table.

A common argument from the static typing camp is that the compiler catches a whole class of bugs for you, to which the dynamic typing camp answers that a good test suite (which you need anyway for any serious development) will catch those relatively trivial bugs for you. I tend to agree with the dynamic faction on this issue, but then i think that the strength of static typing (coupled with good type inference) is not at all about the compiler catching typing bugs but, rather, about enforcing useful constraints. When you write Haskell, you have to think hard about your data types and the functions using them; the compiler will keep complaining and, most importantly, the code will feel awkward and somehow ad hoc until you find a good set of types to solve your problem.

The limits to your freedom imposed by the type system entail, in my experience, a boost in the amount of thought and imagination that i put into my design and implementation, in much the same way as the constraints imposed by metre and rhythm on poetic writing boost creativity and actually help produce a beautiful text. Or, in fact, in the same way as any kind of constraint in any creative endeavour helps (paradoxically, at first sight) in attaining beauty, or, at least, in having fun during the process.

In my experience, the process of writing a program or library in any language is always a struggle to find the right way of expressing the solution to a problem, as the culmination of a series of approximations. The code feels better, more expressive and easy-flowing with each rewrite, until something just clicks and i get this feeling that i’ve finally captured the essence of the problem (a litmus test being that it then becomes natural to extend the solution to cases i hadn’t thought of when writing it, as if, somehow, the new solutions were already there, waiting for you to discover them). And i’m finding that the powerful type system offered by Haskell (think not only of vanilla Hindley-Milner, but also of extensions such as GADTs or type families) is helping me reach the (local) optimum quicker, and that satisfying its constraints means i’m closer to the final solution when my code compiles for the first time. You often hear Haskell programmers saying something similar (“once my code compiles, it works”), and i think it’s mostly true, except that the real reason is not that the compiler is catching trivial typing bugs, but, rather, that the constraints imposed by the type system are making you think harder and find the right solution. Same thing with monads, and the clean separation they provide for stateful computations: again, you must think carefully about the right combination of monads and pure code to solve the problem, and most of the time your code will simply not type check if you don’t get the solution right.

There are two more ways in which Haskell’s type system is helping me write better programs, two ways that become especially apparent when the code grows sizeable enough. The first one is self-documentation: seeing the type of my functions (or asking the interpreter for them) instantly informs me of almost everything i need to know to use them; in fact, when writing in dynamic languages i keep annotating function signatures with this same information, only that there i’m all by myself to ensure that this information stays right. PLT’s contract system is but a recognition of the usefulness of typing in this regard, although i much prefer the terseness and notational elegance of Haskell’s type signatures over the much more verbose and, to my eyes, somewhat clunky notation used by PLT (which is not really PLT’s fault, being as it is a very schemish notation). Let me stress here that having a REPL such as ghci is a god-send (and, to me, a necessity for really enjoying the language): it will tell me the type of an expression in much the same way as decent Lisp or Scheme environments will report a function’s signature.

The second way Haskell lends a helping hand with non-trivial code bases is refactoring. As i mentioned above, i rewrite my programs several times as a rule, and rewrites almost always involve modifying data structures or adding new ones. As i grow older, i find it more and more difficult to keep in my head all the places and ways a given data structure is used in my programs, and with dynamic languages i often fall back to grepping the source code to find them. And again, their plasticity often works against me, in that they let me use those data structures in crooked ways, or forget to take into account new fields or constructors for a modified data type. Haskell’s compiler has proved an invaluable ally in my refactorings and, by comparison, modifying and maintaining my bigger dynamic programs is not as fun as it used to be.

As an aside, types are not the only thing i’m finding enjoyable about Haskell. Its astonishing capabilities to express very abstract problems with a remarkable economy of expression (due, in part, to its highly tuned syntax) are extremely useful. To my mind, they mimic the process by which in math we solve harder and harder problems by abstracting more and more, cramming together more relevant information into less space (some cognitive science writers will tell you that thought and even consciousness consist in our ability to compress information). That means that i can express my solutions by capturing them in very high-level descriptions: initially, that makes them harder to understand, but once i feel comfortable with the basic concepts and operations, they scale up much, much better than more verbose, less sophisticated ones. Using these new hard-earned concepts, i can solve much harder problems without adding to the complexity of the code in a significant way (one could say, using a loose analogy, that the size of the solution grows logarithmically with the complexity of the problem instead of polynomially or exponentially). A direct consequence of this expressiveness is that some well-written Haskell programs are, hands down, the most beautiful pieces of code i’ve ever seen (just pick a random post at, say, A Neighborhood of Infinity and you’ll see what i mean; or read Richard Bird’s Sudoku solver and compare his solution with one written in your favourite programming language).

Finally, let me say that i find programming in Haskell more difficult than programming in any other language i’ve used, with perhaps the only exception of Prolog. Sometimes, considerably so. And that’s perfectly fine with me. For one thing, it makes it more interesting and rewarding. In addition, i’m convinced that that’s the price to pay for being able to solve harder problems. I take issue with the frequent pleas to the effect that programming should be effortless or trivial: writing good programs is hard, and mastering the tools for doing it well takes, as with any other engineering or scientific discipline, hard work (why, i don’t hear anyone complaining that building bridges or computing the effects of gravitational lensing is too difficult). There’s no silver bullet.

All that said, please don’t read the above as an apostasy letter announcing the embrace of a new religion. There’s still much to be said in favour of dynamic languages, especially those in the Lisp family, whose malleability (fostered by their macro systems) is also a strength, in that they allow you to replicate some of the virtues i’ve been extolling in this post. Haskell lacks the power of homoiconicity, its template mechanisms feel clunky at best, and that’s a serious drawback in some contexts (i have yet to decide how serious, as i have yet to decide how much i’m missing in reflection capabilities). As always, it is a matter of trade-offs and, fortunately, nobody will charge you with high treason for using the language better fit to the problem at hand, or so i hope.

A Scheme bookshelf

It all began while i was using MDK‘s development as an excuse to learn everything i could about programming and programming tools: makefiles, autoconf, automake, localization, texinfo docs, lexers, parsers… and, eventually, extensibility. I kept hearing about an exotic thing by the name of GNU’s Ubiquitous Intelligent Language for Extensions, and, finally, rose to the bait. But my first attempts at writing Scheme felt awkward. Not because of the parens (i liked Scheme syntax from day zero), but basically because i was writing C code using Scheme. Clearly, i needed to read a bit about this new thing and, to that end, i got a copy of The Little Schemer… to no avail: i didn’t get what elephants, food and Socrates had to do with programming; i still felt awkward. (It was not the book’s fault, mind you, but mine. More about this in a bit.)

Then i found the book. Abelson and Sussman’s Structure and Interpretation of Computer Programs changed my life as a programmer. I’m sure you’ve read lots of praise for this book. Totally deserved! SICP is not really about Scheme, but about thinking about programming. Reading it will make you a better programmer in any language. It will expand your mind. It will cure your diseases. It’s really that good. Besides, SICP is freely available in HTML, PDF and texinfo formats, so you really have no excuse. For extra fun, get the accompanying video lectures, by Abelson and Sussman themselves. These lectures are charming: these guys have a passion for what they do, and they transmit it from the very beginning, when Abelson jokingly explains why computer science is neither a science nor about computers.

The most cited alternative to SICP is How to Design Programs by Felleisen, Findler, Flatt and Krishnamurthi. Its authors have even published a rationale, The Structure and Interpretation of the Computer Science Curriculum, on why they think SICP is not well suited to teaching programming and how their book tries to fix the problems they’ve observed. I won’t try to argue against such eminent schemers, but, frankly, my feeling is that HtDP is a far, far cry from SICP. HtDP almost made me yawn, and there’s no magic to be seen. If for whatever reason SICP is not your cup of tea, i would recommend Max Hailperin‘s Concrete Abstractions as a far better (as in more fun than HtDP) alternative.

With SICP under my belt, i was ready for The Little Schemer and The Seasoned Schemer (by Daniel P. Friedman and Matthias Felleisen), which i read in a row. You’d be hard pressed to find any programming book similar to the Schemers books. As in my case, the initial reaction may well be one of rejection: on a superficial reading they look too queer, even childish, and by no means the kind of book a serious programmer should read, except maybe for fun. But that’s precisely the point: you won’t learn unless it is for fun! If you enter the game proposed by Daniel and Matthias, accepting to participate in their Socratic scheme, you’ll learn a lot with their books. I found it easier to get in the right mood after SICP because it also is imbued with a kind of magic, and also because i was much better equipped to capture the subtleties lurking in every page of the Schemers books. Besides having a great time, one ends up with a sound grasp of higher order functions, recursion (including the Y combinator) and continuations. In fact, it was while reading The Seasoned Schemer that something clicked and i really understood continuations.

After all these readings i felt i was starting to have a good grasp of the conceptual underpinnings of Scheme and Lisp, and that it was due time to strengthen the technical details. Or in other words, i wanted to come to grips with the nitty gritty details of Scheme syntax and semantics, including macros (which i knew from some tutorials and readings on Common Lisp). To that end, Dybvig‘s The Scheme Programming Language was the perfect book: comprehensible, exacting, elegantly written and to the point. Despite its succinctness, Dybvig does an excellent job in conveying the details of the language and its usage. The book also includes extended usage examples for good measure. Thus, TSPL works nicely as both a tutorial and a handy reference.

If you already have a good basis in functional programming you can use TSPL as your first Scheme book and skip the previous ones (but you will be skipping lots of fun and, maybe, insight too). And if you happen to read French, an interesting alternative for a speedier learning path is Jacques Chazarain’s Programmer avec Scheme. In the first part of the book, Practical programming with Scheme, Scheme is thoroughly presented, using it in many examples taken from the standard data structures and algorithms curriculum. Taking this practical basis as a starting point, Chazarain proceeds to a comprehensive (almost 400 pages long) discussion of the formal basis of the theory of programming languages: automata, lexers and parsers, propositional calculus, rewriting systems, logic programming, lambda calculus and more. As you can see, the scope of this book is amazing, and it’s well worth learning a little French!

The nice thing about learning a programming language in such good company is that one learns much more than a single programming language. These books helped me obtain a conceptual basis for thinking about programming and programming languages in abstract terms, applicable to any programming problem. As Sussman colorfully demonstrates in one of the SICP lectures, one feels like beholding the heart of the spirit that lives in the machine. And that spirit, of course, is an interpreter. So i delved into the theory and practice of implementing interpreters guided by another classic: Essentials of Programming Languages by Daniel P. Friedman, Mitchell Wand, and Christopher T. Haynes. This marvelous book delves deep into the art of writing interpreters using Scheme as the implementation language. By way of running example, an interpreter for an ML-like language is developed throughout, showing the magic of, among other things, register machines and continuation-passing style. To get a flavor of what this book has to offer, i wholeheartedly recommend reading Friedman’s The Role of the Study of Programming Languages in the Education of a Programmer, an insightful talk on how a theoretical background is useful (and even needed) to any programmer in any language, which Friedman illustrates with practical cases taken from his book.

One of the rites of passage of any self-respecting schemer is writing a metacircular Scheme interpreter, that is, a Scheme implemented in itself. Having a look at the innards of your language implementation sheds a whole new light on your daily usage of the language. Sussman goes as far as wearing his magician’s hat when he explains the metacircular interpreter in the SICP lectures, and rightly so. But i’ve always felt a bit frustrated by the fact that the implementations one finds in books like SICP, EOPL or, to cite another classical example, PAIP are actually toy interpreters not covering all the details of real Schemes. And often they leave out precisely the most interesting bits! Fortunately, there’s a book that will tell you everything you ever wanted to know about implementing Lisp interpreters and compilers: Lisp in Small Pieces by Christian Queinnec. This is easily the best book about Lisp i have ever read. LiSP shows, without omitting any details, how the family of Lisp languages is implemented (in interpreted and compiled forms). You’ll learn from the inside the difference between Lisp-1 and Lisp-2, how dynamic and lexical scope work, how blocks and continuations are related, how on earth one implements… everything you ever wanted to know about Lisp implementations, and then more. On top of that, Queinnec’s writing style is engaging and the book is pervaded by a fine sense of humour that made me laugh out loud several times. If you’re serious about Lisp, you must read this book.

Let me close my list of recommendations with the last addition to my Scheme bookshelf, namely, the third schemer installment: The Reasoned Schemer (by Daniel P. Friedman, William E. Byrd and Oleg Kiselyov). Paraphrasing Alan Perlis, the key reason for learning new languages and paradigms is the fact that they, hopefully, change the way you think about programming. Functional languages do a pretty good job at that, but they’re not the only game in town. Declarative programming is definitely another mind-blowing experience, which you can enjoy reading The Reasoned Schemer. The book presents, in that peculiar style of its predecessors that i like so much, the basis of logic programming using a Scheme implementation that, in many ways, surpasses standard declarative languages like Prolog. The best of two awesome worlds, and an excellent way to ensure that you keep learning and expanding your programming horizons.

Happy reading!

Seven rants about successful programming, give or take two

A friend of mine sent me this list with the Seven Secrets of Successful Programmers and asked for comments. Halfway through my reply, it occurred to me that i might as well post them here and, hopefully, hear your opinions too.

First of all, i’m wary of magic recipes and fast-food solutions. There’s no such thing as a quick recipe to become a proficient programmer. It takes effort, a lot of hard work. No secret list will spare you that; as we all know, there’s no silver bullet. Not that we can’t use a little help from more experienced programmers and hear their advice, but short rules of thumb are often problematic. Software is, well, soft, and it’s very difficult to come up with eternal truths to be carved in stone. I prefer the whole story, as recounted for instance in books like The Practice of Programming, or extended guides like Norvig and Pitman’s classic tutorial on Lisp style.

Don’t get me wrong. I don’t think there’s anything evil in trying to convey hard-won lessons in little capsules. Otherwise, i wouldn’t be writing these musings in the first place. It’s just that i dislike oversimplifications; they go hand in hand with the notion that programming is just something to be done by underpaid coding monkeys, or that it should be as easy as clicking around with a mouse.

But let me turn back to the original purpose of this post, which was, oh yes, commenting on those seven recipes to success. Here’s my take on some of them.

  1. Code for human consumption. Dead on. I haven’t much more to add here. As almost a corollary, using as high-level a language as possible is a no-brainer. This secret is nicely expressed in SICP’s preface:
    Programs must be written for people to read, and only incidentally for machines to execute.

    But of course, really grasping what that means takes reading SICP, at least.

  2. Comment often and comment well. Sorry, but this one is completely misguided, at least in my book. I’m not referring to the comment well bit (which, as an aside, is as vacuous as you can get), but even here the secret’s author gives a bad example:
    Good comment: Disable button to prevent its activation
    Bad comment: Set cmd = False

    The bad comment is obviously bad, but the proposed solution is far from good. Code is to be read by programmers, not by nincompoops, and we all know that a disabled button cannot be activated. The problem in the above example is not a bad comment, but very bad variable naming: use isActive instead of cmd and comments are unnecessary. As they should be. Which brings me to the “comment often” part, which the author reinforces with this incredibly bad piece of advice: comment your code so that it can be understood just by reading the comments. Beg your pardon? What happened to our rule number one? (and no, comments are not code).

    In my experience, the need to explain what a snippet of code is doing with a comment is a sure indication of bad coding. Good code is self-explanatory and needs few, if any, comments. Otherwise, you did something wrong, be it bad naming or poor function breakdown. Not to mention the issue of keeping comments and code synchronized during rewrites. So here’s my secret number 2: comment seldom, write self-documenting code instead. This will make your programs not only easier to understand and maintain, but of better quality.

    Of course, this does not imply, by any means, that documentation is not needed. Proper documentation of your public interfaces and modules, of how they work together and of their dependencies is a must (this being, by the way, one of the things i dislike about extreme programming and other agile methodologies). But it does not belong in your implementation’s comments.

    There’s a lot to say about how to properly document your code, and often the language you’re using influences the documenting style (for instance, via its abstraction constructs: modules, interfaces, protocols, categories, whatever). Although i won’t discuss this issue further, i’d recommend reading this very interesting discussion over at comp.lang.smalltalk, which incidentally demonstrates how difficult it is to come up with cure-all rules of thumb.

  3. Layout code to increase legibility. But of course. I am however quite surprised that anyone can list such an absolutely basic thing as a secret to success. If your programmers are not able to write properly laid out code… well, you’d better forget about success in the first place. An important consideration in this regard, though, is that your tools should make this task trivial.

    As in many other things, Emacs excels in this regard, making indentation and code layout a breeze. I won’t try to convince you to use Emacs, but if your only quibbles about it come from using and learning it for less than a month, please consider giving it a serious try. These comments may help you realize what’s in store for you; and, if you love programming, chances are you’ll appreciate the beauty and power of its design. The learning curve is steep, but again, you know what i think of those looking for easy solutions ;-)

    Whatever your final decision (there’s nothing wrong with using Vim if you like it better), please learn to use your editor and exploit all it has to offer. It’s your basic tool and it should be customizable and extensible enough to exactly match your needs and make your life easy. Don’t write Lisp using Notepad.

  4. Expect the unexpected and deal with it. Absolutely. I don’t have any quibble here. Error handling may well be the most difficult task we face during program construction. Elegant code for the nominal case is, in comparison, a piece of cake. When you introduce error handling code, things get messy with extraordinary ease. During all these years, i’ve had a very hard time writing error handling code that looked elegant, not to mention beautiful. Actually, i think i’ve yet to find the right way to tackle exceptional situations. Non-resumable exception systems a la Java/C++ have often caused me more problems than simpler mechanisms (like return values and design by contract with assertions), probably due to their non-local nature. Much more usable are Common Lisp’s condition system and Smalltalk’s exceptions. Taking a look at them, even if you’re not using those languages, is a healthy exercise that may change the way you think about error handling.

    I won’t venture any further advice (for now) on this thorny issue, except for one recommendation: plan your error handling strategy in advance, for it is not something that can be added easily to your program as an afterthought.

  5. Name your variables to aid readability. Sure, and we’ve already seen how important this is for self-documenting your code. But i thought it was clear by now that Hungarian notation is evil. If you want good type checking, use a decent statically typed language like Haskell; with dynamically typed languages, well, using prefixes to mark the type of an identifier is nonsense. Even in languages like C or C++ there’s much to say against this notation, if only for the fact that it defeats the much more important goal of readability. Josh Fryman summarized my feelings on Hungarian notation long ago in his nice article on coding standards, by stating his
    Theorem One:
    Hungarian Notation exists to make up for bad coding practices.

    At any rate, the secrets’ author is quite right to stress the importance of consistency and uniform conventions. In this regard, i’ve also found useful the practice of adding a bit of metadata to a variable’s name to indicate its scope (static vars in C/C++, members in C++, Java or Objective-C, special variables in Common Lisp…). In an ideal world, my programming environment would be aware of this kind of metadata and automatically provide some kind of visual clue, but we’re not there yet.

The last two secrets are an exhortation to use short functions (that do just one thing, and do it well) and lexical scope. Very good advice, without further qualification. I’ve found that functional languages, especially those in the Lisp family, and, more generally, having closures at your disposal, naturally reinforce these desirable qualities in the software i write.

Summing up, my feeling is that, important as the above issues are, they only scratch the surface. If i had to give you a single rule in your path to become a good programmer, it would be to never stop learning. Read a lot, and then more. Be proficient with at least 3 or 4 languages and 2 or 3 paradigms. Have a good grasp of how their compilers or interpreters are implemented. And find your own rules.

Happy hacking!


My name’s jao, and i’m… i’m a programmer

Now, please somebody tell me that this is a (bad) joke, or a parody, or something. This person, excuse me, this developer, cannot be serious, or should i say, cannot possibly be so short-sighted, haughty and wrong. Unless… well, unless he’s Dilbert’s pointy-haired boss. That would explain everything, including this other jewel in defense of MDD, which stands for, believe it or not, Meeting Driven Development. I promise! I’m not making this up! These pamphlets are awful in so many ways that i just can’t start criticizing them. I’ll just say that, Dilbertian parody or not, they’re a perfect illustration of what’s wrong with our profession, fellow programmers. And please, never ever keep a job that has stopped being fun. Users of your software (if not your managers) will be thankful.

As for the incredible MDD bit, fortunately, there are still intelligent people around. Pity they’re so hard to find.

Update: I’m told in the comments section that at least the MDD article is satire. As such, it’s excellent. I feel a bit silly, but relieved :-).


Getting older

Last week at work was a tough one. I basically looked like this:

[image: Think]

Except that i’m not a girl and my hair is longer, the picture is quite accurate: nothing seemed to work and, of course, we had a deadline. To give you a bit more background, this is an embedded application controlling data gathering from scientific instruments that will fly in a satellite called LISA Pathfinder (see this older post if you’re curious). Fortunately, my struggles seem to be over now, but i took some notes just after the fact, trying to capture the key bad practices that had led me to a near disaster. I thought it might be interesting to share some musings on my mistakes, if only to make sure i avoid them in the future. Here they are.

We have some special communications hardware in place, and i’ve been in charge of developing its drivers and protocol stack, implemented in the layered way we’ve been taught in our networking courses: hardware, data-link, transport and application layers.

Everything seemed to run smoothly with the first protocol version, but of course customers are never happy with first versions and we soon had a protocol redefinition, affecting the data-link and transport layers. And oh, the deadline: i just re-implemented the stack without writing any tests. Yeah, i’m blushing, but i had no choice. Or did i? Well, i was following orders, as they say, and my boss had no choice either. I could have refused the orders, and made everybody up the ladder reconsider their plans. Problem is, those plans spring from promises to customers, and your boss is not going to reconsider anything unless you stubbornly (and probably wisely) insist on not doing a bad job. In which case you may get fired. But, if you can afford that risk, i think that’s the most honest and best thing to do. I didn’t do that. I have friends in this project, and leaving would jeopardize its completion. No way. Second best is to say yes, sir and test your software anyway. You’ll miss the deadline, but then, you’ll miss it anyway if you don’t write tests (unless you’re younger and wiser than i am). I didn’t do that, either. Shame on me. What i did instead was jump into the impossible task, knowing all too well that it wouldn’t work (which makes me feel even more stupid). What i did get right, for a change, was gloomily forewarning everyone of the upcoming problems. If you want my advice, that’s the absolute minimum you should do in a situation like this one. When things go awry, it may save your, er, position.

So there i was, with my untested protocol stack exploding at the slightest contact with an externally sent message coming in through our communications card. Remember that this is embedded software with no operating system below, so i had no ‘Access violation’ dialogs ready to take me into a debugger. Fortunately, we’re using design by contract tactics: all preconditions and some consistency predicates are duly asserted, and, whenever they fail, a black-and-white equivalent of the BSoD springs out of the board’s serial port with a detailed report of the CPU’s state at the time of the catastrophe. That caught a few obvious mistakes (as it had done in previous versions) and deserved the first entry in my lessons learned notebook: design by contract works for us.

After some fixes, the program was able to enter smoothly into its idle event loop, which was actually bad news: it was running too smoothly, to the point of simply ignoring any incoming network message. No assertion failure, no nothing. There began the odyssey: the only solution was to read my code and, slowly but surely, write all those missing unit tests. Second lesson: writing unit tests is far easier while developing the software under test than after the fact. And, more importantly, unit tests are your safety net for refactoring. In my experience, refactoring is an integral part of software development. Although i think that some design at the architectural level is needed before jumping to Emacs, it’s only after i’ve written a couple of half-implementations of the desired functionality that the appropriate design comes to light. In my opinion, trying to write a detailed design out of the blue is a waste of time: bottom-up works nicely as a way to discover the design and fix the initial (and necessary) top-down architecture. I like to call this process bottom-top development.

So, i was soon re-implementing one of my modules, which, now that i looked at it a second time, was unduly complex. This is also old hat: only after a first implementation (or, sometimes, a few of them) do i understand the problem well enough to write a simple one. Not that i’m discovering anything new here:

A designer knows he has achieved perfection not when there is nothing left to add, but when there is nothing left to take away. – Antoine de Saint-Exupery.

and, of course,

Plan to throw one away. You will do that, anyway. Your only choice is whether to try to sell the throwaway to customers. – Frederick Brooks.

Only this time it was taking me more time than necessary: when you refactor, you want tests to tell you that every step in the process isn’t breaking anything. Obviously, i was breaking something, for my program wouldn’t work, refactoring or not. More tests, and more reading my code once and again. I knew i was overwriting memory somewhere in there (as it turned out, i was doing that in more than one place), and the only problem was to find where. In the process, i jotted down a couple more observations on the sources of my troubles.

Everybody knows that appropriate naming of the objects in a program is important. As i grow older, appropriate naming is growing more important to me. For instance, in this case i was debugging a piece of code dealing with (among other stuff) circular buffers. One of my buffer handling ADTs had a member recording the size of a buffer: i had called that variable size. That seems OK, except for the fact that we deal with two kinds of buffers, one containing bytes and one containing 16-bit words. I had a couple of nasty bugs that i was not detecting because, when reading the code, i was thinking of size as the number of bytes in the buffer. As it turned out, word_number would have been a far better name! When you’re writing the code, it’s easy to remember the exact meaning of every variable, and sloppy naming seems not to be an issue. But when you come back to debug your code, it’s easy to misremember the meaning of a sloppy name. Third lesson re-learnt: appropriate naming is harder than it seems, and more worthwhile.

Only by the end of my struggles did i use a debugger (in the development machine; we don’t have a way to connect a remote debugger to the deployment board). But it helped to nail down the final details. That made me think. When i started programming professionally, i was using Visual Studio 4 (or something) and did use the debugger a lot. As time passed and i learned and honed my skills, the debugger became less and less necessary. I found that understanding my programs and reasoning about their operation was more beneficial than relying on the easy way out of stepping through my code blindly. I remember how my co-workers used to make fun of me, saying that i was running the program in my head when i was trying to find why a program of mine was failing. As a result, i had an excellent understanding of the code i wrote, and at first abandoning debuggers made me a better programmer. So, i’m a firm believer in the theory that says that excessive use of debuggers spoils programmers. But that does not mean that debuggers are always harmful. My mistake had been to forget that they exist at all! On the contrary, they’re one of the tools of the trade, and may be quite useful at times. As in this case, where i was having out-of-bounds accesses to memory due to a misplaced offset. I knew that somewhere i was overwriting memory, but it took quite a bit to find the culprit. A watchpoint in the debugger would have caught it in a breeze. So one more note-to-self: stop being an i-don’t-need-no-stinkin’-debugger macho programmer, and use the right tools in each situation.

Happily, my implementation seems to be working again, and the world, unsurprisingly, didn’t stop spinning due to yet another unmet deadline. After all the hassle, i know for sure that i don’t want to spend my time dealing with low-level details that a higher-level language and runtime can handle better than me (i’ve been feeling quite a bit like this guy), even in an embedded setting: next time, i will evaluate more carefully alternatives like Hedgehog or Tinyscheme. A more depressing thought is how basic my mistakes were: every single observation in this post should be self-evident to any seasoned programmer, and after almost ten years in the trenches i should have known better. Either i’m particularly stupid or we should all be aware of how much easier it is to preach than to actually live up to our own advice. All in all, i feel i’m getting older, and i’m not sure about the wiser bit.

Update: Talking about debuggers, here’s an interesting article on how to get the most out of macros in gdb. The article is a follow-up to a previous one on how gdb and strace work together.


Programmers go bananas

Introduction: lists galore

I learned programming backwards, plunging right on into C and, shortly after, C++ and Java from the very beginning. I was knee deep in complex data structures, pointers and abstruse template syntax in no time. And the more complex it all felt, the more i thought i was learning. Of course, i was clueless.

Reading SICP and learning about functional programming changed it all. There were many occasions for revelation and awe, but one of my most vivid recollections of that time is my gradual discovery of the power of simplicity. About halfway into SICP i realised in wonder that that beautiful and powerful world was apparently being constructed out of extremely simple pieces. Basically, everything was a list. Of course there were other important ingredients, like procedures as first-class objects, but the humble list was about the only data structure to be seen. After mulling it over for a bit, i saw where lists draw their power from: recursion. As you know, lists are recursively defined data types: a list is either the empty list or an element (its head) followed by another list (its tail):

list = []
list = a : list

where i’m borrowing Haskell’s notation for the empty list ([]) and the list constructor (:), also known by lispers as () and cons. So that was the trick, i thought: lists have recursion built-in, so to speak, and once you’ve read a little bit about functional programming you don’t need to be sold on the power and beauty of recursive programs.
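To see the connection between the data definition and recursive procedures at work, here’s a tiny Scheme illustration (the procedure and its name are my own): a list-length function whose two branches mirror, one by one, the two cases of the definition above.

(define (len lst)
  (if (null? lst)
      0                        ; the [] case
      (+ 1 (len (cdr lst)))))  ; the (a : list) case, recursing on the tail

(len '(1 2 3)) ; => 3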

It is often the case that powerful and beautiful yet simple constructs have a solid mathematical foundation, and only when you grasp it do you really realize how powerful, beautiful and amazingly simple that innocent-looking construct is. Lists, and recursive operations on them, are an excellent case in point. But the path connecting them to their mathematical underpinnings is a long and winding one, which lies in the realm of Category Theory.

I first became acquainted with the relationship between categories and recursive programming reading Functional Programming with Bananas, Lenses, Envelopes and Barbed Wire, by Erik Meijer, Maarten Fokkinga and Ross Paterson. Albeit very enjoyable, this paper presupposes a high degree of mathematical sophistication on the reader’s side. I will try in this article to give you a simplified overview of the concepts involved, including Category Theory, its application to programming languages and what funny names like catamorphism, anamorphism or lambda-lifting have to do with your everyday list manipulations. Of course, i’ll be only scratching the surface: interspersed links and the Further reading section provide pointers to more in-depth explorations of this wonderland.

Categorical interlude

Categories are (relatively) simple constructs. A category consists of a set O of objects, and a set A of arrows between elements of O. Arrows are composable: if there’s an arrow from a to b, and another one from b to c, there must be an arrow from a to c (where a, b and c are elements of O). Besides, composition is associative: if you have arrows from a to b, b to c, and c to d, you can go from a to d via two different paths, namely, first from a to c and then from c to d, or first from a to b and then from b to d, and both paths must yield the same arrow. Finally, for each element a in O there’s an identity arrow which goes from a to itself, such that following this arrow changes nothing. These properties are better visualized with a diagram (or a bit of mathematical notation), as shown in the image on the right.

A category captures a mathematical world of objects and their relationships. The canonical example of a category is Set, which contains, as objects, (finite) sets and, as arrows, (total) functions between them. But categories go far beyond modeling sets. For instance, one can define a category whose objects are natural numbers, and whose ‘arrows’ are provided by the relation “less or equal” (that is, we say that there is an arrow joining two numbers a and b if a is less than or equal to b). What we are trying to do with such a definition is to somehow capture the essence of ordered sets: not only are integers ordered, but so are dates, lemmings in a row, a rock’s trajectory or the types of the Smalltalk class hierarchy. In order to abstract what all those categories have in common we need a way to go from one category to another preserving the shared structure in the process. We need what the mathematicians call an isomorphism, which is the technically precise manner of stating that two systems are, in a deep sense, analogous; this searching for commonality amounts to looking for concepts or abstractions, which is what mathematics and (good) programming is all about (and, arguably, intelligence itself, if you are to believe, for instance, Douglas Hofstadter‘s ideas).

To boot, our definition of a category already contains the concept of isomorphic objects. Think of an arrow from a to b as an operation that transforms a into b. An arrow from b to a will make the inverse transformation. If composing both transformations gives you the identity, you are back to the very same object a, and we say that a and b are isomorphic: you can transform one into the other and back at will. In a deep sense, this concept captures a generic way of expressing equality that pervades all maths: if you’re not afraid of a little bit of maths, Barry Mazur‘s essay When is a thing equal to some other thing? is an excellent introduction to Category Theory with an emphasis on the concept of equality. Among many other things, you will learn how the familiar natural numbers can be understood as a category, or how an object is completely defined by the set of its transformations (and, therefore, how to actually get rid of objects and talk only of transformations; i know this is stretching and mixing metaphors (if not plain silly), but this stress on arrows, as opposed to objects, reminded me of Alan Kay’s insistence on focusing on messages rather than objects). Another introductory article with emphasis on categories as a means to capture sameness is R. Brown and T. Porter’s Category Theory: an abstract setting for analogy and comparison.

Objects inside a category are not the only things that can be transformed into one another. We reveal the common structure of two different categories by means of a functor mapping one category into another. A functor consists of two functions: one that maps each object of the first category to an object in the second, and another one putting in correspondence arrows in one category with arrows in the second. Besides, these functions must preserve arrow composition. Let me spell this out mathematically. Consider two categories, C and C’ with object sets O and O’ and arrow sets A and A’. A functor F mapping C to C’ will consist then of two functions (Fo, Fa); the first one taking elements of O to elements of O’:

Fo: O -> O’

Fo(a) in O’ for every a in O

and the second one taking arrows from A to arrows in A’:

Fa: A -> A’

Fa(f) in A’ for every f in A

and such that, if f is an arrow from a to b in C, Fa(f) is an arrow from Fo(a) to Fo(b) in C’. Moreover, we want that following arrows in C is ‘analogous’ to following them in C’, i.e., we demand that

Fa(fg) = Fa(f)Fa(g)

In the left hand side above, we are composing two arrows in C and then going to C’, while in the right hand side we first take each arrow to C’ and, afterwards, compose them there. If C and C’ have the same structure, these two operations must be equivalent. Finally, F must preserve identities: if i is the identity arrow for an element a in O, Fa(i) must be the identity arrow for Fo(a) in O’. The diagram on the left shows a partial graph (i’m not drawing the identity arrows and their mappings) of a simple functor between two categories, and ways of going from an object a in the first category to an object x in the second one which are equivalent thanks to the functor’s properties.

As you can see in this simple example, the functor gives us the ability to see the first category as a part of the second one. You get a category isomorphism in the same way as between objects, i.e., by demanding the existence of a second functor from C’ to C (you can convince yourself that such a functor does not exist in our example, and, therefore, that the two categories in the diagram are not isomorphic).

You have probably guessed by now one nifty property of functors: they let us go meta and define a category whose objects are categories and whose arrows are functors. Actually, Eilenberg and MacLane‘s seminal paper General Theory of Natural Equivalences used functors and categories of categories to introduce categories for the first time (natural transformations are structure-preserving maps between functors: this Wikipedia article gives an excellent overview of them).

But enough maths for now: it is high time to show you how these rather abstract concepts find their place in our main interest, programming languages.

Categories and programming languages

About the only similarity between C and Haskell programming is that one spends a lot of time typing ASCII arrows. But of course, Haskell’s are much more interesting: you use them to declare the type of a function, as in

floor:: Real -> Int

The above stanza declares a function that takes an argument of type real and returns an integer. In general, a function taking a single argument is declared in Haskell following the pattern

fun:: a -> b

where a and b are types. Does this ring a bell? Sure it does: if we identify Haskell’s arrows with categorical ones, the language types could be the objects of a category. As we have seen, we need identities

id:: a -> a
id x = x

and arrow composition, which in Haskell is denoted by a dot

f:: b -> c
g:: a -> b
fg:: a -> c
fg = f . g

Besides, associativity of arrow composition is ensured by Haskell’s referential transparency (no side-effects: if you preserve referential transparency by writing side-effect free functions, the order in which you call them won’t matter): we’ve got our category. Of course, you don’t need Haskell, or a statically typed language for that matter: any strongly typed programming language can be modelled as a category, using as objects its types and as arrows its functions of arity one. It just happens that Haskell’s syntax is particularly convenient, but one can define function composition easily in any decent language; for instance in Scheme one would have

(define (compose f g) (lambda (x) (f (g x))))

Functions with more than one argument can be brought into the picture by means of currying: instead of writing a function of, say, 2 arguments:

(define (add x y) (+ x y))
(add 3 4)

you define a function which takes one argument (x) and returns a function which, in turn, takes one argument (y) and returns the final result:

(define (add x) (lambda (y) (+ x y)))
((add 3) 4)
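A generic helper that curries any two-argument procedure is a one-liner (the name curry2 is mine, just for illustration):

(define (curry2 f)
  (lambda (x) (lambda (y) (f x y))))

(((curry2 +) 3) 4) ; => 7, just like ((add 3) 4) above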

Again, Haskell offers a pretty convenient syntax. In Haskell, you can define add as:

add x y = x + y

which gets assigned, when applied to integers, the following type:

add:: Int -> (Int -> Int)

that is, add is not a function from pairs of integers to integers, but a function that takes an integer and returns a function of type Int -> Int. Finally, we can also deal with functions taking no arguments and with constant values by introducing a special type, called unit or 1 (or void in C-ish), which has a unique value (spelled () in Haskell). Constants of our language (as, e.g., True or 43.23) are then represented by arrows from 1 to the constant’s type; for instance, True is a 1 -> Boolean arrow. The unit type is an example of what in category theory is known as a terminal object.
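If you want to play with this idea in Scheme, here’s a rough sketch (the concrete representation is just one choice among many, and all the names are mine): pick a value to stand for the unit type’s only inhabitant, and view each constant as a one-argument procedure from unit to the constant’s type.

(define unit 'unit)                   ; the single value of our unit type
(define (true-arrow u) #t)            ; plays the role of the 1 -> Boolean arrow
(define (forty-three-arrow u) 43.23)  ; a 1 -> Real arrow

(true-arrow unit)        ; => #t
(forty-three-arrow unit) ; => 43.23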

Now that we have successfully modelled our (functional) programming language as a category (call it C), we can use the tools of the theory to explore and reason about the language constructs and properties. For instance, functors will let me recover the original motivation of this post and explore lists and functions on them from the point of view of category theory. If our language provides the ability to create lists, its category will contain objects (types) of the ‘list of’ kind; e.g. [Int] for lists of integers, [Boolean] for lists of Booleans and so on. In fact, we can construct a new sub-category CL by considering list types as its objects and functions taking and returning lists as its arrows. For each type a we have a way of constructing a type, [a] in the sub-category, i.e., we have a map from objects in C to objects in CL. That’s already half a functor: to complete it we need a map from functions in C to functions in CL. In other words, we need a way to transform a function acting on values of a given type to a function acting on lists of values of the same type. Using the notation of the previous section:

Fo(a) = [a]
Fa(f: a -> b) = f': [a] -> [b]

Fa is better known as map in most programming languages. We call the process of going from f to f' lifting (not to be confused with a related, but not identical, process known as lambda lifting), and it’s usually pretty easy to write an operator that lifts a function to a new one in CL: for instance in Scheme we would write:

(define (lift f) (lambda (lst) (map f lst)))

and for lift to truly define a functor it must behave well with respect to function composition:

(lift (compose f g)) = (compose (lift f) (lift g))

We can convince ourselves that this property actually holds by means of a simple example. Consider the function next which takes an integer to its successor; its lifting (lift next) will map a list of integers to a list of their successors. We can also define prev and (lift prev) mapping (lists of) integers to (lists of) their predecessors. (compose next prev) is just the identity, and, therefore, (lift (compose next prev)) is the identity too (with a lifted signature). But we obtain the same function if we compose (lift next) and (lift prev) in CL, right? As before, there’s nothing specific to Scheme in this discussion. Haskell even has a Functor type class capturing these ideas. The class defines a generic lift operation, called fmap, which generalizes our list lifting to arbitrary type constructors:

fmap :: (a -> b) -> (f a -> f b)

where f a is the new type constructed from a. In our previous discussion, f a = [a], but if your language gives you a way of constructing, say, tuples, you can lift functions on given types to functions on tuples of those types, and repeat the process with any other type constructor at your disposal. The only condition for it to be called a functor is that identities are mapped to identities and composition is preserved:

fmap id = id
fmap (p . q) = (fmap p) . (fmap q)
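Back on the Scheme side, here’s roughly how one could check the law for the next/prev example discussed above (a throwaway sketch: next and prev are my own definitions, while lift and compose are the ones given earlier):

(define (next n) (+ n 1))
(define (prev n) (- n 1))

((lift (compose next prev)) '(1 2 3))        ; => (1 2 3)
((compose (lift next) (lift prev)) '(1 2 3)) ; => (1 2 3), the same list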

I won’t cover usage of type constructors (and their associated functors) other than lists, but just mention a couple of them: monads, another paradigmatic one beautifully (and categorically) discussed by Stefan Klinger in his Programmer’s Guide to the IO Monad – Don’t Panic (also discussed at LtU), and the creation of a dance and music library, for those of you looking for practical applications.

To be continued…

Returning to lists, what the lifting and categorical description above buys us is a way to formalize our intuitions about list operations, and to transfer procedures and patterns on simple types to lists. In SICP’s section on Sequences as conventional interfaces, you will find a hands-on, non-mathematical dissection of the basic building blocks into which any list operation can be decomposed: enumerations, accumulators, filters and maps. Our next step, following the bananas article i mentioned at the beginning, will be to use the language of category theory to provide a similar decomposition, but this time we will talk about catamorphisms, anamorphisms and similarly funny named mathematical beasts. What the new approach will buy us is the ability to generalize our findings beyond the list domain and onto the so-called algebraic types. But this will be the theme of a forthcoming post. Stay tuned.

Further reading

The best introductory text on Category Theory i’ve read is Conceptual Mathematics : A First Introduction to Categories by F. William Lawvere (one of the fathers of Category Theory) and Stephen Hoel Schanuel. It assumes no sophisticated mathematical background, yet it covers lots of ground. If you feel at home with maths, the best option is to learn from the horse’s mouth and get a copy of Categories for the Working Mathematician, by Mac Lane.

The books above do not deal with applications to Computer Science, though. For that, the canonical reference is Benjamin Pierce’s Basic Category Theory for Computer Scientists, but i find it too short and boring: a far better choice is, in my opinion, Barr and Wells’s Category Theory and Computer Science. A reduced version of Barr and Wells’s book is available online on the site for their course Introduction to Category Theory. They are also the authors of the freely available Toposes, Triples and Theories, which will teach you everything about monads, and then more. Maarten Fokkinga is the author of this 80-page Gentle Introduction to Category Theory, with a stress on the calculational and algorithmical aspects of the theory. Unless you have a good mathematical background, you should probably take gentle with a grain of salt.

Let me close by mentioning a couple of fun applications of Category Theory, for those of you that know a bit about it. Haskell programmers will like this implementation of fold and unfold (as a literate program) using F-(co)algebras and applied to automata creation, while those of you with a soft spot for Physics may be interested in John Baez’s musings on Quantum Mechanics and Categories.


Continuation kata

If you have a little Scheme under your belt, just plunge into the challenge right now:

We want to find three integers, x, y and z, such that all of them are in the range [2,9] and they are the lengths of the sides of a right triangle (x^2 = y^2 + z^2). The challenge is to write two procedures, in-range and fail, such that the solution to the problem looks like this:

(let ((x (in-range 2 9))
      (y (in-range 2 9))
      (z (in-range 2 9)))
  (if (= (* x x) (+ (* y y) (* z z)))
      (list x y z)
      (fail "no solution")))

In other words, we are after an elegant way of implementing backtracking search: the possible (x, y, z) triplets must be checked in turn, until a solution is found or we exhaust the search space. This is the kind of problem that declarative languages like Prolog solve in a breeze. Scheme does not fare badly, either: the right solution is just a screenful long. Keep on reading if you feel like getting some hints.

I’ve stolen this little exercise from an excellent talk by Marc Feeley on writing a Scheme to C compiler. The example pops up when Marc is discussing what makes such a compiler tricky (that is, what Scheme features are missing in C): tail calls, garbage collection, closures and first class continuations. So here goes the first hint: the solution uses call/cc.

Continuations 101

I couldn’t put a finger on it, but there’s something pesky about continuations: in my experience, they’re hard to grasp at first. And yet, the concept is quite simple: whenever you are evaluating an expression E, there’s someone waiting to do something with the value of E; for instance, in the process of evaluating:

(+ 3 (* 5 4))

if you take E to be (* 5 4), the interpreter is waiting for its value in order to add 3 to it… this thing to be done to the result of evaluating a (sub)expression can be, quite naturally, represented by a procedure that takes a single argument; in our example, this procedure could be:

(lambda (v) (+ 3 v))

or, if you are evaluating the whole thing on a REPL toplevel that will print the result, maybe something roughly equivalent to:

(lambda (v) (print (+ 3 v)))

It is this (usually invisible) procedure that we call the current continuation. Every expression has a continuation, but in many languages it is only implicit. Scheme gives you the possibility of accessing it. If you write:

(+ 3 (call/cc (lambda (cont) (* 5 4))))

that could be translated as

(+ 3
   (let ((cont (lambda (v) (+ 3 v))))
     (* 5 4)))

Of course, things only get interesting when you use cont; for instance, it is hopefully clear that

(+ 3 (call/cc (lambda (cont) (* 5 4) (cont 18))))

evaluates to 21. Besides writing silly examples and boring your readers to tears, you can put this stuff to good use in a variety of ways. The most immediate is an early escape from a long computation. This procedure multiplies all the elements in a given list:

(define (mult lon)
  (cond ((null? lon) 1)
        ((zero? (car lon)) 0)
        (else (* (car lon) (mult (cdr lon))))))

but it will do a lot of useless work when lon contains a 0. First class continuations allow a better way:

(define (mult-k lon cont)
  (cond ((null? lon) 1)
        ((zero? (car lon)) (cont 0))
        (else (* (car lon) (mult-k (cdr lon) cont)))))

(define (mult lon) (call/cc (lambda (cont) (mult-k lon cont))))

If you understand why this is better than the previous version, you’re on your way to understanding continuations as non-local exits. And if by now you’re thinking that continuations are not, after all, a big deal, quick, tell me what

(((call/cc (lambda (k) k)) (lambda (x) x)) "HEY!")

evaluates to, and why. By the way, this nice micro-kata is from TSPL’s section on continuations, which provides a nice, if brief, introduction to call/cc, including a description of another typical use case: the implementation of threading via coroutines. A more extended (but still not too long) discussion of the very same issues can be found in Dan Friedman’s beautiful paper Applications of Continuations.

Finally, if you have a little time on your hands, read through this interesting ll1 mail thread, where guys like Matthias Felleisen, Avi Bryant or Paul Graham have their say on continuations and their uses.

Backtracking

Since coroutines are not needed to solve our original kata, let me gloss over them and jump to the next typical use of continuations: backtracking. A key thing to remember about call/cc is that the continuation passed to the lambda form is a first class value. Among other things, that means that you can store it and use it in a totally unrelated context. Or, in other words, Scheme introduces time travel into your toolset, with its associated wonders (as in Schelog, a Prolog implemented in Scheme) and headaches.
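
To get a feel for what storing a continuation buys you, here’s a tiny sketch (plain Scheme; the names are made up) of that kind of time travel: we stash the continuation of an addition and re-enter it later, from outside the expression.

(define saved-k #f)

(define (remember!)
  (call/cc
   (lambda (k)
     (set! saved-k k) ;; keep the continuation around for later
     0)))             ;; first time through, just return 0

(display (+ 1 (remember!))) ;; prints 1
(newline)
(if saved-k
    (let ((k saved-k))
      (set! saved-k #f) ;; travel back only once
      (k 41)))          ;; re-enters the addition above: prints 42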

Let’s see how backtracking can be implemented with the aid of persistent continuations using an example adapted from Jacques Chazarain’s book Programmer avec Scheme (which, by the way, makes for an excellent second book on Scheme, provided you read French). Writing a procedure that looks for the first occurrence of an element in a list that satisfies a given predicate is a piece of cake:

(define (search lst p?)
  (cond ((null? lst) #f)
        ((pair? lst) (or (search (car lst) p?)
                         (search (cdr lst) p?)))
        ((p? lst) lst)
        (else #f)))

where we accept lists of lists too. But what if i want to get all the findings, one by one? I’d like to have a solution generator that returns a procedure which, in successive calls, yields each of the occurrences of a solution in the given list. That is, we want to be able to write code like this one:

(define solutions
  (search-generator '(0 ((1 a) 2) b (b c) (((6)))) number?))
(solutions) => 0
(solutions) => 1
(solutions) => 2
(solutions) => 6
(solutions) => 'done

Persistent continuations allow a very clean implementation of search-generator. The strategy is to start the search and, each time we find an element in the list satisfying the predicate, store the current continuation (which will keep on searching for more solutions) for later invocations. We then return a procedure that calls the continuation and stores a new one, which will resume the search with the rest of the list. You can have a little fun trying to find the solution before reading it below. Or, if you get stuck, read Ferguson and Deugo’s excellent Call with Current Continuation Patterns, where you’ll find examples of call/cc in action.
Got your answer? Well, let’s compare:

(define (search-generator lst p?)
  (let ((success '?)) ;; placeholder for the current continuation
    (letrec ((cont-success ;; next continuation
              (lambda (x) (search lst)))
             (search
              (lambda (elem)
                (cond ((null? elem) 'done)
                      ((pair? elem) (search (car elem))
                                    (search (cdr elem)))
                      ((p? elem) (call/cc
                                  (lambda (k) ;; next search will continue from here
                                    (set! cont-success k)
                                    (success elem))))
                      (else 'done)))))
      (lambda ()
        (call/cc (lambda (k)
                   (set! success k)
                   (cont-success 'done)))))))

Once you grasp how this works, you have all the ingredients to solve our original kata.

Not only Scheme

There are other languages with support for first class continuations, SML/NJ and Ruby being the most salient cases. Besides, Seaside implements them for our beloved Smalltalk.

You don’t need call/cc to play some of the tricks discussed above. For instance, one can implement backtracking in Python (the linked article contains also an intro to continuations) or Standard ML (ditto, also with examples in Pascal and, of course, SML), using a technique known as Continuation Passing Style, or CPS, which consists in explicitly passing the continuation to each of your functions. Explaining CPS would take another article, so, for now, i will let you explore it by yourself, but i’ll mention that, armed with CPS, you can, essentially, play all the call/cc tricks. A few years ago, type theorist extraordinaire Benjamin C. Pierce asked on the Caml mailing list about cool CPS usages, and took the time to summarize the responses he got. It contains pointers to many mind-bending readings.
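
To give just a quick taste of the style, here’s the mult example from above rewritten in CPS (a sketch, with no call/cc in sight): the continuation becomes an explicit extra argument, every recursive call is a tail call, and bailing out early amounts to simply not invoking the continuation.

(define (mult-cps lon k)
  (cond ((null? lon) (k 1))
        ((zero? (car lon)) 0) ;; bail out: never call k, so no multiplications happen
        (else (mult-cps (cdr lon)
                        (lambda (r) (k (* (car lon) r)))))))

(define (mult-via-cps lon)
  (mult-cps lon (lambda (r) r))) ;; the identity as the initial continuation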

The solution

I almost forgot: if you give up on finding our kata’s solution, or want to check it out, you’ll find it in Marc Feeley’s talk for the Montreal Scheme/Lisp Users Group. As i’ve mentioned, it deals with the implementation of a Scheme to C compiler (Marc is the lead developer of Gambit-C), which is based on CPS. The site contains links to the talk slides and videos, and a beautiful pure-Scheme implementation of the compiler. Enjoy.
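
In case you want to compare notes after watching the talk, here is one possible shape of the solution, a sketch in the classic call/cc backtracking style (the details may well differ from Marc’s version): in-range captures the continuation waiting for a value, and fail is rebound at each choice point so that invoking it re-enters that continuation with the next candidate.

(define fail
  (lambda (msg) (error msg))) ;; no choice points left: give up

(define (in-range low high)
  (call/cc
   (lambda (k) (enumerate low high k))))

(define (enumerate low high k)
  (if (> low high)
      (fail "no solution")     ;; this range is exhausted: backtrack further up
      (let ((previous-fail fail))
        (set! fail             ;; on failure, restore and retry with the next value
              (lambda (msg)
                (set! fail previous-fail)
                (enumerate (+ low 1) high k)))
        (k low))))             ;; hand the current candidate to the waiting let

With these definitions (and left-to-right evaluation of the let bindings), the kata expression above evaluates to (5 3 4).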


Beyond mainstream object-oriented programming

Introduction

After a few scheming years, i had come to view objects as little more than poor man’s closures. Rolling a simple (or not so simple) object system in Scheme is almost a textbook exercise. Once you’ve got statically scoped, first-class procedures, you don’t need no built-in objects. That said, it is not that object-oriented programming is not useful; at least in my case, i often find myself implementing applications in terms of a collection of procedures acting on the requisite data structures. But, if we restrict ourselves to single-dispatch object-oriented languages, i saw little reason to use any of them instead of my beloved Scheme.

Things started to change recently due to my discovering the pleasures of Smalltalk. First and foremost, it offers a truly empowering integrated environment to live and develop in. Second, if you’re going to use objects, using the simplest, cleanest syntax will not hurt. Add to that some reading on the beautiful design principles underlying Smalltalk, and one begins to wonder if closures aren’t, in fact, poor man’s objects; or at least i do, whenever i fall into an object-oriented mood (i guess i’m not yet ready to reach satori).

But Scheme is not precisely an ugly or badly designed language, so i needed some other reason to switch language gears for my OO programming. I knew there’s more than encapsulation or subtype polymorphism in object-land from my readings on CLOS (the Common Lisp Object System), or on Haskell’s type classes (and its built-in parametric polymorphism), but i was after something retaining Smalltalk’s elegance. And then i remembered that, when i was a regular lurker in the Tunes project‘s mailing lists and IRC channel, a couple of smart guys were implementing an OO language whose syntax was smalltalkish. That language (which, if memory serves, started life with the fun name who me?) has evolved during the last few years into a quite usable programming environment named Slate, started by Lee Salzman and currently developed and maintained by Brian Rice.

I’ve been reading about Slate during the last few days, and decided to learn it. What motivated me was discovering how Slate goes beyond mainstream object-oriented programming by incorporating well-known (but hardly used) and really powerful paradigms. In short, Slate improves Smalltalk’s single-dispatch model by introducing and combining two apparently incompatible technologies: multiple dispatch and prototype-based programming. To understand the whys and hows of Slate, there’s hardly a better way than reading Lee Salzman’s Prototypes with Multiple Dispatch. The following discussion is, basically, an elaboration of Lee’s explanation of the limitations of mainstream OO languages, and how to avoid them with the aid of PMD.

(Note: click on the diagrams to enlarge them, or, if you prefer, grab a PDF of the whole set.)

Fishes and sharks

Let’s start by showing why on earth you would need anything beyond Smalltalk’s object system (or any of its modern copycats). Consider a simple oceanographic ecosystem analyser, which deals with (aquatic) Animals, Fishes and Sharks. These are excellent candidates for class definitions, related by inheritance. Moreover, we are after modeling those beasts’ behaviours and, in particular, their reactions when they encounter each other: each time a Shark meets a Fish of another species, the Shark will swallow the other Fish, while when a Shark meets another Shark they will fight. As a result of such fighting, Sharks get unhealthy, which regrettably complicates matters: wounded sharks won’t try to eat other fishes, and will swim away from other sharks instead of fighting them. The image on the left provides a sketchy representation of the code we need to model our zoo. The waters are quickly getting muddy, implementation-wise.

On the one hand, subtype polymorphism based just on the object receiving the encounter message is not enough: we need, in addition, to take into account the argument’s concrete type to implement the desired behaviour. This is a well-known issue in single-dispatch languages, whose cure is, of course, moving to multiple dispatch (see below). In particular, we want to avoid the need to modify existing classes whenever our hierarchy is extended.

On the other hand, varying state (exemplified here by the Shark’s isHealthy instance variable) complicates the implementation logic. As we will see, prototype-based languages offer a way to factor out this additional complexity.

Beyond single-dispatch

The need to adjust behaviour on the basis of the type of both a message receiver and its arguments arises frequently in practice. So frequently, in fact, that a standard way of dealing with it has been christened the Visitor design pattern. The technique, also known as double dispatch, is well known: you can see, for instance, how it’s applied to arithmetic expressions in Smalltalk, or read about a generic implementation of multimethods in Python (which also includes a basically language-independent discussion of the issues at hand). If you happen to be a C++ programmer, you may be tempted to think that global functions and overloading solve the problem in that language. Well, think twice: a proper implementation of multiple dispatch in C++ needs RTTI and templates, as shown in this article.

CLOS and Dylan are two examples of languages solving the issue from the outset by including support for multi-methods. The idea is to separate methods from classes (which only contain data slots). As shown in the pseudo-code of the accompanying figure, methods are defined as independent functions with the same name, but differing in their arguments’ types (in CLOS, a set of such methods is called a generic function). When a generic function is called, the system selects the actual method to be invoked using the types of all the arguments used in the invocation. The encounter generic function in our running example provides a typical case, as shown in the figure on the right. The benefits of having multi-methods at our disposal are apparent: the code is simpler and, notably, adding new behaviours and classes to the system does not require modifying existing code. For instance, we can introduce a Piranha, which eats unhealthy sharks instead of swimming away from them, by defining the requisite class and methods, without any modification whatsoever to the already defined ones.
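
Just to make the mechanics concrete without leaving Scheme, here’s a toy sketch of dispatching on the types of both arguments (this is neither CLOS nor Slate syntax, and all the names are invented): animals are plain tagged values, and encounter looks up a method using both tags.

(define (make-animal type) (list 'animal type))
(define (animal-type a) (cadr a))

(define encounter-methods '()) ;; entries of the form ((type1 type2) . procedure)

(define (define-encounter! t1 t2 proc)
  (set! encounter-methods
        (cons (cons (list t1 t2) proc) encounter-methods)))

(define (encounter a b)
  (let ((entry (assoc (list (animal-type a) (animal-type b)) encounter-methods)))
    (if entry
        ((cdr entry) a b)
        'swim-on))) ;; default behaviour when no method matches

(define-encounter! 'shark 'fish  (lambda (s f) 'swallow))
(define-encounter! 'shark 'shark (lambda (s1 s2) 'fight))
;; adding the piranha later touches no existing code:
(define-encounter! 'piranha 'shark (lambda (p s) 'eat-if-unhealthy))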

On the downside, we still have to deal with the complications associated with internal state. Enter the magic world of prototype-based systems.

The ultimate dynamic

If you like dynamic languages, chances are you’ll find prototype-based systems an almost perfect development environment. Prototype-based languages emerged as an evolution of Smalltalk with the invention of Self by David Ungar and Randall B. Smith during the late eighties. The key idea behind Self is noticing that, most of the time, class definitions needlessly coerce and complicate your object model.

A class definition becomes a contract to be satisfied by any instance, and it is all too easy to miss future or particular needs of your objects (class-based inheritance is just a partial solution to this problem, as shown, for instance, by the so-called fragile base class problem). But, if you look around you, objects change in internal behaviour and data content continuously, and our attempts at distilling their Platonic nature are often in vain.

In prototype-based programming, instead of providing a plan for constructing objects, you simply clone existing instances and modify their behaviour by directly changing the new instance’s slots (which provide uniform access to methods and state). New clones contain a pointer to their parent, from which they inherit non-modified slots; there is no way to access state other than via messages sent to instances, which simplifies dealing with state.
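
The mechanics are simple enough that a toy version fits in a few lines of Scheme (a bare-bones sketch with invented names, nowhere near what Self or Slate actually offer): an object is a bag of slots plus a parent, and any slot not found locally is delegated to the parent.

(define (make-object parent . slots) (list parent slots))
(define (object-parent obj) (car obj))
(define (object-slots obj) (cadr obj))

(define (slot-ref obj name)
  (let ((slot (assq name (object-slots obj))))
    (cond (slot (cdr slot))
          ((object-parent obj) (slot-ref (object-parent obj) name))
          (else (error "message not understood:" name)))))

(define (clone obj . new-slots)
  (apply make-object obj new-slots)) ;; new slots shadow the parent's

(define fish (make-object #f (cons 'eats 'smaller-fishes)))
(define shark (clone fish (cons 'fights 'other-sharks)))
;; (slot-ref shark 'eats) => smaller-fishes, delegated to its parent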

Class-based languages oblige you to keep two relationships in mind to characterize object instances: the “is-a” relationship of the object with its class, and the “kind-of” relationship of that class with its parent. In Self, inheritance (or behaviour delegation) is the only one needed. As you can see, Self is all about making working with objects as simple as possible. No wonder Ungar and Smith’s seminal paper was titled Self: The Power of Simplicity. Needless to say, a must read.

The figure on the left shows how our running example would look in selfish pseudo-code. As promised, state is no longer surfacing in our method implementation’s logic. Unfortunately, we have lost the benefits of multi-methods in the process. But fear not, for, as we will see, you can eat your cake and have it too. Instead of pseudo-code, you can use Self itself, provided you are the happy owner of a Mac or a Sun workstation. Or you can spend 20 fun minutes watching the Self video, which features the graphical environment accompanying the system. Like Smalltalk, Self provides you with a computing environment where objects are created, by cloning, and interact with you. The system is as organic and incremental as one can possibly get.

Of course, you’re not limited to Self. For instance, Ken Dickey fleshed out Norman Adams’ saying that objects are a poor man’s closures by offering a prototype-based object system in Scheme, and, more recently, Neil Van Dyke has released Protobj. And you have probably already used a very popular language in the family: JavaScript. The list goes on, although, unfortunately, many of these languages lack either Self’s nice integrated environment, or a portable, up-to-date implementation. Slate to the rescue.

The best of both worlds

Prototyping and multiple dispatch are, at first sight, at odds. After all, method dispatching based on arguments’ type needs, well, a type for each argument, doesn’t it? As it happens, Lee Salzman and Brian Rice have envisioned a way of combining the power of both paradigms into Slate. In fact, proving how this is possible is the crux of Lee’s article. In addition, Slate aims at providing a complete development environment in the vein of Smalltalk or Self. Too good to be true? In future installments of this blog category, we’ll see how and why it’s true, but, if you cannot wait, just run-not-walk to Slate’s site. You’ll have a great time.


Programmers love writing (and mocking) tests

Aggressive unit testing (also known, by those willing to earn some easy money, by the buzzword test-driven development) is about the only practice from XP that i embraced happily in my days as a dot-com employee (it was around 2000, and i was working for one of those companies that got lots of funding from avid but hopelessly naive investors to do… well, to do basically nothing).

Those were my long-gone Java days, and i got test-infected all the way down. That writing tests is a good practice is hardly news for any decent programmer, but what i especially liked about putting the stress on writing lots of tests was discovering how conducive the practice is to continuous code rewriting (count that as the second, and last, extreme practice i like). I had yet to discover the real pleasures of bottom-up development in REPL-enabled languages, but even in Java my modus operandi consisted basically in starting with concrete code (sometimes even a mere implementation detail) and making the application grow up from there. Of course, some brainstorming and back-of-the-envelope diagramming was involved, but the real design, in my experience, only appears after fighting for a while with real code. The first shot is never the right one, nor the second, for that matter. The correct design surfaces gradually, and i know i’ve got it right when unexpected extensions to the initially planned functionality just fit in smoothly, as if they had been foreseen (as an aside, i feel the same about good maths: everything finds its place in a natural way). When you work like that, you’re always rewriting code, and having unit tests at hand provides a reassuring feeling of not throwing the baby out with the bathwater during your rewriting frenzies.

Of course, there’s also a buzzword-compliant name for such rewritings (refactoring), and you can spend a few bucks to read some trivialities about all that. (Mind you, the two books i just dismissed have been widely acclaimed, even by people whose opinion i respect, so maybe i’m being unfair here; in my defense, i must say i’ve read (and paid for) both of them, so, at least, my opinion has cost me money.)

Anyway, books or not, the idea behind the JUnit movement is quite simple: write tests for every bit of working code you have, or, if you’re to stand by the TDD tenets (which i tend not to do), for every bit of code you plan to write. As is often the case, the Java guys were not inventing something really new: their libraries are a rip-off of the framework proposed by Kent Beck for Smalltalk. Under the SUnit moniker, you’ll find it in every single Smalltalk implementation these days. A key ingredient in these frameworks’ success is, from my point of view, their simplicity: you have a base test class from which to inherit basic functionality, and you extend it to provide testing methods. Languages with a minimum of reflection power will discover and invoke those methods for you. Add some form of test runner, and childish talk about an always-green bar, and you’ve got it. The screenshot on the left shows the new SUnit Test Runner in one of my Squeak 3.9 images, but you’ll get a better glimpse of what writing unit tests in Squeak feels like by watching this, or this, or even this video from Stéphane Ducasse‘s collection.

Of course, you don’t need to use an object-oriented language to have a unit testing framework. Functional languages like Lisp provide even simpler alternatives: you get rid of base classes, exchanging them for a set of testing procedures. The key feature is not a graphical test runner (which, like any graphical tool, gets in the way of unattended execution: think of running your test suites as part of your daily build), but a simple, as in as minimal as possible, library providing the excuse to write your tests. Test frameworks are not rocket science, and you can buy one as cheap as it gets: for instance, in C, i’m fond of MinUnit, a mere three-liner:

/* file: minunit.h */
#define mu_assert(message, test) do { if (!(test)) return message;  \
                                    } while (0)

#define mu_run_test(test) do { char *message = test(); tests_run++; \
                               if (message) return message; } while (0)

extern int tests_run;

(Let me mention in passing, for all of you non-minimalistic C aficionados, the latest and greatest (?) in C unit testing: libtap.) Add to this a couple of Emacs skeletons and an appropriate script and you’re well on your way towards automated unit testing. From here, you can get fancier and add support for test suites, reporting in a variety of formats, and so on; but, in my experience, these facilities are, at best, trivial to implement and, at worst, of dubious utility. It’s the quality and exhaustiveness of the tests you write that matters.

Lisp languages have many frameworks available. The nice guys of the CL Gardeners project have compiled a commented list of unit testing libraries for Common Lisp. In Scheme you get (of course) as many testing frameworks as implementations. Peter Keller has written an R5RS-compliant library that you can steal from Chicken. Noel Welsh’s SchemeUnit comes embedded into PLT, and the Emacs templates are already written for you (or, if your mileage varies and you’re fond of DrScheme, you can have a stylized version of the green bar too). Personally, i don’t use PLT, and find Peter’s implementation a bit cumbersome (meaning: too many features that i don’t use and that clutter the interface). Thus, my recommendation goes to Testeez, by Neil Van Dyke of quack fame. Testeez is an R5RS (i.e., portable), lightweight framework that is as simple as possible. Actually, it’s simpler than possible, at least in my book. In my opinion, when a test succeeds it should write nothing to the standard (or error) output. Just like the good old unix tools do. I only want verbosity when things go awry; otherwise, i have better things to read (this kind of behaviour also makes writing automation and reporting scripts easier). So, as a matter of fact, i use a hacked version of Testeez which has customizable verbosity levels. It’s the same version that we use in Spells, and you can get it here. Also of interest are Per Bothner’s SRFI-64 (A Scheme API for test suites) and Sebastian Egner’s SRFI-78 (Lightweight testing), both including reference implementations.
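
Just to illustrate how little machinery is really needed (and the silence-on-success behaviour i’m advocating), here’s a minimal checker along those lines; it’s a sketch of the idea, not Testeez’s actual API:

(define failed-checks 0)

(define (check name expected thunk)
  (let ((actual (thunk)))
    (if (equal? expected actual)
        'ok ;; say nothing when all is well
        (begin
          (set! failed-checks (+ failed-checks 1))
          (display "FAIL ") (display name)
          (display ": expected ") (write expected)
          (display ", got ") (write actual)
          (newline)))))

;; (check "squares" 25 (lambda () (* 5 5)))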

Lisp testing frameworks abound for a reason: they’re extremely useful, yet easy to implement. As a consequence, they make good candidates for non-trivial learning projects. A nice example can be found in Peter Seibel’s Practical Common Lisp (your next book if you’re interested in Common Lisp), which introduces macro programming by implementing a decent testing library. In the Scheme camp, Noel discusses the ups and downs of DSL creation in an article describing, among other things, the SchemeUnit implementation. Worth reading, even for non-beginners.

Once you settle on a test framework and start writing unit tests, it’s only a question of (short) time before you’re faced with an interesting problem, namely, really writing unit tests. That is, you’re interested in testing your functions or classes in isolation, without relying on the correctness of the other modules you’ve written. But of course, your code under test will use those other modules, and you’ll have to write stubs: fake implementations of the external procedures, returning pre-cooked results. In Lisp languages, which allow easy procedure redefinition, it’s usually easy to be done with that (there’s a tiny sketch of the idea below). People get fancier, though, especially in object-oriented, dynamic languages, by using mock objects. The subject has spawned its own literature and, although i tend to think it unduly complicates a simple problem, reading a bit about mockology may help you discover the kind of things that can be done when one has a reflective run-time available. Smalltalk is, again, a case in point, as Sean Mallory shows in his stunningly simple implementation of Mock Objects. Tim Mackinnon gets fancier with his SMock library, and has coauthored a very interesting article entitled Mock Roles, Not Objects, where mock objects are defined and refined:

a technique for identifying types in a system based on the roles that objects play. In [9] we introduced the concept of Mock Objects as a technique to support Test-Driven Development. We stated that it encouraged better structured tests and, more importantly, improved domain code by preserving encapsulation, reducing dependencies and clarifying the interactions between classes. [...] we have refined and adjusted the technique based on our experience since then. In particular, we now understand that the most important benefit of Mock Objects is what we originally called interface discovery [...]

An accompanying flash demo shows SMock in action inside Dolphin Smalltalk. The demo is very well done and i really recommend taking a look at it, not only to learn to use Mock Objects, but also as a new example of the kind of magic tricks allowed by Smalltalk environments. Albeit not as fancy, Objective-C offers good reflective features, which are nicely exploited in OCMock, a library which, besides taking advantage of Objective-C’s dynamic nature, makes use of the trampoline pattern (close to the heart of every compiler implementer) “so that you can define expectations and stubs using the same syntax that you use to call methods”. Again, a good chance to learn new, powerful dynamic programming techniques.
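
And, back in Lisp land, the easy-procedure-redefinition route mentioned above can be as simple as the following sketch (all the names are invented): swap the collaborator for a canned fake around the test, then put the real one back.

(define (fetch-quote symbol)
  (error "talks to the network: not what we want in a unit test"))

(define (with-stubbed-fetch-quote fake thunk)
  (let ((real fetch-quote))
    (dynamic-wind
      (lambda () (set! fetch-quote fake))
      thunk
      (lambda () (set! fetch-quote real)))))

;; (with-stubbed-fetch-quote
;;  (lambda (sym) 42) ;; pre-cooked result
;;  (lambda () (* 2 (fetch-quote 'acme)))) => 84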

As you can see, writing tests can be, a bit unexpectedly, actually fun.


Developing LISA Pathfinder

Since last June, i’ve been working on a project called LISA Pathfinder, a forerunner of the future space-based LISA Gravitational Wave Detector. Pathfinder is a joint NASA/ESA effort, and Spain is in charge of developing both the hardware and the software for the so-called Data Management Unit flying in the mission’s only satellite.

I’m mostly on the software part. The team is small and we have no architecture astronauts around: everyone is trying their very best to make this fly, instead of playing the corporate naming game. And i even have a good old friend on the team.

As for the more techie stuff, the good news is that this is a pretty challenging project: we’re programming an embedded system that will be running on a 14MHz ERC32 chip, and one of its components must fit in a mere 32K PROM; a second component will have at its disposal the huge amount of 1MB of RAM, enough to run a real-time kernel at leisure. The not-so-good news is that this is the realm of C, so, in principle, one does not expect dynamic languages all over the place. And yet, we have managed to sneak some cool stuff into our tool chain.

C++ was discarded very early, and with it all the associated UML, RUP, XML and object-oriented-as-the-cure-of-all fanfare. Been there, done that. All this industry-standard nonsense serves mostly, in my experience, to convey a warm fuzzy feeling to incompetent managers who hear about technology in cool IT sites for the Enterprise Professionals and the Architect in you. Fortunately, my immediate bosses are not that clueless, and we opted for good old C and a Yourdon-like design methodology (PDF) invented by Ward and Mellor in the eighties to model real-time systems. Funny how some people look at you over their shoulder, almost with contempt, for using such obviously outdated stuff. Architecture Astronauts. Again.

But don’t get me started. As i was saying, C makes for a good high-level assembly when used properly, and we’re trying hard to use it properly. Since we’re very close to the metal, we also have, every now and then, very interesting excursions into RISC assembly. I cannot possibly overstate how enriching working this close to the hardware is for any programmer. Don’t miss the chance when it appears. An assembly language, especially one of the RISC family, belongs in your toolbox, even if you’re programming in Lisp.

Emacs is all around. We have used its almost boundless plasticity to automate every imaginable development task, tailored to the tiniest detail. Writing a handful of Elisp functions is usually all that’s needed to adapt the environment to your exact necessities. It’s like having an IDE written for you and your project’s needs. You adapt your environment to your problem, not the other way around, gluing together all the tools you use (version control, document generators, linting tools, you name it) in the smoothest way. No other IDE du jour, however flashy or dead-easy to use, comes close.

As for version control, we wavered between darcs and bazaar for a while, and finally chose the latter. I’m not sure it was the right decision. Since then, i’ve grown fonder of darcs’ elegance, its simplicity without sacrificing power. Moreover, it has some features that i miss in bazaar, most notably the ability to record patches on a per-hunk basis. At the time, we opted for bazaar because of its ability to version symbolic links and, mostly, its great Emacs integration via xtla. CVS and svn were discarded from the start because we think that distribution and patch-oriented commits are indispensable in any decent SCM.

I would have loved using Conjure as our build system but, unfortunately, we are by no means there yet. So we opted for SCons, which makes for a pretty good Make replacement in terms of versatility and abstraction power (which boils down, at the end of the day, to reusability). Besides, albeit not as beautiful as Scheme, Python is a decent language and we already have some know-how in our team (the rest of us now have an excuse to learn it; let me say in passing that a continuous itch for learning is top of my list of the Top Ten Qualities of Good Programmers). Python is also our language for the infrastructure chores that Emacs does not handle, namely, all the batch tasks like automatic test suite running and reporting, or our nightly build.

We also use Python to program the hardware simulators needed for the integration and system tests. As a matter of fact, our test cases are Python scripts. The degree of flexibility we gain by writing them in a dynamic language is so huge and evident that i cannot help getting upset every time a passing astronaut asks me, with a skeptical look and suspicion-laden undertones, why we aren’t using C or C++.

Last but not least, all our documents are written in LaTeX. Those of you working in corporate environments involving lots of companies will surely understand how happy i feel about this. I spent a weekend writing a LaTeX document class that mimics the Word style everyone else in the project is using. Now, our documents not only look better. We have them in version control, like the rest of the code, and a growing library of scripts takes care of generating cross-indexes, synchronising code and documentation, checking for untraced requirements, generating the proverbial traceability matrices, maintaining centralised lists of acronyms and bibliography, and so on and so forth. And of course, i don’t have to leave Emacs to write my docs.

I can think of better projects, and better ways of getting things done, but, all in all, this one is not that bad.

Thanks for reading.

