Comprehensive Hutton

After yawning during the first 10 installments of the series, i’ve just watched a really beautiful talk (with accompanying slides and literate code) by guest star Graham Hutton in the C9 Lectures webcast [0]. Graham devotes it to showing how to solve the countdown puzzle for numbers (finding ways of combining n integers to obtain a target one, using basic arithmetic operations) in Haskell. The lecture is a very nice showcase of list comprehensions and how they allow the elegant expression of solutions to a certain kind of problem (basically, tree searches in this case), without precluding incremental optimizations (pruning of the search space, in the problem at hand). In addition, you’ll see a very simple, but enlightening, example of program fusion as a technique to boost performance. The code is beautiful and straightforward and, most notably, retains those qualities even after optimization. Hutton is a good speaker and teacher, and the lecture is accessible to anyone with a bare-bones knowledge of Haskell. And the fun doesn’t end with the video: there’s also an accompanying paper, most aptly included in the functional pearls series, providing all the details, code and proofs. A discussion is also present in Graham’s book which, to judge from the quality of the paper, must be an excellent way to introduce yourself to Haskell and functional programming.
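To give a flavour of the list-comprehension style the talk exploits, here’s a compressed sketch of my own (the names and simplifications are mine; in particular, the full solution also enumerates subsequences and reorderings of the input numbers, and Hutton’s paper has the real code):

-- The arithmetic operations, with validity checks that prune the search.
data Op = Add | Sub | Mul | Div deriving Show

valid :: Op -> Int -> Int -> Bool
valid Sub x y = x > y            -- keep intermediate results positive
valid Div x y = x `mod` y == 0   -- keep intermediate results integral
valid _   _ _ = True

apply :: Op -> Int -> Int -> Int
apply Add = (+)
apply Sub = (-)
apply Mul = (*)
apply Div = div

-- All values reachable by combining the given numbers, in order:
-- a tree search written as nested list comprehensions.
values :: [Int] -> [Int]
values [n] = [n]
values ns  = [ apply op x y
             | (ls, rs) <- [ splitAt i ns | i <- [1 .. length ns - 1] ]
             , x <- values ls
             , y <- values rs
             , op <- [Add, Sub, Mul, Div]
             , valid op x y ]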

----
[0] I keep reading praise for these lectures, and everybody seems to adore E. Meijer. Yet i must confess that i find them incredibly bad: to my taste, the lecturer is dull, often misses the point, keeps drawing dubious parallels and, every now and then, makes plain (and, i’m tempted to say, glaring) conceptual errors. At the risk of losing whatever credibility i have, i wholeheartedly recommend that you not spend any time on the first ten lectures of this series. I’d take a talk by SPJ any day: that is passion and insight. Or by Hutton, for that matter.

The (PLT) future is here

James Swaine has just announced the availability of futures for mzscheme. Still a work in progress, this is the first step towards native-thread support in PLT Scheme, in the form of a parallelisation library. More concretely, we learn from the documentation that the futures API “represents a best-effort attempt to execute an arbitrary segment of code in parallel”. One creates a future by passing it a thunk, which starts running immediately in parallel. Non-parallelisable procedures are detected, and cause the future’s thread to block until the main thread touches the future: one still needs to write some code to orchestrate the parallel gig. A bit rough, but this is just a pre-alpha release: i’m sure things will only get better and better!

As mentioned, this new functionality is only available for mzscheme, so you’ll need to check out the code from svn and configure the tree with an incantation along the lines of:

$ mkdir build
$ cd build
$ ../src/configure --enable-futures --disable-mred
$ make && make install

That worked for me. Afterwards, i wrote the CPU burner suggested by James:

#lang scheme

(require scheme/future)

;; An endless loop to keep a core busy.
(define (loop) (loop))

;; Spawn one future per processor core, then touch them all,
;; making the main thread wait for their (never-arriving) results.
(define (run)
  (for-each
   touch
   (for/list ([i (in-range 0 (processor-count))])
     (future loop))))

(run)

and, lo and behold, mzscheme cpu-burner.ss is making my two little cores beat at top speed.

Of course, haskellers will hardly be moved: runtime support for multicore machines using sparks is already available in ghc, with a more robust implementation.
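For comparison, here’s a minimal spark sketch (my own toy example, not from any ghc documentation; compile with ghc --make -threaded and run with +RTS -N2 to use two cores):

import Control.Parallel (par, pseq)

-- par sparks the evaluation of its first argument in parallel with
-- the second; pseq forces b before the results are combined.
main :: IO ()
main = a `par` (b `pseq` print (a + b))
  where
    a = sum [1 .. 10000000] :: Integer
    b = sum [1 .. 20000000] :: Integer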

Happy parallel hacking!

Enjoying Haskell

I’ve been reading quite a bit about Haskell during the last few months, writing some actual code, and liking the language more and more. After many years favouring dynamically typed languages, i’m beginning to really appreciate Haskell’s type system and the benefits it brings to the table.

A common argument from the static typing camp is that the compiler catches a whole class of bugs for you, to which the dynamic camp answers that a good test suite (which you need anyway for any serious development) will catch those relatively trivial bugs for you. I tend to agree with the dynamic faction on this issue, but i also think that the strength of static typing (coupled with good type inference) is not really in the compiler catching typing bugs but, rather, in enforcing useful constraints. When you write Haskell, you have to think hard about your data types and the functions using them; the compiler will keep complaining and, most importantly, the code will feel awkward and somehow ad hoc until you find a good set of types to solve your problem.

The limits to your freedom imposed by the type system entail, in my experience, a boost in the amount of thought and imagination that i put into my design and implementation, in much the same way as the constraints imposed by metre and rhythm on poetic writing boost creativity and actually help in producing a beautiful text. Or, in fact, in the same way as any kind of constraint in any creative endeavour helps (paradoxically, at first sight) in attaining beauty, or, at least, in having fun during the process.

In my experience, the process of writing a program or library in any language is always a struggle to find the right way of expressing the solution to a problem, as the culmination of a series of approximations. The code feels better, more expressive and easy-flowing with each rewrite, until something just clicks and i get this feeling that i’ve finally captured the essence of the problem (a litmus test being that it then becomes natural to extend the solution to cases i hadn’t thought of while writing it, as if, somehow, the new solutions were already there, waiting for me to discover them). And i’m finding that the powerful type system offered by Haskell (think not only of vanilla Hindley-Milner, but also of extensions such as GADTs or type families) is helping me reach that (local) optimum quicker: satisfying its constraints means i’m closer to the final solution by the time my code compiles for the first time. You often hear Haskell programmers saying something similar (“once my code compiles, it works”), and i think it’s mostly true, except that the real reason is not that the compiler is catching trivial typing bugs but, rather, that the constraints imposed by the type system make you think harder and find the right solution. Same thing with monads, and the clean separation they provide for stateful computations: again, you must think carefully about the right combination of monads and pure code to solve the problem, and most of the time your code will simply not type check if you don’t get the solution right.
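As an illustration of my own of the kind of constraint i have in mind, here’s the classic typed-expression GADT: ill-formed expressions simply cannot be constructed, so the evaluator needs no run-time checks at all:

{-# LANGUAGE GADTs #-}

-- Each constructor records the type its expression evaluates to.
data Expr a where
  IntE  :: Int  -> Expr Int
  BoolE :: Bool -> Expr Bool
  Add   :: Expr Int -> Expr Int -> Expr Int
  If    :: Expr Bool -> Expr a -> Expr a -> Expr a

-- Total, checkless evaluation: the types did the thinking upfront.
eval :: Expr a -> a
eval (IntE n)   = n
eval (BoolE b)  = b
eval (Add x y)  = eval x + eval y
eval (If c t e) = if eval c then eval t else eval e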

There are two more ways in which Haskell’s type system is helping me write better programs. Two ways that become especially noticeable when the code base grows large enough. The first one is self-documentation: seeing the type of my functions (or asking the interpreter for them) instantly tells me almost everything i need to know to use them; in fact, when writing in dynamic languages i keep annotating function signatures with this same information, only that there i’m all by myself to ensure that the information is right. PLT’s contract system is but a recognition of the usefulness of typing in this regard, although i much prefer the terseness and notational elegance of Haskell’s type signatures over the much more verbose and, to my eyes, somewhat clunky notation used by PLT (which is not really PLT’s fault, being as it is a very schemish notation). Let me stress here that having a REPL such as ghci is a godsend (and, to me, a necessity for really enjoying the language): it will tell me the type of an expression in much the same way as decent Lisp or Scheme environments will report a function’s signature.
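For instance (picking an arbitrary Prelude function):

$ ghci
Prelude> :type lookup
lookup :: (Eq a) => a -> [(a, b)] -> Maybe b

and that one line already documents the argument order, the equality requirement on keys, and the possibility of failure.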

The second way Haskell lends a helping hand with non-trivial code bases is refactoring. As i mentioned above, i rewrite my programs several times as a rule, and rewrites almost always involve modifying data structures or adding new ones. As i grow older, i find it more and more difficult to keep in my head all the places and ways a given data structure is used in my programs, and with dynamic languages i often fall back to grepping the source code to find them. And again, their plasticity often works against me, in that they let me use those data structures in crooked ways, or forget to take into account new fields or constructors for a modified data type. Haskell’s compiler has proved an invaluable ally in my refactorings and, by comparison, modifying and maintaining my bigger dynamic programs is not as fun as it used to be.
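A sketch of what i mean (the types here are hypothetical): add a constructor, recompile, and the compiler enumerates every spot that needs updating:

-- Suppose Resize is the newly added constructor.
data Event = Click Int Int | KeyPress Char | Resize Int Int

describe :: Event -> String
describe (Click x y)  = "click at " ++ show (x, y)
describe (KeyPress c) = "key " ++ [c]
-- ghc -Wall now warns: pattern match(es) are non-exhaustive
-- in describe: patterns not matched: Resize _ _

No grepping required: every function consuming Event gets flagged.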

As an aside, types are not the only thing i’m finding enjoyable about Haskell. Its astonishing ability to express very abstract problems with a remarkable economy of expression (due, in part, to its highly tuned syntax) is extremely useful. To my mind, it mimics the process by which in maths we solve harder and harder problems by abstracting more and more, cramming more relevant information into less space (some cognitive science writers will tell you that thought, and even consciousness, consists in our ability to compress information). That means that i can capture my solutions in very high-level descriptions: initially, that makes them harder to understand but, once i feel comfortable with the basic concepts and operations, they scale up much, much better than more verbose, less sophisticated ones. Using these hard-earned concepts, i can solve much harder problems without adding to the complexity of the code in a significant way (one could say, using a loose analogy, that the solutions grow logarithmically with the problem’s complexity instead of polynomially or exponentially). A direct consequence of this expressiveness is that some well-written Haskell programs are, hands down, the most beautiful pieces of code i’ve ever seen (just pick a random post at, say, A Neighborhood of Infinity and you’ll see what i mean; or read Richard Bird’s Sudoku solver and compare his solution with one written in your favourite programming language).

Finally, let me say that i find programming in Haskell more difficult than programming in any other language i’ve used, with perhaps the only exception of Prolog. Sometimes, considerably so. And that’s perfectly fine with me. For one thing, it makes it more interesting and rewarding. In addition, i’m convinced that that’s the price to pay for being able to solve harder problems. I take issue with the frequent pleas to the effect that programming should be effortless or trivial: writing good programs is hard, and mastering the tools for doing it well takes, as with any other engineering or scientific discipline, hard work (why, i don’t hear anyone complaining that building bridges or computing the effects of gravitational lensing is too difficult). There’s no silver bullet.

All that said, please don’t read the above as an apostasy letter announcing the embracement of a new religion. There’s still much to be said in favour of dynamic languages, especially those in the Lisp family, whose malleability (fostered by their macro systems) is also a strength, in that it allows you to replicate some of the virtues i’ve been extolling in this post. Haskell lacks the power of homoiconicity, and its template mechanisms feel clunky by comparison; that’s a serious drawback in some contexts (i have yet to decide how serious, as i have yet to decide how much i’m missing in reflection capabilities). As always, it is a matter of trade-offs and, fortunately, nobody will charge you with high treason for using the language better fitted to the problem at hand, or so i hope.

flib

Update: We’ve moved the date of our first meeting to June 17th, so you’re still in time to join us! If you want to follow our adventures, you can also ask for an invitation to our mailing list.

The other day, Andy and I met Jos, an experienced schemer who lives near Barcelona, with the idea of having lunch, talking about Scheme, and creating a Scheme users group. After a bit of discussion, we agreed to widen the group’s scope, and to start what we’re calling Fringe Languages In Barcelona (FLIB). The plan is to hold periodic meetings with a main presentation followed by some lightning talks (the latter were a complete success at ILC, and we’d like to try and see how they work for us), with as much discussion interleaved as we see fit. We’ll have some refreshments available and, since we’re meeting in the very centre of the old city, visits to pubs or a restaurant for dinner and further socializing are to be expected.

As i said, we’re expecting much discussion about Scheme and Lisp, but we’re not ruling out other fine languages by any means. For instance, the talk for the inaugural session (scheduled for June 17th, 7:30 pm) is entitled The implementation of FUEL, Factor’s Ultimate Emacs Library, and it will include a short introduction to Factor (yes, i am the victim, er, speaker). Jos will come next, the same day, with a lightning talk about PLT Redex. We have free slots for more lightning talks: you are invited not only to come, but to give one if you’re so inclined. This being our first meeting, there will also be some time for logistics and organisation.

So, if you’re near here by then, by all means, come in and join the fun:

Calle del Pi 3 Principal Interior (first floor)
Barcelona

It’s not really needed but, if you’re thinking about coming, sending me a mail beforehand will help us make sure that we’ve got enough food and drinks.

We’re looking forward to getting FLIB started, and we’re sure that at least six more fringers are coming! Don’t miss it!

A Taste of Haskell

I finally found time to watch Simon Peyton Jones’ recent OSCON 2007 tutorial, A Taste of Haskell (split into Part I and Part II, around 90 minutes each). From the many comments around and a quick look at the slides, I knew it was a basic intro for non-functional programmers, and kept wondering whether it would be worth the time. From this you can easily infer that either i am a moron or i hadn’t seen one of Simon’s talks before. Not that it totally rules out the first option but, actually, i hadn’t! As it turns out, Simon is a delight to hear and see in action. He’s full of energy and enthusiasm and knows how to transmit his passion for what he does. In this regard, the talk reminds me of Abelson and Sussman’s SICP videos. These guys are not your ivory-tower academic type. They have such a great time doing what they do that they cannot help showing it off. Perhaps it’s because they got their hands dirty implementing the systems they theorize about. At any rate, Simon’s tutorial is absolutely worth watching. If you’re new to Haskell and functional programming, you belong to the perfect audience for this crash course. It is also remarkable that the attendees were quite participative and asked lots of questions, which made the lecture quite lively. That, together with Simon’s refreshing sense of humour, makes these videos great fun even for those of you who know everything about monads. Enjoy!

Quantum hype

One of the things i would really, really like to see some day is a working quantum computer. Quantum mechanics is deep magic that nobody really understands, but we have learnt a lot about how to use it during the last century, including its application to some kinds of computation. As you surely know, the most outstanding quantum algorithm is Shor’s prime factorization, which factors a number N with time complexity O((\log N)^3). That means that we go from super-polynomial to polynomial time when going from classical to quantum computing, for this particular problem (and related ones: the Wikipedia article on QC gives a pretty good survey; see also David Deutsch’s introductory lectures). I’m stressing the last point because there’s a widespread misconception that quantum computers will be able to solve NP-complete problems in polynomial time. Not so. On the contrary, experts are almost sure by now that this won’t be the case (note, by the way, that factoring is not known to be NP-complete).
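To put rough numbers on that gap (standard complexity-theory lore, not a claim about any actual machine): the best known classical algorithm, the general number field sieve, runs in sub-exponential but super-polynomial time, while Shor’s is polynomial in the number of digits:

\exp\left(\left(\tfrac{64}{9}\right)^{1/3} (\ln N)^{1/3} (\ln\ln N)^{2/3} (1+o(1))\right) \quad \text{versus} \quad O\big((\log N)^3\big)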

The most recent examples of such bogus claims are the reports on D-Wave’s demos of their ‘quantum computer’, which are surrounded by piles of hype. So please, before taking them at face value, see Scott Aaronson’s The Orion Quantum Computer Anti-Hype FAQ (more here, here and here). Scott Aaronson is an expert in the field and the author of a PhD thesis entitled Limits on Efficient Computation in the Physical World (for a less technical introduction to quantum computing, see his nice Quantum Computing Since Democritus lectures). For an executive summary, here’s the first entry in the FAQ:

  • Q: Thanks to D-Wave Systems — a startup company that’s been in the news lately for its soon-to-be-unveiled “Orion” quantum computer — is humanity now on the verge of being able to solve NP-complete problems in polynomial time?
  • A: No. We’re also not on the verge of being able to build perpetual-motion machines or travel faster than light.

The old rule applies: no silver bullet. But, of course, their limitations notwithstanding, quantum computers would (will?) be an interesting challenge for us programmers, and we don’t have to wait for the hardware to play with them: see this Brief survey of quantum programming languages, or a more in-depth description of what an imperative quantum programming language looks like, although, if you ask me, functional quantum languages like QML are nicer. Simon Gay has also put together a comprehensive Bibliography of Quantum Programming Languages.

Finally, if you’d rather write some code, there’s André van Tonder’s Scheme simulator (which will work with any R5RS scheme), and a QML simulator written in Haskell. Haskellers will also enjoy Jerzy Karczmarczuk’s Structure and Interpretation of Quantum Mechanics: a functional framework.
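If you wonder how such a simulation gets off the ground, here’s a toy sketch of my own (nothing to do with the libraries above): a qubit as a pair of complex amplitudes, with a Hadamard gate and a measurement probability:

import Data.Complex

-- Amplitudes of the |0> and |1> basis states.
type Qubit = (Complex Double, Complex Double)

-- The Hadamard gate sends a basis state to an equal superposition.
hadamard :: Qubit -> Qubit
hadamard (a, b) = ((a + b) / sqrt 2, (a - b) / sqrt 2)

-- Probability of observing |0> upon measurement.
prob0 :: Qubit -> Double
prob0 (a, _) = magnitude a ^ 2

E.g., prob0 (hadamard (1, 0)) yields 0.5 (modulo floating-point fuzz), as expected.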

Happy quantum hacking!

Writing A Lisp Interpreter In Haskell

Chances are you’ve already seen defmacro’s Writing A Lisp Interpreter In Haskell mentioned somewhere, but just in case:

A while ago, after what now seems like an eternity of flirting with Haskell articles and papers, I finally crossed the boundary between theory and practice and downloaded a Haskell compiler. I decided to do a field evaluation of the language by two means. I was going to solve a problem in a domain that Haskell is known to excel at, followed by a real world problem that hasn’t had much exploration in Haskell. Picking the problems was easy. There’s a lot of folklore that suggests Haskell is great for building compilers and interpreters so I didn’t have to think long to pick a problem that would be self contained, reasonably short, and fun – writing an interpreter of a Lisp dialect.
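To give an idea of why the exercise is so pleasant, here’s a minimal sketch of my own (nothing to do with defmacro’s actual code): algebraic data types and pattern matching take you from s-expressions to a toy evaluator in a handful of lines:

-- A bare-bones s-expression type and an evaluator for arithmetic forms.
data SExpr = Number Integer | Atom String | List [SExpr]

eval :: SExpr -> Integer
eval (Number n)               = n
eval (List (Atom "+" : args)) = sum (map eval args)
eval (List (Atom "*" : args)) = product (map eval args)
eval _                        = error "unsupported form"

-- eval (List [Atom "+", Number 1, Number 2]) ==> 3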

Also interesting at defmacro.org are these ramblings on The Nature of Lisp or on Functional Programming for the rest of us.

I’ve not read these articles in full, but a first skim left a pretty good impression.

Becoming a Haskell developer

If you like programming languages and compilers and have some free time on your hands, please take a look at this roadmap to becoming a YHC developer. The York Haskell Compiler is a relatively new effort to write a, well, Haskell compiler. Its development team looks decidedly newbie-friendly, and the job is one of the most interesting (if a bit underpaid) i’ve found lately. Right now, YHC seems to be under (heavy, one would say) development. It’s based on nhc98, but already outperforms it (as well as the Hugs interpreter). Another interesting trait: it compiles to cross-platform bytecode, runnable from a small runtime system (and there’s even a little surprise for all you pythonistas). This nice presentation (PDF) gives some further details on the system’s innards.

Definitely, worth a closer look.

Haskell goes mainstream

Well, if this LtU news item is to be believed, as of a couple of weeks ago Linspire is using Haskell as its main OS development language:

The OS team at Linspire, Inc. would like to announce that we are standardizing on Haskell as our preferred language for core OS development.

We are redoing a bunch of our infrastructure using Haskell as our common standard language. Our first task is redoing our Debian package builder (aka autobuilder) in Haskell. Other tools such as ISO builders, package dependency checkers are in progress. The goal is to make a really tight simple set of tools that will let developers contribute to Freespire, based on Debian tools whenever possible. Our hardware detector, currently in OCaml, is on the block to be rewritten as well.

This surprising announcement was made on the Debian Haskell mailing list, since Linspire (and its free variant, Freespire) is based on Debian. This is what these brave developers have to say about the typical worries raised against such a move:

I mention Freespire because some of our colleagues were concerned that using Haskell would isolate us from the larger community of developers and make it hard to find new employees skilled in Haskell, should we need to. From our perspective, functional programming makes us more effective and we think that getting even a few people who know Haskell hacking with us is a better combination than lots of Perl and bash.

If it is a belated April Fools’ joke, please someone tell me!

A Haskell bookshelf

My favourite scheme implementation is scheme48, which takes its name from its being initially implemented in 48 hours by Richard Kelsey and Jonathan Rees in August 1986. They used Common Lisp on a Symbolics 3600 and Maclisp on a PDP-10.

Now, thanks to Jonathan Tang’s tutorial, you can write yourself a scheme in 48 hours, but using Haskell instead of Lisp. The tutorial is intended as an introduction to Haskell for schemers (and (brave) programmers in general) wanting to get started with the purest member of the functional family. And it is extremely fun for, instead of following the boring language-features-overview route, it plunges right into interesting, hands-on stuff like parser combinators (one of the most beautiful applications of Haskell, if you ask me) or monads. The lessons include exercises, so you really have no excuse for not learning Haskell once and for all. Believe me, it’s good for your (mental) health.
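A small taste of what awaits (the snippet below is in the spirit of the tutorial’s Parsec code, reconstructed from memory, so take the details with a grain of salt):

import Text.ParserCombinators.Parsec

-- Characters allowed in Scheme symbols besides letters and digits.
symbolChar :: Parser Char
symbolChar = oneOf "!#$%&|*+-/:<=>?@^_~"

-- An atom: letters, digits and symbol characters, one or more of them.
atom :: Parser String
atom = many1 (letter <|> digit <|> symbolChar)

Tiny parsers like these compose into a full reader for s-expressions in a page of code.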

If you’re totally new to Haskell and find the pace of Jonathan’s tutorial a bit hard to follow, you may start with a tour of the Haskell syntax and the Hitchhikers’ guide to Haskell, after making sure, of course, that you have your towel at hand. As for me, well, i’ve been planning to learn Haskell for some years now and, since i happen to love books, i have accumulated a little (but select) Haskell library, together with a sort of learning roadmap that, unfortunately, i have yet to complete. Just in case the above links whetted your appetite and you happen to have more time than i do, here you have my bookworm guide to Haskell enlightenment.

The little Haskeller

There are many introductory Haskell books, but i was (at the time i started learning) keen on those that also provide a good basis in functional programming (which was a totally new world for me back then). Therefore, my first Haskell book, and still a wholehearted recommendation, was Richard Bird and Philip Wadler‘s Introduction to Functional Programming Using Haskell, one of the most elegant books on programming i’ve ever read.

One of the nicest things about writing Haskell code is that it’s probably the closest one can get to writing pure maths while programming. Thus, learning Haskell is an excellent way to learn more about maths and logic. If that sounds to you like doubling the fun (and you’re not yet a maths wizard), The Haskell Road to Logic, Maths and Programming (see also this extensive review (PDF)) by Jan van Eijck and Kees Doets is definitely the book for you. Its purpose is to teach logic and mathematical reasoning in practice, and to connect logical reasoning with Haskell programming. And, in my opinion, it does a pretty good job at that. Although it begins with the very basics, it includes chapters on far from trivial (i.e., fun!) stuff like, for instance, corecursion or Cantor sets. And i found it amazing how natural it is to express logical and mathematical ideas in Haskell.
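Corecursion, in particular, feels completely at home in a lazy language; the whole Fibonacci sequence in one line is the canonical example (standard folklore, not the book’s own text):

-- An infinite, lazily produced list defined in terms of itself.
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

-- take 8 fibs ==> [0,1,1,2,3,5,8,13]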

But maybe you’d prefer to use Haskell to learn a bit of applied computer science, instead of abstract maths. No problem: just grab a copy of Fethi Rabhi and Guy Lapalme‘s Algorithms: a functional programming approach, which challenges the academic establishment by teaching algorithms using Haskell, that is, in a purely functional context. Revisiting classical (and apparently imperative) algorithms like sorting, tree and graph traversal or dynamic programming methods under a functional light is a refreshing experience, and an eye-opener: the fact that all the typical algorithms are covered in just 256 pages is a testament to the authors’ claim that functional programming leads to smaller, clearer and more elegant programs. Besides, as you probably know, Haskell features lazy evaluation, which poses entirely new (and pretty instructive) challenges when it comes to evaluating the efficiency of an algorithm.
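The canonical poster child for that claim (not from the book, and certainly not the fastest rendition, but the clearest i know of):

-- Sorting as a two-line specification that also runs.
qsort :: Ord a => [a] -> [a]
qsort []     = []
qsort (x:xs) = qsort [y | y <- xs, y < x] ++ [x] ++ qsort [y | y <- xs, y >= x]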

The seasoned haskeller

Once you’re comfortable with the language and have learnt all about monads, the best thing to do is to study non-trivial Haskell applications. To that end, you can hardly find a better book than The Fun of Programming, a Festschrift celebrating Richard Bird’s sixtieth birthday. Including thirteen chapters by luminaries in the field, this book describes fun applications of such fun things as musical composition or graphical design, as well as covering advanced programming techniques such as data structure design, interpreters or optimization. Amazing.

I also own a copy of Paul Hudak‘s The Haskell School of Expression, which is often recommended as a Haskell primer. In my opinion, it goes too quickly into the nitty-gritty details of (music and multimedia) implementations to be a good first book, but it is probably a good second book on Haskell programming. But that’s just me: as you can read in the review linked above, some people have diverging opinions.

The fun never ends…

After all these readings, and a bit more about monads, you’ll begin to feel an expert, and may be interested in more advanced stuff. I’ve got a couple of books reserved for that moment. The first one is The Algebra of Programming, by Richard Bird and Oege de Moor, whose purpose, simply stated, is to show how to calculate programs algebraically using an old friend of ours: category theory, including the bananas that will be the main theme of my second post on these matters. There you’ll find an in-depth, abstract treatment of algorithmic strategies with an eye on correctness proofs. Although heavy on theory, there are interesting applications of this algebra of programming, like the conversion between binary and decimal numbers in TeX, the best way of drawing a cylinder in LaTeX, or good data compression algorithms. A slight nuisance is that this book uses an old Haskell dialect (a precursor, actually) called Gofer, but the conversion to modern notation is pretty straightforward.
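In case you haven’t met the bananas yet, they are just folds, named after the (| |) brackets used to write them; the list version is ordinary foldr, the prototypical recursion scheme one calculates with (a standard definition, not the book’s notation):

-- The list catamorphism: replace (:) by f and [] by z.
cata :: (a -> b -> b) -> b -> [a] -> b
cata _ z []     = z
cata f z (x:xs) = f x (cata f z xs)

-- Two programs as one-line instances of the same scheme:
sumL, lenL :: [Integer] -> Integer
sumL = cata (+) 0
lenL = cata (\_ n -> n + 1) 0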
Finally, anyone serious about algorithms and data structures should read Chris Okasaki’s Purely Functional Data Structures, an amazing tour of advanced algorithmics using functional programming languages, with source code in SML and Haskell. The exposition is extremely elegant and synthetic, and i’ve spent many, many hours immersed in its 220 pages (i can’t think of any other programming book with a better signal to noise ratio, really).
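As an appetizer, here’s perhaps the simplest trick in that school of thought (folklore by now; the rendition is my own): a persistent FIFO queue built from two lists, with amortized constant-time operations:

-- Front list for popping, back list (kept reversed) for pushing.
data Queue a = Queue [a] [a]

emptyQ :: Queue a
emptyQ = Queue [] []

push :: a -> Queue a -> Queue a
push x (Queue front back) = Queue front (x : back)

-- When the front runs dry, reverse the back once; each element
-- is reversed at most once, hence amortized O(1) per operation.
pop :: Queue a -> Maybe (a, Queue a)
pop (Queue [] [])   = Nothing
pop (Queue [] back) = pop (Queue (reverse back) [])
pop (Queue (x:f) b) = Just (x, Queue f b)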

That’s it. I just hope that 48-hour days get invented some time soon. Happy reading!