SICP distilled

If you’re still thinking about reading SICP, here’s an article, written by Abelson and Sussman shortly after publishing the first edition of the book, that may dispel your doubts: Lisp: A language for stratified design. It’s a quick distillation of some of the central themes discussed at length in SICP, with accompanying code. Some of them may seem old hat these days (the article was published in 1987), but they’re as relevant as they were back then, and it’s difficult to find them presented as well (let alone better). Its dozen pages are full of quotable pearls of wisdom. For instance, right from the start:

Just as everyday thoughts are expressed in natural language, and formal deductions are expressed in mathematical language, methodological thoughts are expressed in programming languages. A programming language is a method for communicating methods, not just a means for getting a computer to perform operations–programs are written for people to read as much as they are written for machines to execute.

and later on

[...] if a methodology is to be robust, it must have more generality than is needed for the particular application. The means for combining the parts must allow for after-the-fact changes in the design plan as bugs are discovered and as requirements change. It must be easy to substitute parts for one another and to vary the arrangement by which parts are combined.

The examples include Henderson’s picture language, which motivates a discussion of metalinguistic abstraction (or DSLs, as they’ve been rediscovered these days):

Part of the wonder of computation is that we have the freedom to change the framework by which the descriptions of processes are combined. If we can precisely describe a system in any well-defined notation, then we can build an interpreter to execute programs expressed in the new notation, or we can build a compiler to translate programs expressed in the new notation into any other programming language.

There’s also a synopsis of the implementation, in Scheme, of a rule language geared towards simplifying algebraic expressions, together with its accompanying interpreter (a toy sketch of the idea appears at the end of this post). The conclusion is that you want to write languages and interpreters, in a variety of paradigms; and that you probably want to write them using Lisp:

The truth is that Lisp is not the right language for any particular problem. Rather, Lisp encourages one to attack a new problem by implementing new languages tailored to that problem. Such a language might embody an alternative computational paradigm [...] A linguistic approach to design is an essential aspect not only of programming but of engineering design in general. Perhaps that is why Lisp [...] still seems new and adaptable, and continues to accommodate current ideas about programming methodology.

As true today as it was twenty-odd years ago, if you ask me.
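To make the rule-language idea concrete, here’s a toy simplifier in the article’s spirit; this is my own reconstruction from memory, not the paper’s actual code:

;; Pattern variables are symbols starting with `?'.
(define (pattern-var? x)
  (and (symbol? x)
       (char=? (string-ref (symbol->string x) 0) #\?)))

;; Match pattern against expression, extending the bindings alist;
;; return #f on failure.
(define (match-pat pat exp bindings)
  (cond ((not bindings) #f)
        ((pattern-var? pat)
         (let ((b (assq pat bindings)))
           (cond ((not b) (cons (cons pat exp) bindings))
                 ((equal? (cdr b) exp) bindings)
                 (else #f))))
        ((and (pair? pat) (pair? exp))
         (match-pat (cdr pat) (cdr exp)
                    (match-pat (car pat) (car exp) bindings)))
        ((equal? pat exp) bindings)
        (else #f)))

;; Instantiate a rule's right-hand side with the bindings found.
(define (instantiate skel bindings)
  (cond ((pattern-var? skel) (cdr (assq skel bindings)))
        ((pair? skel) (cons (instantiate (car skel) bindings)
                            (instantiate (cdr skel) bindings)))
        (else skel)))

(define rules
  '(((+ ?x 0) . ?x) ((+ 0 ?x) . ?x)
    ((* ?x 1) . ?x) ((* 1 ?x) . ?x)
    ((* ?x 0) . 0)  ((* 0 ?x) . 0)))

;; Simplify bottom-up, applying the first matching rule at each node.
(define (simplify exp)
  (let ((exp (if (pair? exp) (map simplify exp) exp)))
    (let loop ((rs rules))
      (cond ((null? rs) exp)
            ((match-pat (caar rs) exp '())
             => (lambda (b) (simplify (instantiate (cdar rs) b))))
            (else (loop (cdr rs)))))))

;; (simplify '(+ (* x 1) (* y 0))) => x

The article’s language is considerably richer, but this pattern/skeleton shape is the gist of it.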

Scheme lectures, mostly

Here’s a list of (mostly) scheme video lectures, based on the links posted in this thread of the PLT mailing list, with some extra bits taken from my own collection:

  • SICP lectures by Abelson and Sussman (see also here for more lightweight versions), still my all-time favourite lecture series in any field.
  • Also based on SICP, Brian Harvey’s course at UCB is quite fun (and, while you’re at it, you may find other UCB courses interesting too). And the ADU course comes with lots of additional notes and materials, although i’ve only skimmed over it and i don’t really know how good it is. But let me tell you that, if it’s as good as Shai Simonson‘s course on Theory of Computation, it’s probably worth your time.
  • All videos from this year’s ICFP are available online. Among them, one can find Jay McCarthy’s talk on RESTful webapps in Scheme using serializable continuations; Matthew Flatt on PLT’s Scribble documentation system; Matthias Felleisen on how he and his collaborators are using purely functional programming for teaching kids and making them have fun; and Ralf Hinze’s La Tour D’Hanoï, which is not about scheme but is a pearl anyway (as is Ryan Newton’s report on how he used functional programming to implement an embedded bird detector via a parallel DSL, with some metaprogramming thrown in for good measure).
  • During the latest GNU Hackers meeting, Andy Wingo talked about recent developments on Guile (bittorrent file).
  • Robby Findler’s Why macros matter is a nice little introduction to how one can use macros not only in cases that lazy languages handle gracefully, but, more importantly, as a full-fledged language definition device.

  • A clip featuring Shriram Krishnamurthi, where he introduces WeScheme, a pretty interesting scheme-in-a-browser environment based on PLT’s Moby platform, which Shriram presented at the latest ILC; although there’s no video of that talk available, you can get the slides and a sound recording here (as is usually the case with Shriram, definitely worth your time).
  • A talk by Matthias Felleisen on the evolution of Northeastern University’s CS curriculum, where scheme plays a central role.

  • The DanFest videos, which i’ve mentioned before once or twice. Virtually all lectures in this series are worth watching but, if Robby’s talk above piqued your curiosity, don’t miss Kent Dybvig’s Macro Writer’s Bill of Rights video and slides.


  • And i cannot help mentioning Gerry Sussman’s The role of programming in the formulation of ideas (and its accompanying article), as well as Oleg Kiselyov’s Normal-order syntax-rules and proving the fix-point of call/cc.

  • As an aside, if you like Gerry as much as i do, you might be interested in hearing him share his enthusiasm for mechanical watches for a change. Or, if you insist on staying (more or less) on topic, take a look at his The Legacy of Computer Science, for Gerry’s take on how CS is making us smarter.
  • Guy Steele’s Designing by Accident is a very interesting history of how Scheme came to life and of its relationship with actors, for which you’ll need the accompanying slides. Although not specifically about Scheme, Steele’s Growing a language is also a must see.

  • You will have to tell me how good Matthew Flatt’s Processes without partitions is, because it seems to be available only in formats that i cannot play on debian.
  • In The 90 minute Scheme to C compiler talk, Marc Feeley shows how closure conversion and CPS conversion can be used to put together a Scheme compiler (ninety minutes being the time needed to explain it, mind you); a tiny illustration of the CPS idea appears right after this list.

  • Those of you new to scheme and functional programming may enjoy this lecture by Jerry Cain, part of Stanford’s course on Programming Paradigms, which discusses functional programming using Kawa scheme. Not stellar, but not bad as an introduction.

  • Finally (for now), some fun livecoding with Fluxus. And don’t miss this gallery for some really beautiful examples.
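As promised above, here’s a tiny taste of the CPS idea from Feeley’s talk; this is my own illustration, not code from the talk:

;; The same function, first in direct style and then rewritten in
;; continuation-passing style, where every call becomes a tail call
;; (which is what makes compilation strategies like Feeley's tick).
(define (fact n)
  (if (= n 0) 1 (* n (fact (- n 1)))))

(define (fact-cps n k)
  (if (= n 0)
      (k 1)
      (fact-cps (- n 1) (lambda (r) (k (* n r))))))

;; (fact-cps 5 (lambda (x) x)) => 120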


Comprehensive Hutton

After yawning during the first 10 installments of the series, i’ve just watched a really beautiful talk (with accompanying slides and literate code) by guest star Graham Hutton in the C9 Lectures webcast [0]. Graham devotes it to showing how to solve the countdown puzzle for numbers (find ways of combining n integers to obtain a target one, using basic arithmetic operations) in Haskell. The lecture is a very nice showcase of list comprehensions and how they allow the elegant expression of solutions to certain kinds of problems (basically, tree searches in this case), without precluding incremental optimizations (pruning of the search space in the problem at hand). In addition, you’ll see a very simple, but enlightening, example of program fusion as a technique to boost performance. The code is beautiful and straightforward and, most notably, retains those qualities even after optimization.

Hutton is a good speaker and teacher, and the lecture is accessible to anyone with a bare-bones knowledge of Haskell. And the fun doesn’t end with the video: there’s also an accompanying paper, most aptly included in the functional pearls series, providing all the details, code and proofs. The problem is also discussed in Graham’s book which, judging from the quality of the paper, must be an excellent way to introduce yourself to Haskell and functional programming.
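To give a flavour of the approach, here’s a minimal brute-force version of the solver, reconstructed from memory of Hutton’s pearl (his actual code differs in details, and the lecture goes on to derive the optimised variants):

import Data.List (permutations, subsequences)

data Op = Add | Sub | Mul | Div deriving Show
data Expr = Val Int | App Op Expr Expr deriving Show

-- Keep only applications yielding positive naturals, as the puzzle demands.
valid :: Op -> Int -> Int -> Bool
valid Add _ _ = True
valid Sub x y = x > y
valid Mul _ _ = True
valid Div x y = y /= 0 && x `mod` y == 0

apply :: Op -> Int -> Int -> Int
apply Add = (+)
apply Sub = (-)
apply Mul = (*)
apply Div = div

-- All ways of splitting a list into two non-empty halves.
split :: [a] -> [([a], [a])]
split xs = [(take n xs, drop n xs) | n <- [1 .. length xs - 1]]

-- Expression/value pairs buildable from the given numbers, used in
-- the given order; pairing generation with evaluation is (morally)
-- the fusion step mentioned above.
results :: [Int] -> [(Expr, Int)]
results []  = []
results [n] = [(Val n, n) | n > 0]
results ns  = [(App o l r, apply o x y) | (ls, rs) <- split ns
                                        , (l, x) <- results ls
                                        , (r, y) <- results rs
                                        , o <- [Add, Sub, Mul, Div]
                                        , valid o x y]

-- Solutions may use any subset of the numbers, in any order.
solutions :: [Int] -> Int -> [Expr]
solutions ns n =
  [e | ns' <- choices ns, (e, m) <- results ns', m == n]
  where choices = concatMap permutations . subsequences

-- e.g. head (solutions [1, 3, 7, 10, 25, 50] 765)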

----
[0] I keep reading praise for these lectures, and everybody seems to adore E. Meijer. Yet i must confess that i find these lectures incredibly bad: to my taste, the lecturer is dull, often misses the point, keeps drawing dubious parallelisms and, every now and then, just makes plain (and, i’m tempted to say, glaring) conceptual errors. At the risk of losing all my credibility, if any, i wholeheartedly recommend that you not spend any time on the first ten lectures of this series. I’d take any talk by SPJ any day: that is passion and insight. Or by Hutton, for that matter.

The (PLT) future is here

James Swaine has just announced the availability of futures for mzscheme. Still work in progress, this is the first step in native-thread support for PLT Scheme, in the form of a parallelisation library. More concretely, we learn from the documentation that the futures API “represents a best-effort attempt to execute an arbitrary segment of code in parallel”. One creates a future by passing a thunk to the future constructor, and the thunk starts running immediately in parallel. Non-parallelisable operations are detected, and cause the corresponding thread to block until the main thread touches the future: one still needs to write some code to orchestrate the parallel gig. A bit rough, but this is just a pre-alpha release: i’m sure things will only get better and better!

As mentioned, this new functionality is only available for mzscheme, so you’ll need to check out the code from svn and configure the tree with an incantation along the lines of:

$ mkdir build
$ cd build
$ ../src/configure --enable-futures --disable-mred
$ make && make install

That worked for me. Afterwards, i wrote the CPU burner suggested by James:

#lang scheme

(require scheme/future)

;; a thunk that spins forever, pegging whatever core runs it
(define (loop) (loop))

;; spawn one future per processor, then touch them all
(define (run)
  (for-each
   touch
   (for/list ([i (in-range 0 (processor-count))])
     (future loop))))
(run)

and, lo and behold, mzscheme cpu-burner.ss is making my two little cores beat at top speed.

Of course, haskellers will hardly be moved: runtime support for multicores using sparks is already available in ghc, with a more robust implementation.
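For comparison, a minimal sparks example might look like this (my own sketch; it assumes the parallel package, and needs compiling with -threaded and running with +RTS -N2 to actually use both cores):

import Control.Parallel (par, pseq)

main :: IO ()
main = a `par` (b `pseq` print (a + b))
  where
    -- spark the evaluation of a on another core while
    -- this thread works on b
    a = sum [1 .. 50000000] :: Integer
    b = sum [1 .. 60000000] :: Integer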

Happy parallel hacking!

Enjoying Haskell

I’ve been reading about Haskell quite a bit during the last months, writing some actual code, and liking the language more and more. After many years favouring dynamically typed languages, i’m beginning to really appreciate Haskell’s type system and the benefits it brings to the table.

A common argument from the static typing camp is that the compiler catches a whole class of bugs for you, to which the dynamic typing camp answers that a good test suite (which you need anyway for any serious development) will catch those relatively trivial bugs for you. I tend to agree with the dynamic faction on this issue, but then i think that the strength of static typing (coupled with good type inference) is not at all about the compiler catching typing bugs but, rather, about enforcing useful constraints. When you write Haskell, you have to think hard about your data types and the functions using them; the compiler will keep complaining and, most importantly, the code will feel awkward and somehow ad hoc until you find a good set of types to solve your problem.

The limits to your freedom imposed by the type system entail, in my experience, a boost in the amount of thought and imagination that i put into my designs and implementations, in much the same way as the constraints imposed by metre and rhythm on poetic writing boost creativity and actually help to produce a beautiful text. Or, in fact, in the same way as any kind of constraint in any creative endeavour helps (paradoxically, at first sight) in attaining beauty or, at least, in having fun during the process.

In my experience, the process of writing a program or library in any language is always a struggle to find the right way of expressing the solution to a problem, as the culmination of a series of approximations. The code feels better, more expressive and easy-flowing with each rewrite, until something just clicks and i get this feeling that i’ve finally captured the essence of the problem (a litmus test being that it then feels natural to extend the solution to cases i hadn’t thought of when writing it, as if, somehow, the new solutions were already there, waiting for you to discover them). And i’m finding that the powerful type system offered by Haskell (think not only of vanilla Hindley-Milner, but also of extensions such as GADTs or type families) is helping me reach the (local) optimum quicker, and that satisfying its constraints means i’m closer to the final solution when my code compiles for the first time. You often hear Haskell programmers saying something similar (“once my code compiles, it works”), and i think it’s mostly true, except that the real reason is not that the compiler is catching trivial typing bugs but, rather, that the constraints imposed by the type system are making you think harder and find the right solution. Same thing with monads, and the clean separation they provide for stateful computations: again, you must think carefully about the right combination of monads and pure code to solve the problem, and most of the time your code will simply not type check if you don’t get the solution right.
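As a small illustration of the kind of constraint i have in mind (a textbook example, not code from any program of mine), GADTs let you define expression trees that simply cannot be built ill-typed:

{-# LANGUAGE GADTs #-}

-- the type index ties each constructor to the type its
-- expression evaluates to
data Expr a where
  IntE  :: Int -> Expr Int
  BoolE :: Bool -> Expr Bool
  Add   :: Expr Int -> Expr Int -> Expr Int
  If    :: Expr Bool -> Expr a -> Expr a -> Expr a

-- the index pays off here: eval is total, with no runtime checks
eval :: Expr a -> a
eval (IntE n)   = n
eval (BoolE b)  = b
eval (Add x y)  = eval x + eval y
eval (If c t e) = if eval c then eval t else eval e

-- Add (IntE 1) (BoolE True) is rejected at compile time

The point being that eval needs no runtime checks precisely because the types have already ruled the bad cases out.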

There are two more ways in which Haskell’s type system is helping me write better programs; two ways that are especially relevant when the code base becomes sizeable enough. The first one is self-documentation: seeing the type of my functions (or asking the interpreter for it) instantly tells me almost everything i need to know to use them; in fact, when writing in dynamic languages i keep annotating function signatures with this same information, only that there i’m all by myself to ensure that this information is right. PLT’s contract system is but a recognition of the usefulness of typing in this regard, although i much prefer the terseness and notational elegance of Haskell’s type signatures over the much more verbose and, to my eyes, somewhat clunky notation used by PLT (which is not really PLT’s fault, being as it is a very schemish notation). Let me stress here that having a REPL such as ghci is a godsend (and, to me, a necessity for really enjoying the language): it will tell me the type of an expression in much the same way as decent Lisp or Scheme environments will report a function’s signature.
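For instance, a one-liner in ghci is all it takes to recover a function’s full story:

Prelude> :t map
map :: (a -> b) -> [a] -> [b]
Prelude> :t map (+ 1)
map (+ 1) :: (Num b) => [b] -> [b]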

The second way Haskell lends a helping hand with non-trivial code bases is refactoring. As i mentioned above, i rewrite my programs several times as a rule, and rewrites almost always involve modifying data structures or adding new ones. As i grow older, i find it more and more difficult to keep in my head all the places and ways a given data structure is used in my programs, and with dynamic languages i often fall back to grepping the source code to find them. And again, their plasticity often works against me, in that they let me use those data structures in crooked ways, or forget to take into account new fields or constructors for a modified data type. Haskell’s compiler has proved an invaluable ally in my refactorings and, by comparison, modifying and maintaining my bigger dynamic programs is not as fun as it used to be.

As an aside, types are not the only thing i’m finding enjoyable about Haskell. Its astonishing capability to express very abstract problems with a remarkable economy of expression (due, in part, to its highly tuned syntax) is extremely useful. To my mind, it mimics the process by which in math we solve harder and harder problems by abstracting more and more, cramming more relevant information into less space (some cognitive science writers will tell you that thought, and even consciousness, consists in our ability to compress information). That means that i can express my solutions by capturing them in very high-level descriptions: initially, that makes them harder to understand, but once i feel comfortable with the basic concepts and operations, they scale up much, much better than more verbose, less sophisticated ones. Using these new hard-earned concepts, i can solve much harder problems without adding to the complexity of the code in a significant way (one could say, using a loose analogy, that the solutions grow logarithmically with complexity instead of polynomially or exponentially). A direct consequence of this expressiveness is that some well-written Haskell programs are, hands down, the most beautiful pieces of code i’ve ever seen (just pick a random post at, say, A Neighborhood of Infinity and you’ll see what i mean; or read Richard Bird’s Sudoku solver and compare his solution with one written in your favourite programming language).

Finally, let me say that i find programming in Haskell more difficult than programming in any other language i’ve used, with perhaps the only exception of Prolog. Sometimes, considerably so. And that’s perfectly fine with me. For one thing, it makes it more interesting and rewarding. In addition, i’m convinced that that’s the price to pay for being able to solve harder problems. I take issue with the frequent pleas to the effect that programming should be effortless or trivial: writing good programs is hard, and mastering the tools for doing it well takes, as in any other engineering or scientific discipline, hard work (why, i haven’t heard anyone complaining that building bridges or computing the effects of gravitational lensing is too difficult). There’s no silver bullet.

All that said, please don’t read the above as an apostasy letter announcing the embrace of a new religion. There’s still much to be said in favour of dynamic languages, especially those in the Lisp family, whose malleability (fostered by their macro systems) is also a strength, in that they allow you to replicate some of the virtues i’ve been extolling in this post. Haskell lacks the power of homoiconicity, and its template mechanisms feel rather clunky by comparison, which is a serious drawback in some contexts (i have yet to decide how serious, as i have yet to decide how much i’m missing in reflection capabilities). As always, it is a matter of trade-offs and, fortunately, nobody will charge you with high treason for using the language best fitted to the problem at hand, or so i hope.
