These are the posts and pages that got the most views in 2010.

A Haskell bookshelf (March 2006), 7 comments
A Scheme bookshelf (January 2007), 18 comments
Scheme lectures, mostly (December 2009), 7 comments
Programmers go bananas (March 2006), 23 comments
Geiser (May 2009), 35 comments
Data and procedures and the values they amass,
Higher-order functions to combine and mix and match,
Objects with their local state, the messages they pass,
A property, a package, the control point for a catch–
In the Lambda Order they are all first-class.
One Thing to name them all, One Thing to define them,
One Thing to place them in environments and bind them,
In the Lambda Order they are all first-class.
(Hat tip Mark H Weaver over at guile-devel)
My favourites were the two invited papers. The first day, Olin Shivers presented a pretty cool hack in a delicious talk that consisted of actually writing the code he was explaining. Under the title Eager parsing and user interaction with call/cc, he showed us how call/cc is not just an academic toy, but can be put to good use in writing a self-correcting reader for s-expressions on top of the host scheme read (or any other parser, for that matter). He started from the very basics, explaining how input handling and buffering is usually delegated to the terminal driver, which offers a rather dumb, line-oriented service. Wouldn't it be nice if, as soon as you typed an invalid character (say, a misplaced close paren), the reader complained, instead of waiting for the whole line to be submitted to read? Well, all we need to do is implement the input driver in scheme, and he proceeded to show us how. As you know, reading s-expressions is a recursive task, meaning that when you detect invalid input in the middle of a partial s-expression, or want to delete a character, you might find yourself deep inside a stack of recursive calls and need to backtrack to a previous checkpoint. That's an almost canonical use case for call/cc, provided you use it intelligently. Let me tell you that Olin is quite capable of using call/cc as it's meant to be used, as he immediately demonstrated. I'm skipping the details in the hope that a paper will soon be available. As i mentioned, his talk was a beautiful example of live coding: he showed us the skeleton of the implementation and filled it in as he explained how it should work. Olin does know how to write good code, and it was a pleasure (and a lesson) watching him do just that. It was all so schemish: a terminal and emacs in the venerable twm are all you need to create beauty.
The second invited talk was by Robby Findler, who gave us a tour of Racket's contract system, how to use it and the subtleties of implementing it properly. The basic idea dates back to Meyer's design-by-contract methodology of the early nineties, and was subsequently explored further by several authors, including Robby. Simple as they sound at first sight, good contracts are not trivial to implement. For instance, it's vital to assign blame where blame is due, and Robby gave examples of how tricky that can get (and how Racket's contracts do the right thing). Another subtlety arises when you try to write contracts checking a property of an input data structure (say, you want to ensure that an argument is actually a binary search tree). The problem here is that checking the contract can alter the asymptotic complexity of the wrapped function (e.g., you can go from O(log n) to O(n) in a lookup, an exponential degradation). Racket provides an ingenious fix for that problem, by means of lazy contracts that are checked as the input is traversed by the “real” function.
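To give an idea of what blame assignment looks like (a toy example of mine, not one from the talk), here's a Racket contract at a definition boundary; when the agreement is broken, the error points at the guilty party:

    #lang racket

    ;; The contract is the agreement at safe-div's boundary: callers
    ;; promise a non-zero second argument, and safe-div promises a
    ;; number back.  Whoever breaks their side gets the blame.
    (define/contract (safe-div x y)
      (-> number? (and/c number? (not/c zero?)) number?)
      (/ x y))

    (safe-div 10 2) ; => 5
    (safe-div 10 0) ; contract violation, with the blame falling on
                    ; the caller, not on safe-div itself

The lazy data-structure contracts mentioned above play by the same blame rules, only deferring each node's check until the wrapped function actually visits it.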
There was also real-time scheme in Robby's talk, although in a much more sophisticated way than Olin's bare terminal, thanks to Slideshow's magic, which lets you embed code files and snippets in a presentation, evaluate them, and show the results in the same or a new slide. Very elegant. He also used DrRacket a bit during the introductory part of his talk, and i'm starting to understand why some people are so happy with Racket's IDE: it definitely felt, in his hands, professional and productive. And also kind of fun.
There were also lightning talks. I'm of course biased, but the one i enjoyed most was Andy Wingo's Guile is OK!, where he showed us how Guile has overcome the problems, perceptual and real, of its first dozen years. For instance, he reminded us how Guile was traditionally a “defmacro scheme”, and he himself a “defmacro guy”… until he studied Dybvig's work in earnest and ported his syntax-case implementation to Guile, becoming a “syntax-case man” as Guile gained full syntax-case support (i hope i'll reach that nirvana some day; i still find syntax-case too complex and plagued by unintuitive corner cases (one of them was shown by Aaron Hsu in another lightning talk, where apparently none of us was able to correctly interpret 10 lines of scheme) that make me uneasy; but that's surely just ignorance on my part). There are many other things that make Guile a respectable citizen of the Scheme Underground, which were also listed in Andy's talk: i'll ask him for a PDF, but in the meantime you can just try Guile and see :).
Although this time we didn't have a talk by Will Clinger, to me it's always a pleasure to listen to what he has to say, even if only as comments to other people's talks. For instance, i enjoyed his introduction to Alex Shinn's R7RS progress report. Will showed us three one-dollar coins, of the same size and shape but different, as he described, in almost everything else. And yet, all three were useful and recognised as (invalid) dollars by the Canadian vending machines at the entrance. He thinks that says something about standards, but he left it to us to decide exactly what.
Finally, let me mention that this workshop has alleviated all my quibbles with what i've sometimes perceived as a fragmented, almost dysfunctional community, made up of separate factions following their own paths in relative isolation. My feeling during the workshop was nothing of the sort; rather, i'm back with the conviction that there's much more uniting us than breaking us apart, and that there's such a thing as a scheme underground ready to take over the world. Some day.
When we created mozilla.org and released (most of) the source code to Netscape Confusicator 4.x, Netscape’s lawyers made us go through a big “sanitization” process on the source code. [...] they also made us take out all the dirty words. Specifically, “any text containing vulgar or offensive words or expressions; any text that might be slanderous or libelous to individuals and/or institutions.”
There follows a list of code snippets and comments full of heavy swearing. Strikingly, one of the censored words is hack. WTF?
Measure your OODA loop [observe, orient, decide and act loop] in all the languages you know. See which one cycles fastest. I’d bet that’s your favorite language.
I also happen to like the movies he recommends ;-)
Update: See also Patent Absurdity for a 30 min video where Eben Moglen, Karen Sandler and others talk about the history and present of software patents.
Update:I’ve just learnt, thanks to a comment below, that there’s actually an earlier chess program, written by Peter Jennings, of similar characteristics: Microchess for the Kim-1 computer. The linked webpages contain lots of information about the program, including listings. Pretty fun stuff.
Thanks!
Just as everyday thoughts are expressed in natural language, and formal deductions are expressed in mathematical language, methodological thoughts are expressed in programming languages. A programming language is a method for communicating methods, not just a means for getting a computer to perform operations – programs are written for people to read as much as they are written for machines to execute.
and later on
[...] if a methodology is to be robust, it must have more generality than is needed for the particular application. The means for combining the parts must allow for after-the-fact changes in the design plan as bugs are discovered and as requirements change. It must be easy to substitute parts for one another and to vary the arrangement by which parts are combined.
The examples include Henderson’s picture language, which motivates a discussion of metalinguistic abstraction (or DSLs, as rediscovered these days):
Part of the wonder of computation is that we have the freedom to change the framework by which the descriptions of processes are combined. If we can precisely describe a system in any well-defined notation, then we can build an interpreter to execute programs expressed in the new notation, or we can build a compiler to translate programs expressed in the new notation into any other programming language.
There’s also a synopsis of the implementation, in Scheme, of a rule language geared at simplifying algebraic expressions, and the accompanying interpreter. The conclusion is that you want to write languages and interpreters, in a variety of paradigms; and that you probably want to write them using Lisp:
The truth is that Lisp is not the right language for any particular problem. Rather, Lisp encourages one to attack a new problem by implementing new languages tailored to that problem. Such a language might embody an alternative computational paradigm [...] A linguistic approach to design is an essential aspect not only of programming but of engineering design in general. Perhaps that is why Lisp [...] still seems new and adaptable, and continues to accommodate current ideas about programming methodology.
As true today as it was twenty-odd years ago, if you ask me.
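To make that concrete, here's a toy rule interpreter in that spirit (my own sketch, not the paper's code): rules are plain data, pattern variables are symbols starting with ?, and a small interpreter does the matching and rewriting, so adding an algebraic law is a one-line change that never touches the interpreter:

    ;; Each rule is a (pattern . replacement) pair; ?x matches anything.
    (define rules
      '(((+ ?x 0) . ?x)     ; additive identity
        ((* ?x 1) . ?x)     ; multiplicative identity
        ((* ?x 0) . 0)))    ; annihilation

    (define (variable? s)
      (and (symbol? s)
           (char=? (string-ref (symbol->string s) 0) #\?)))

    ;; Match EXP against PAT, extending the bindings in ENV, or 'fail.
    (define (match pat exp env)
      (cond ((eq? env 'fail) 'fail)
            ((variable? pat) (cons (cons pat exp) env))
            ((and (pair? pat) (pair? exp))
             (match (cdr pat) (cdr exp) (match (car pat) (car exp) env)))
            ((equal? pat exp) env)
            (else 'fail)))

    ;; Build the replacement, substituting the matched bindings.
    (define (instantiate skel env)
      (cond ((variable? skel) (cdr (assq skel env)))
            ((pair? skel) (cons (instantiate (car skel) env)
                                (instantiate (cdr skel) env)))
            (else skel)))

    ;; Simplify bottom-up, rewriting until no rule applies.
    (define (simplify exp)
      (let ((exp (if (pair? exp) (map simplify exp) exp)))
        (let try ((rs rules))
          (if (null? rs)
              exp
              (let ((env (match (caar rs) exp '())))
                (if (eq? env 'fail)
                    (try (cdr rs))
                    (simplify (instantiate (cdar rs) env))))))))

    ;; (simplify '(+ (* x 1) 0))  =>  x

The division of labour is exactly the after-the-fact flexibility argued for above: the rules vary freely while the combining machinery stays put.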
And i must also mention Gerry Sussman's The role of programming in the formulation of ideas (and its accompanying article), as well as Oleg Kiselyov's Normal-order syntax-rules and proving the fix-point of call/cc, for which you'll need the accompanying slides. Although not specifically about Scheme, Steele's Growing a language is also a must-see.
And don’t miss this gallery for some really beautiful ones.