At the workshop

I just came back from the Scheme and Functional Programming Workshop in Montréal, hosted by Marc Feeley and an excellent organising team. It’s been a fun couple of days, putting faces to emails and IRC nicks and attending a handful of pretty interesting talks. Here’s a quick report.

My favourites were the two invited papers. On the first day, Olin Shivers presented a pretty cool hack in a delicious talk that consisted of actually writing the code he was explaining. Under the title Eager parsing and user interaction with call/cc, he showed us how call/cc is not just an academic toy, but can be put to good use in writing a self-correcting reader for s-expressions on top of the host scheme’s read (or any other parser, for that matter). He started from the very basics, explaining how input handling and buffering is usually delegated to the terminal driver, which offers a rather dumb, line-oriented service. Wouldn’t it be nice if, as soon as you typed an invalid character (say, a misplaced close paren), the reader complained, instead of waiting for the whole line to be submitted to read? Well, all we need to do is implement the input driver in scheme, and he proceeded to show us how. As you know, reading s-expressions is a recursive task, meaning that when you detect invalid input in the middle of a partial s-expression, or want to delete a character, you might find yourself somewhere deep inside a stack of recursive calls, and you’ll need to backtrack to a previous checkpoint. That’s an almost canonical use case for call/cc, provided you use it intelligently. Let me tell you that Olin is quite capable of using call/cc as it’s meant to be used, as he immediately demonstrated. I’m skipping the details in the hope that a paper will be available sometime soon. As i mentioned, his talk was a beautiful example of live coding: he showed us the skeleton of the implementation and filled it in as he explained how it should work. Olin does know how to write good code, and it was a pleasure (and a lesson) watching him do just that. It was all so schemish: a terminal and emacs running in the venerable twm are all you need to create beauty.
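In the meantime, here’s my own crude reconstruction of the checkpointing trick (nothing like Olin’s actual code, and ignoring the terminal-driver layer entirely): capture a continuation before starting to read an expression and, the moment the driver sees an offending character, complain and jump back to that checkpoint, discarding the whole stack of recursive calls in one go.

  (define (prompt-read)
    (call-with-current-continuation
     (lambda (retry)
       ;; complain reports the error and backtracks to the checkpoint
       ;; captured above, however deep in the recursion we are
       (define (complain msg)
         (display msg)
         (newline)
         (retry (prompt-read)))
       ;; a bare-bones recursive reader: lists and single characters
       (define (read-list acc)
         (let ((c (read-char)))
           (cond ((eof-object? c) (complain "unexpected end of input"))
                 ((char=? c #\)) (reverse acc))
                 ((char=? c #\() (read-list (cons (read-list '()) acc)))
                 ((char-whitespace? c) (read-list acc))
                 (else (read-list (cons c acc))))))
       (let ((c (read-char)))
         (cond ((eof-object? c) c)
               ((char=? c #\() (read-list '()))
               ((char=? c #\)) (complain "misplaced close paren"))
               ((char-whitespace? c) (prompt-read))
               (else c))))))

A real driver would checkpoint at every valid prefix instead of restarting from scratch (so you don’t lose what you’ve already typed), but the control flow is the same.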

The second invited talk was by Robby Findler, who gave us a tour of Racket’s contract system, how to use it and the subtleties of implementing it properly. The basic idea dates back to Meyer’s design-by-contract methodology of the early nineties, and was subsequently explored further by several authors, including Robby. Simple as they sound at first sight, good contracts are not trivial to implement. For instance, it’s vital to assign blame where blame is due, and Robby gave examples of how tricky that can get (and of how Racket’s contracts do the right thing). Another subtlety arises when you try to write contracts asserting a property of an input data structure (say, you want to ensure that an argument is actually a binary search tree). The problem here is that checking the contract can alter the asymptotic complexity of the wrapped function (e.g., a lookup can go from O(log n) to O(n), an exponential degradation). Racket provides an ingenious fix for that problem, by means of lazy contracts that are checked as the input is traversed by the “real” function.
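To make the problem concrete, here’s a little toy of my own devising (not from Robby’s talk) showing the eager variant: bst/c walks the whole tree on every call, so the wrapped lookup pays an O(n) traversal no matter how balanced the tree is.

  #lang racket

  (struct node (left val right) #:transparent)

  ;; eager check: visits every node, O(n)
  (define (bst? t (lo -inf.0) (hi +inf.0))
    (match t
      ['leaf #t]
      [(node l v r) (and (< lo v hi) (bst? l lo v) (bst? r v hi))]
      [_ #f]))

  (define bst/c (flat-named-contract 'bst bst?))

  ;; every call through the contract pays the full traversal, even
  ;; though the lookup itself is O(log n) on balanced trees
  (define/contract (lookup t x)
    (-> bst/c number? boolean?)
    (match t
      ['leaf #f]
      [(node l v r)
       (cond [(= x v) #t]
             [(< x v) (lookup l x)]
             [else (lookup r x)])]))

The lazy contracts Robby described sidestep this by checking only the parts of the structure that the wrapped function actually visits.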

There was also live Scheme coding in Robby’s talk, although in a much more sophisticated way, thanks to Slideshow’s magic, which lets you embed code files and snippets in a presentation, evaluate them and show the results in the same or a new slide. Very elegant. He also used DrRacket a bit during the introductory part of his talk, and i’m starting to understand why some people are so happy with Racket’s IDE: in his hands, it definitely felt professional and productive. And also kind of fun.

There were also lightning talks. I’m of course biased, but the one i enjoyed most was Andy Wingo’s Guile is OK!, where he showed us how Guile has overcome the problems, perceptual and real, of its first dozen years. For instance, he reminded us how Guile was traditionally a “defmacro scheme”, and he himself a “defmacro guy”… until he studied Dybvig’s work in earnest and ported his syntax-case implementation to Guile, becoming a “syntax-case man” as Guile gained full syntax-case support. (I hope i’ll reach that nirvana some day; i still find syntax-case too complex and plagued by unintuitive corner cases that make me uneasy; one of them was shown by Aaron Hsu in another lightning talk, where apparently none of us was able to correctly interpret ten lines of scheme. But that’s surely just ignorance on my part.) There are many other things that make Guile a respectable citizen of the Scheme Underground, which were also listed in Andy’s talk: i’ll ask him for a PDF, but in the meantime you can just try Guile and see :).

Although this time we didn’t have a talk by Will Clinger, it’s always a pleasure to listen to what he has to say, even if only as comments on other people’s talks. For instance, i enjoyed his introduction to Alex Shinn’s R7RS progress report. Will showed us three one-dollar coins of the same size and shape but different, as he described, in almost everything else. And yet, all three were useful and recognised as (invalid) dollars by the Canadian vending machines at the entrance. He thinks that says something about standards, but he left it to us to decide exactly what.

Finally, let me mention that this workshop has laid to rest my quibbles with what i’ve sometimes perceived as a fragmented, almost dysfunctional community, made up of separate factions following their own paths in relative isolation. My feeling during the workshop was nothing of the sort; rather, i’m back with the conviction that there’s much more uniting us than setting us apart, and that there’s such a thing as a scheme underground ready to take over the world. Some day.


SICP distilled

If you’re still thinking about reading SICP, here’s an article, written by Abelson and Sussman shortly after the first edition of the book was published, that may dispel your doubts: Lisp: A language for stratified design. It’s a quick distillation of some of the central themes discussed at length in SICP, with accompanying code. Some of them may seem old hat these days (the article was published in 1987), but they’re as relevant as they were back in the day, and it’s difficult to find them presented as well (let alone better). Its dozen pages are full of quotable pearls of wisdom. For instance, right from the start:

Just as everyday thoughts are expressed in natural language, and formal deductions are expressed in mathematical language, methodological thoughts are expressed in programming languages. A programming language is a method for communicating methods, not just a means for getting a computer to perform operations–programs are written for people to read as much as they are written for machines to execute.

and later on

[...] if a methodology is to be robust, it must have more generality than is needed for the particular application. The means for combining the parts must allow for after-the-fact changes in the design plan as bugs are discovered and as requirements change. It must be easy to substitute parts for one another and to vary the arrangement by which parts are combined.

The examples include Henderson’s picture language, which motivates a discussion of metalinguistic abstraction (or DSLs, as rediscovered these days):

Part of the wonder of computation is that we have the freedom to change the framework by which the descriptions of processes are combined. If we can precisely describe a system in any well-defined notation, then we can build an interpreter to execute programs expressed in the new notation, or we can build a compiler to translate programs expressed in the new notation into any other programming language.

There’s also a synopsis of the implementation, in Scheme, of a rule language geared towards simplifying algebraic expressions, together with its accompanying interpreter. The conclusion is that you want to write languages and interpreters, in a variety of paradigms; and that you probably want to write them using Lisp:

The truth is that Lisp is not the right language for any particular problem. Rather, Lisp encourages one to attack a new problem by implementing new languages tailored to that problem. Such a language might embody an alternative computational paradigm [...] A linguistic approach to design is an essential aspect not only of programming but of engineering design in general. Perhaps that is why Lisp [...] still seems new and adaptable, and continues to accommodate current ideas about programming methodology.

As true today as it was twenty-odd years ago, if you ask me.
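And if you want a (very pale) taste of the rule-language idea before reading the paper, here’s a toy rewriter of my own (nowhere near the article’s interpreter): pattern variables are symbols starting with a question mark, and rules are applied bottom-up until nothing matches.

  ;; rules as pattern/skeleton pairs
  (define rules
    '(((+ ?x 0) . ?x)
      ((+ 0 ?x) . ?x)
      ((* ?x 1) . ?x)
      ((* 1 ?x) . ?x)
      ((* ?x 0) . 0)
      ((* 0 ?x) . 0)))

  (define (var? x)
    (and (symbol? x) (char=? #\? (string-ref (symbol->string x) 0))))

  ;; match pat against expr, extending bindings; #f on failure
  (define (match-pat pat expr bindings)
    (cond ((not bindings) #f)
          ((var? pat)
           (let ((b (assq pat bindings)))
             (if b
                 (and (equal? (cdr b) expr) bindings)
                 (cons (cons pat expr) bindings))))
          ((and (pair? pat) (pair? expr))
           (match-pat (cdr pat) (cdr expr)
                      (match-pat (car pat) (car expr) bindings)))
          ((equal? pat expr) bindings)
          (else #f)))

  ;; fill a rule's skeleton with the matched bindings
  (define (instantiate skel bindings)
    (cond ((var? skel) (cdr (assq skel bindings)))
          ((pair? skel) (cons (instantiate (car skel) bindings)
                              (instantiate (cdr skel) bindings)))
          (else skel)))

  ;; simplify subexpressions first, then try every rule at the top
  (define (simplify expr)
    (let ((e (if (pair? expr) (map simplify expr) expr)))
      (let try ((rs rules))
        (cond ((null? rs) e)
              ((match-pat (caar rs) e '())
               => (lambda (b) (simplify (instantiate (cdar rs) b))))
              (else (try (cdr rs)))))))

  (simplify '(+ (* x 1) (* y 0)))  ;; => x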

Scheme lectures, mostly

Here’s a list of (mostly) scheme video lectures, based on the links posted in this thread of the PLT mailing list, with some extra bits taken from my own collection:

  • SICP lectures by Abelson and Sussman (see also here for more lightweight versions), still my all-time favourite lecture series in any field.
  • Also based on SICP, Brian Harvey’s course at UCB is quite fun (and, while you’re at it, you may find other UCB courses interesting too). The ADU course comes with lots of additional notes and materials, although i’ve only skimmed it and don’t really know how good it is. But let me tell you that, if it’s as good as Shai Simonson’s course on the Theory of Computation, it’s probably worth your time.
  • All videos from this year’s ICFP are available online. Among them, one can find Jay McCarthy’s talk on RESTful webapps in Scheme using serializable continuations; Matthew Flatt on PLT’s Scribble documentation system; Matthias Felleisen on how he and his collaborators are using purely functional programming to teach kids and make them have fun; and Ralf Hinze’s La Tour D’Hanoï, which is not about scheme but is a pearl anyway (as is Ryan Newton’s report on how he used functional programming to implement an embedded bird detector via a parallel DSL, with some metaprogramming for good measure).
  • During the latest GNU Hackers meeting, Andy Wingo talked about recent developments on Guile (bittorrent file).
  • Robby Findler on Why macros matter is a nice little introduction to how one can use macros not only in cases that lazy languages handle gracefully but, more importantly, as a full-fledged language definition device.

  • A clip featuring Shriram Krishnamurthi, where he introduces WeScheme, a pretty interesting scheme-in-a-browser environment based on PLT’s Moby platform, which Shriram presented at the latest ILC; although there’s no video of that talk available, you can get the slides and a sound recording here (as is usually the case with Shriram, definitely worth your time).
  • A talk by Matthias Felleisen on the evolution of Northeastern University’s CS curriculum, where scheme plays a central role.

  • The DanFest videos, which i’ve mentioned before once or twice. Virtually all the lectures in this series are worth watching but, if Robby’s talk above piqued your curiosity, don’t miss Kent Dybvig’s Macro Writer’s Bill of Rights video and slides.
  • And i cannot help mentioning Gerry Sussman’s The role of programming in the formulation of ideas (and its accompanying article), as well as Oleg Kiselyov’s Normal-order syntax-rules and proving the fix-point of call/cc.

  • As an aside, if you like Gerry as much as i do, you might be interested in hearing him share his enthusiasm for mechanical watches for a change. Or, if you insist on staying (more or less) on topic, take a look at his The Legacy of Computer Science, for Gerry’s take on how CS is making us smarter.
  • Guy Steele’s Designing by Accident is a very interesting history of how Scheme came to life and of its relationship with actors, for which you’ll need the accompanying slides. Although not specifically about Scheme, Steele’s Growing a Language is also a must-see.

  • You will have to tell me how good Matthew Flatt’s Processes without partitions is, because it seems to be available only in formats that i cannot play on debian.
  • In The 90 minute Scheme to C compiler talk, Marc Feeley shows how closure conversion and CPS can be used to put together a Scheme compiler (ninety minutes being the time needed to explain it, mind you).

  • Those of you new to scheme and functional programming may enjoy this lecture by Jerry Cain, part of Stanford’s course on Programming Paradigms, which discusses functional programming using Kawa scheme. Not stellar, but not bad as an introduction.

  • Finally (for now), some fun livecoding with Fluxus. And don’t miss this gallery for some really beautiful ones.


The (PLT) future is here

James Swaine has just announced the availability of futures for mzscheme. Still a work in progress, this is the first step towards native-thread support in PLT Scheme, in the form of a parallelisation library. More concretely, we learn from the documentation that the futures API “represents a best-effort attempt to execute an arbitrary segment of code in parallel”. One creates a future by passing it a thunk, which starts running immediately as a parallel process. Non-parallelisable operations are detected, and cause the future to block until the main thread touches it: one still needs to write some code to orchestrate the parallel gig. A bit rough, but this is just a pre-alpha release: i’m sure things will only get better and better!

As mentioned, this new functionality is only available for mzscheme, so you’ll need to check out the code from svn and configure the tree with an incantation along the lines of:

$ mkdir build
$ cd build
$ ../src/configure --enable-futures --disable-mred
$ make && make install

That worked for me. Afterwards, i wrote the CPU burner suggested by James:

#lang scheme

(require scheme/future)

;; an infinite loop to keep a core busy
(define (loop) (loop))

;; spawn one future per processor, then touch them all
(define (run)
  (for-each
   touch
   (for/list ([i (in-range 0 (processor-count))])
     (future loop))))

(run)

and, lo and behold, mzscheme cpu-burner.ss is making my two little cores beat at top speed.

Of course, haskellers will hardly be moved: runtime support for multicores using sparks is already available in ghc, with a more robust implementation.

Happy parallel hacking!

Here’s to Andy

Guile 1.9.0, the first in a series of alpha releases leading to 2.0 by the end of this year, has just been released. The number of improvements listed in the NEWS is impressive: you may well find that most, if not all, of your pet peeves against Guile are a thing of the past, and also relish one or three features that are hard to find in other schemes. As for me, after working with the development version for some months now, i’m sold. Here are some of my favourite goodies:

The new compiler and virtual machine:

** Guile now can compile Scheme to bytecode for a custom virtual machine.

Compiled code loads much faster than Scheme source code, and runs around
3 or 4 times as fast, generating much less garbage in the process.

which comes with automatic caching of object files, and provides a language tower that is beginning to fulfill Guile’s foundational goals:

** New language: ECMAScript

Guile now ships with one other high-level language supported,
ECMAScript. The goal is to support all of version 3.1 of the standard,
but not all of the libraries are there yet. This support is not yet
documented; ask on the mailing list if you are interested.

Yes, Guile now has multi-language support and, what’s more important, a clearly defined way of adding new languages to the mix (full Elisp support coming).
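If you want to poke at the compiler tower yourself, it’s all nicely exposed to Scheme. A minimal session, from memory (module names as of the 1.9 series; the manual is the authority here):

  (use-modules (system base compile))

  ;; compile a file to bytecode (a .go file, found automatically by
  ;; later loads thanks to the object-file cache)
  (compile-file "hello.scm")

  ;; or compile and run an expression on the fly
  (compile '(+ 1 2))  ;; => 3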

What must be the most infamous Guile misfeature is gone:

** The psyntax expander is now hygienic with respect to modules.

Free variables in a macro are scoped in the module that the macro was
defined in, not in the module the macro is used in. For example, code
like this works now:

   (define-module (foo) #:export (bar))
   (define (helper x) ...)
   (define-syntax bar
     (syntax-rules () ((_ x) (helper x))))

   (define-module (baz) #:use-module (foo))
   (bar qux)

It used to be you had to export `helper' from `(foo)' as well.
Thankfully, this has been fixed.

thanks (thanks, thanks!) to the new psyntax:

** psyntax is now the default expander

Scheme code is now expanded by default by the psyntax hygienic macro
expander. Expansion is performed completely before compilation or
interpretation.

Notably, syntax errors will be signalled before interpretation begins.
In the past, many syntax errors were only detected at runtime if the
code in question was memoized.

with such additional niceties as allowing docstrings in macros (docstrings are probably the Guile extension over standard scheme that i like most). Note also that Guile now knows about syntax-case and syntax-rules by default: no need to import additional modules to get them.

And, to finish my laundry list, some new modules for good measure:

* New modules (see the manual for details)

** `(srfi srfi-18)', more sophisticated multithreading support
** `(ice-9 i18n)', internationalization support
** `(rnrs bytevector)', the R6RS bytevector API
** `(rnrs io ports)', a subset of the R6RS I/O port API
** `(system xref)', a cross-referencing facility

The last one, in particular, is being put to good use in Geiser. And, by the way, ‘multithreading’ here means native multithreading.

While it’s true that there are still some bugs and rough edges to be smoothed, the future looks bright for Guile, and 2.0 will be a landmark in its history. All these goodies are the result of the combined effort of several hackers, but, without meaning to belittle any of them, i must bow to the amazing hacking powers of Andy Wingo. During this last year, almost single-handedly, he has brought us the compiler and virtual machine, the ECMAScript language, and the psyntax expander. I must remember to introduce myself as “Andy’s coworker” more often!

Happy hacking!


A merry gang

Last Wednesday, FLIB’s kick-off meeting took place at Oblong’s Barcelona lab, with a dozen deliciously crazy people attending.

Due to lack of time and seriousness on my side, the main talk was given by Jos, who gave us an overview of his work using PLT Redex to model the lambda calculus. Redex provides an embedded DSL to create context-sensitive term-rewriting systems, if you’ll pardon my buzzwording. In a hand-waving nutshell, term-rewriting systems are syntax-rules on steroids: one specifies a set of rules for transforming (rewriting, or reducing) terms into other terms according to their structure, possibly depending on context. Jos has a very nice example of such a system taken from GEB (Best. Book. Ever.), the MIU formal system, whose formal rules can be expressed in Redex as:

  (--> (‹symbol› ... I) (‹symbol› ... I U))
  (--> (M ‹symbol› ...) (M ‹symbol› ... ‹symbol› ...))
  (--> (‹symbol›_0 ... I I I ‹symbol›_1 ...) (‹symbol›_0 ... U ‹symbol›_1 ...))

that is, if you find a trailing I, you can append a U; if you find an M, you can duplicate the rest of the string; and three consecutive Is can be reduced to a single U. Now, you can start with a given string (or axiom) and apply the rules to produce new ones (theorems). Note how the rules are contextual, and how, in general, more than one of them is applicable. Redex will handle all that for you, creating a tree with all possible reductions.
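For the curious, here’s a sketch of how the full thing might look (my reconstruction, not Jos’ code, so take the details with a grain of salt):

  #lang racket
  (require redex)

  ;; the MIU 'grammar': strings are sequences of the symbols M, I, U
  (define-language miu
    (s M I U))

  (define miu-red
    (reduction-relation
     miu
     ;; xI -> xIU
     (--> (s_0 ... I) (s_0 ... I U))
     ;; Mx -> Mxx
     (--> (M s_0 ...) (M s_0 ... s_0 ...))
     ;; xIIIy -> xUy
     (--> (s_0 ... I I I s_1 ...) (s_0 ... U s_1 ...))))

  ;; all one-step reductions of the axiom MI
  (apply-reduction-relation miu-red (term (M I)))
  ;; => ((M I I) (M I U)), modulo ordering

and (traces miu-red (term (M I))) will draw the whole reduction tree, as in the figure below.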

MIU reductions in Redex

Of course, there’s more to Redex than this simple example. For instance, it’s been used to provide an operational semantics for R6RS. Jos’ work is somewhere in the middle: while the reduction rules in the lambda calculus are even simpler than in MIU, issues of scope quickly complicate things; moreover, Jos explores classical topics in lambda calculus, such as reduction to normal form, fixed-point combinators and Church numerals, to name a few, always using Redex (the staggering conceptual richness embodied by the humble premises of lambda calculus always amazes me). All in all, a beautiful 32-page paper, with accompanying code, that serves as a nice hands-on introduction to both lambda calculus and Redex, and which you can get at Jos’ homepage.

After the presentation, we devoted some time to talk about the future of FLIB. Monthly presentations, lightning talks on demand and a reading group. I like the latter a lot, because having a physical meeting among readers every month is an excellent way of keeping reading groups alive. We’re still deciding on our first book, but PLAI followed by LiSP seems to be gaining momentum right now. Another nice thing about the reading group is that it opens the possibility for people not able to come to the meetings to participate: just subscribe to our mailing list and join the discussions about the book du jour.

And then we just sat down around a table with some beer and snacks, and started talking about life and programming languages. I found it very stimulating because of people’s varied backgrounds: we had guys from academia and industry; some just starting their graduate courses, others with twenty years of teaching under their belts; people from several different countries; schemers obsessed with call/cc, smalltalkers, python experts, C++ loathers and programmers who secretly enjoy it, perlmongers and ruby or haskell aficionados. But all of them people with a passion for programming: i think everybody was happy to have found a bunch of keen souls.

Perhaps it was inevitable that much of the discussion gravitated around our frustrations as programmers and teachers, given the sad state of computer science in both industry and academia, and the insurmountable barriers to adoption faced by the kind of languages we like. But, with that out of the way, here’s hoping (as Andy put it after the meeting) that future meetings will concentrate on brighter fields.

A great evening, and no mistake. I hope you’ll join the fun next July 22nd!

flib

Update: We’ve moved the date of our first meeting to June 17th, so you’re still in time to join us! If you want to follow our adventures, you can also ask for an invitation to our mailing list.

The other day, Andy and I met Jos, an experienced schemer who lives near Barcelona, with the idea of having lunch, talking about Scheme, and creating a Scheme users group. After a bit of discussion, we agreed on widening the group’s scope, and starting what we’re calling Fringe Languages In Barcelona (FLIB). The plan is to hold periodic meetings with a main presentation followed by some lightning talks (the latter were a complete success at ILC, and we’d like to try and see how they work for us), with as much discussion interleaved as we see fit. We’ll have some refreshments available and, since we’re meeting in the very center of the old city, visits to pubs or a restaurant for dinner and further socializing are to be expected.

As i said, we’re expecting much discussion about Scheme and Lisp, but we’re not ruling out other fine languages by any means. For instance, the talk for the inaugural session (scheduled for June 17th, 7:30 pm) is entitled The implementation of FUEL, Factor’s Ultimate Emacs Library, and it will include a short introduction to Factor (yes, i am the victim, er, speaker). Jos will come next, the same day, with a lightning talk about PLT Redex. We have free slots for more lightning talks: you are invited not only to come, but to give one if you’re so inclined. This being our first meeting, there will also be some time for logistics and organisation.

So, if you’re near here by then, by all means, come in and join the fun:

Calle del Pi 3 Principal Interior (first floor)
Barcelona

It’s not really needed but, if you’re thinking about coming, sending me a mail beforehand will help us make sure that we’ve got enough food and drinks.

We’re looking forward to getting FLIB started, and we’re sure that at least six more fringers are coming! Don’t miss it!

Geiser

Update: Geiser has had its own home in the interwebs for a few months now.

I hope you’ll pardon a shameless plug of one of my latest hacks, Geiser, a new Emacs-Scheme interaction mode.

After having lots of fun implementing Fuel, i was left with a lot of Elisp code that, i realized, could be easily reused for languages other than Factor. I also decided that it was high time to stop whining about Scheme environments not being dynamic enough and do something about it. As they say, talk is cheap.

Thusly, Geiser was born, and today it came of 0.0.2 age, as per the git tag in its repository.

If you know Slime or Fuel, you know what Geiser aims at: a pleasant, live interaction with Scheme without leaving Emacs. This first release is by no means there yet, but you’ll already find some joy using it: module-aware and incremental evaluation, jumping to definitions, dynamic symbol completion and automatic signature display in the echo area are the highlights.

Currently, Geiser supports two Scheme implementations: Guile and PLT. Yeah, i like both (and several others). It’s been really fun discovering how to tweak them to obtain the metadata i wanted, and their developers and users have been helpful, kind and patient to no end. A big thanks to them (you know who you are), and my promise that i’ll keep nagging.

Both Guile and PLT have given me many pleasant surprises. Guile is the most common-lispy Scheme around, and the recent hard work and improvements by the likes of Andy Wingo are making much of the criticism it memetically receives moot. And PLT is by no means the rigid system i thought it was, while retaining all the great features i knew it had. Try either of them, with or without Geiser: they’re real fun.

Back to Geiser: this being an alpha release, there are no screencasts or real documentation… the code just escaped, leaving a blood trail, you know. Maybe one day it’ll have a webpage, a mailing list and even users. In the meantime, if you’re brave enough, the README will hopefully do; and, of course, there’s the code:

  git clone git://gitorious.org/geiser/mainline.git

(If you’re not brave enough, but curious, the code is browsable here.)

Needless to say, all kinds of comments, criticisms and laundry lists are welcome and, actually, encouraged.

Happy scheming!

From my cold, dead hands

Back at the PLT Scheme blog, Matthias Felleisen has just posted an entry on the rationale behind the limitations imposed on DrScheme’s REPL, namely, the inability to evaluate individual expressions. If you’ve read this blog before, you won’t be surprised to learn that, as much as i respect Matthias, i totally disagree with his rationale. Well, not totally: let me explain. Matthias’ point is that allowing on-the-fly redefinitions often leads to inconsistent states of the running image, which in turn leads to confusion and much head scratching, even when the head in question belongs to, say, Dan Friedman (let alone the heads of Matthias’ students, or mine, for that matter). Therefore, we’d better ban the offending feature from the development environment, right?

I’m willing to concede that evaluation of individual expressions is as dangerous as Matthias’ experience suggests, or even demonstrates. But i strongly take issue with his conclusion. Incremental development might be dangerous, but it’s also a powerful tool, and incredibly fun (all three come together most of the time, don’t they?), as any Common Lisp, Factor, Smalltalk, Scheme48 or Guile (to mention a few) hacker will tell you. As yours truly has been telling you since this blog started. Or PLT hackers, for that matter: Matthias’ post originates in this interesting email exchange, with people giving good reasons on behalf of an incremental REPL (think, for instance, of manipulating large chunks of intermediate data). I want this power. I want the fun. I’m willing to take the risk.

In my opinion, this is a good illustration of one of the points that Gerry Sussman was making in his talk on robust systems at ILC. For instance, injudicious use of generic functions will quickly paint you into a corner; but, at the same time, this sharp knife will cut for you beauties such as the scmutils library used in SICM, or help you in taming evolving requirements. Should we be deprived of generics for our own good?

Now, don’t take me wrong. I immensely appreciate the warnings and advice of people who, as Matthias, have much more experience and knowledge under their belts than i’ll ever have. I’m willing (eager, even) to learn from them. But i don’t understand why warnings should become prohibitions. I’m not dissing the work of Robby implementing DrScheme’s ‘restart-the-world’ functionality: it’s great to have it there (in fact, having it there as a safety net is yet another reason for allowing the incremental REPL variety). And i would have no objection if not having an incremental REPL were due to technical implementation details. What i fail to see is why undercutting the functionality of my tools in the name of ‘my own good’ is the right thing to do. I hated it when my parents did that to me ;-).

Sussmaniana

I’m back from ILC 09, slowly digesting all i lived there. This was my first Lisp conference and my first visit to MIT, a place marked in big red letters in the atlas of my private mythology. And it wasn’t only about places: suddenly realizing that you’re sitting next to Richard Greenblatt, or enjoying Gerry Sussman’s talks in the flesh, was quite an experience, with an almost eerie feeling attached to it.

There’s a problem in meeting your myths: real life is almost never up to the task of matching one’s idealizations. But this was not the case; i enjoyed every minute, with not a tinge of disappointment to be felt. Lisp has been around for quite a while, and its history is an important part of the history of computer science. That history comes to life at the ILC, and you get a chance to mingle with the people who were there back in the day, the people you read about in books… i’ve wished many times that i had been there, and during these days in Cambridge i’ve been as close to those halcyon days as i can ever expect to be. Living history, what a feeling.

Had i to single out just one speaker, it would have to be Gerry Sussman. I just kept finding myself resonating deeply with his thoughts. For instance, during the panel on the future of Lisp, conversation revolved around how to keep the language popular and apt for commercial applications. Gerry stepped in to point out that that was all very well, but that we shouldn’t forget that one of the key aspects of programming languages is the extent to which they allow us to extend our problem-solving abilities by providing new ways of expressing and talking about problems (as Dijkstra once said: Lisp has assisted a number of our most gifted fellow humans in thinking previously impossible thoughts). Dead on, if you ask me, although unfortunately nobody seemed to have anything else to add, and the debate returned to the far less interesting popularity issues (well, yes, the conference wasn’t perfect after all).

The next day we had a kind of tongue-in-cheek debate provocatively entitled Are macros a menace?. Richard Gabriel was on the wrong side, arguing along the lines that macros are akin to language design, and that he’d rather not suffer the consequences of letting your average software engineer undertake such a complex task. Gerry’s intervention at this point made me nod again: if we cannot trust our software engineers to proficiently use the tools of our trade, there must be something wrong in the way we educate them; only those able to use those tools judiciously should get a diploma in the first place. That’s exactly how i felt during my period as a CS teacher, as i tried, rather clumsily, to explain in one of my first rants in this blog. It feels great to be in such good company.

Then there was the unplanned mini-talk on why MIT has replaced the SICP-based introduction to programming with something about robots using Python. You can read a nice synopsis of the reasons Sussman gave in Andy’s blog (together with summaries and comments on many other talks). It was nice in a kind of sad way: at the end, while answering a question, Gerry mentioned that this new computing world was not his, and that it wasn’t one that he liked [0]. ‘But’, he said, ‘that’s because we’re old farts’. Although i’m younger than him, i like the kind of world that gave rise to SICP much better, and i spent the rest of the evening feeling like a sad, old fart. I must say, however, that the ideas SICP grew upon, that simple world where you built up complexity out of small pieces and systems that you understood completely, have much to offer to new generations. We should not dismiss them in the name of modernity.

Besides those cameos, Gerry had two official appearances in the regular conference programme: an invited speech on robust systems design and a lightning talk. (The latter were 5-minute-long presentations, with Guy Steele and his chronometer strictly ensuring that that was actually the case, followed by a 2-minute Q&A while the next speaker was setting up her laptop. This formula worked great most of the time, forcing the presenters to squeeze their brains in order to pack as much content, sense and, in most cases, fun as possible into such a short time. We had living confirmation that working under severe constraints is a great creativity boost.)

In the invited speech, we had the opportunity to hear more about Gerry’s ideas on what makes a system robust. I say ‘more’ because he made public a paper on the subject some time ago and, actually, his ideas on these issues can be traced back to, for instance, the SICP video lectures, where he already lays out the general strategy for tackling the problem: to make a system robust, you don’t want to solve a strict, narrowly specified problem, but a whole family of them (or, if you have a very crisp specification, a class of problems in its neighbourhood). That way, when the problem to be solved varies in small ways, your solution accommodates the new situation by small variations. The solution is not brittle. To attain such flexible behaviour, we need to build our systems upon components that are lenient about the inputs they accept and as sharp as possible in the outputs they produce. Ways of attaining these goals include metalinguistic abstraction (creating a language tailored to the problem domain), the use of generic interfaces, degeneracy, and non-deterministic search in the solution space.

Generic functions were nicely demonstrated with examples from the library used in SICM (and the delicious Functional Differential Geometry). We got the warning that using generics this way is dangerous. But they’re powerful, and one needs powerful methods to solve challenging problems; one needs to know what one is doing, but that’s part of what we’re supposed to be good at, right? Sussman kept smiling and saying ‘beware, this is a very dangerous thing to do’. There was also an almost off-the-record comment to the effect that one of the missed opportunities in Lisp’s design was not defining all its primitive forms as generics (which would definitely have made it the most dangerous language around).
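To get a toy flavour of what that kind of genericity buys you (my example, nothing like scmutils’ actual machinery), consider arithmetic that falls back to building symbolic expressions, so that numeric code runs unchanged on symbols:

  ;; generic operators: compute on numbers, build expressions otherwise
  (define (g:+ a b)
    (if (and (number? a) (number? b)) (+ a b) (list '+ a b)))

  (define (g:* a b)
    (if (and (number? a) (number? b)) (* a b) (list '* a b)))

  ;; written once, works on both numbers and symbols
  (define (sum-of-squares x y)
    (g:+ (g:* x x) (g:* y y)))

  (sum-of-squares 3 4)   ;; => 25
  (sum-of-squares 'x 'y) ;; => (+ (* x x) (* y y))

The danger Gerry kept smiling about shows up as soon as different extensions of the same operator start stepping on each other’s toes.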

Degeneracy (using completely different ways of computing the same result) was illustrated, as is much of Sussman’s thinking on robust programming, by examples drawn from biological systems (in this talk, it was frogs most of the time). You can find many other examples of this sort of parallelism between computing and biological systems in the paper linked above, a line of thought that i find mixes well with the handful of metaphors i use to reason about my job as a programmer. In particular, it sort of connects with my bias towards living systems such as Lisp’s or Smalltalk’s, where one is evolving more than designing and implementing the program; which in turn mixes well with the ideas of latent typing and late binding, present in those highly dynamic environments (Self, Factor and APL (as demonstrated in this game of life video (unintended pun intended)) are in the same league). Much more than with beautiful but extremely rigid ones based on static typing, such as Haskell’s. (That’s why i keep coming back to Lisp after my Haskell excursions, or why i find R6RS so disappointing. Or, if you’ll pardon my continued mixing of metaphors, why i prefer healing to performing autopsies.)

Another avenue to flexibility mentioned in the talk is constraint propagation networks, in which multiple sources contribute to defining the values of each state variable. That way you get the possibility of partially defined values, which can be useful by themselves, depending on the computation you’re performing. Propagator networks also work as additive computation machines, able to refine coarse inputs into correct solutions for problems specified as a set of constraints. One of Sussman’s students, Alexey Radul, is actively working on propagators, building on work initiated back in the day by Guy Steele. You can find an extensive report and nice, working Scheme code here.
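Here’s a bare-bones sketch of the cells-and-propagators idea, much simplified from Radul’s report (my code, with cells that hold at most one value instead of general partial information):

  ;; a cell holds a value (#f meaning 'nothing yet') and a list of
  ;; propagators to poke when it learns something new
  (define (make-cell)
    (let ((value #f) (watchers '()))
      (lambda (msg . args)
        (case msg
          ((content) value)
          ((watch!) (set! watchers (cons (car args) watchers)))
          ((add-content!)
           (let ((v (car args)))
             (cond ((not value)
                    (set! value v)
                    (for-each (lambda (w) (w)) watchers))
                   ((not (equal? value v))
                    (error "contradiction" value v)))))))))

  (define (content c) (c 'content))
  (define (add-content! c v) (c 'add-content! v))

  ;; a propagator computing c = a + b as soon as both inputs are known
  (define (adder a b c)
    (define (try)
      (let ((x (content a)) (y (content b)))
        (if (and x y) (add-content! c (+ x y)))))
    (a 'watch! try)
    (b 'watch! try)
    (try))

  (define a (make-cell))
  (define b (make-cell))
  (define c (make-cell))
  (adder a b c)
  (add-content! a 3)
  (add-content! b 4)
  (content c)  ;; => 7

Real propagator networks run constraints in every direction (defining b from a and c, say) and merge partial information instead of bailing out at the first contradiction.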

Finally, Gerry gave a lightning talk with yet another piece of food for thought. The gist of it was drawing our attention to a possible parallelism between the theory and methods for solving differential equations on the one hand, and programs on the other. There’s a way of approaching the solution of a differential equation that is, if you will, algebraic in nature: one algebraically manipulates expressions to simplify them and eventually obtain a closed-form solution or, if that’s not possible, creates numerical approximations that evolve the boundary conditions in the state space as a function of discrete time steps. You end up with something that works as a solution but, most of the time, without a deep understanding of the traits that make it a solution: in the spirit of the robust design ideas sketched above, we should probably be asking for more qualitative information about how solutions behave as we change the boundary or initial conditions of our problem. As it happens, mathematicians have a way of analyzing the behaviour of solutions to differential equations by studying their Poincaré maps and sections, which are views into the orbits followed by the solutions in their state space. Many times, you don’t have to solve the differential equation exactly to predict its qualitative behaviour in state space (e.g., is it bounded? is it periodic?), and to get insight into how it changes in the presence of perturbations. The analogy with computing processes is clear: most of the time, we narrow our efforts to finding, so to speak, algebraic solutions to our computing problems: the program-as-text is the analog of the exact formula for the solution of a differential equation. What we’re missing, according to Sussman, is a way of reasoning about the qualitative features of our programs à la Poincaré, i.e., a way of reasoning about programs in the state space of their outputs, beyond the mechanistic algorithmic details. Gerry admitted that he didn’t know how or in what form this kind of reasoning would proceed, only that his hunch is that it must be possible, and he exhorted us to find the way.

ILC09 would have been worth it only for those talks, but there was much more: don’t miss the next one!


[0] As rightly pointed out by Dan Weinreb in his comment below, Sussman endorses the changes in the new curriculum, though. His post on this issue is worth reading too.
