Comprehensive Hutton

After yawning during the first 10 installments of the series, i’ve just watched a really beautiful talk (with accompanying slides and literate code) by guest star Graham Hutton in the C9 Lectures webcast [0]. Graham devotes it to showing how to solve the countdown puzzle for numbers (find ways of combining n integers to obtain a target one, using basic arithmetic operations) in Haskell. The lecture is a very nice showcase of list comprehensions and how they allow the elegant expression of solutions to a certain kind of problem (basically, tree searches in this case), without precluding incremental optimizations (pruning of the search space, in the problem at hand). In addition, you’ll see a very simple, but enlightening, example of program fusion as a technique to boost performance. The code is beautiful and straightforward and, most notably, retains those qualities even after optimization. Hutton is a good speaker and teacher, and the lecture is accessible to anyone with a bare-bones knowledge of Haskell. And the fun doesn’t end with the video: there’s also an accompanying paper, most aptly included in the Functional Pearls series, providing all the details, code and proofs. A discussion is also present in Graham’s book which, to judge from the quality of the paper, must be an excellent way to introduce yourself to Haskell and functional programming.
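
To give you a taste, here’s a bare-bones version of the brute-force search, along the lines of the one developed in the lecture and the paper (a from-memory transcription, so any slips are mine): expressions are built over every selection and ordering of the given numbers, and list comprehensions drive the whole search.

import Data.List (permutations, subsequences)

data Op = Add | Sub | Mul | Div
data Expr = Val Int | App Op Expr Expr

-- an application is valid only when it yields a positive natural
valid :: Op -> Int -> Int -> Bool
valid Add _ _ = True
valid Sub x y = x > y
valid Mul _ _ = True
valid Div x y = y /= 0 && x `mod` y == 0

apply :: Op -> Int -> Int -> Int
apply Add = (+)
apply Sub = (-)
apply Mul = (*)
apply Div = div

-- eval returns [] for invalid expressions, [n] for a valid result n
eval :: Expr -> [Int]
eval (Val n) = [n | n > 0]
eval (App o l r) = [apply o x y | x <- eval l, y <- eval r, valid o x y]

-- all ways of splitting a list into two non-empty halves
split :: [a] -> [([a], [a])]
split [] = []
split [_] = []
split (x:xs) = ([x], xs) : [(x:ls, rs) | (ls, rs) <- split xs]

-- all expressions over a given list of numbers, used in order
exprs :: [Int] -> [Expr]
exprs [] = []
exprs [n] = [Val n]
exprs ns = [App o l r | (ls, rs) <- split ns,
                        l <- exprs ls, r <- exprs rs,
                        o <- [Add, Sub, Mul, Div]]

-- solutions over every selection and ordering of the input numbers
solutions :: [Int] -> Int -> [Expr]
solutions ns n =
  [e | ns' <- subsequences ns, p <- permutations ns', e <- exprs p,
       eval e == [n]]

Asking ghci for length (solutions [1,3,7,10,25,50] 765) (the example used in the paper) will, eventually, count its hundreds of solutions; the fun in the lecture is seeing how pruning and fusion make this naive search orders of magnitude faster without spoiling its structure.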

—
[0] I keep reading praise for these lectures, and everybody seems to adore E. Meijer. Yet i must confess that i find them incredibly bad: to my taste, the lecturer is dull, often misses the point, keeps drawing dubious parallels and, every now and then, makes plain (and, i’m tempted to say, glaring) conceptual errors. At the risk of losing all my credibility, if any, i wholeheartedly recommend that you not spend any time on the first ten lectures of this series. I’d take any talk by SPJ any day: that is passion and insight. Or by Hutton, for that matter.

The (PLT) future is here

James Swaine has just announced the availability of futures for mzscheme. Still a work in progress, this is the first step in native-thread support for PLT Scheme, in the form of a parallelisation library. More concretely, we learn from the documentation that the futures API “represents a best-effort attempt to execute an arbitrary segment of code in parallel”. One creates a future by passing it a thunk, which starts running immediately in a parallel thread. Non-parallelisable procedures are detected, and cause the parallel thread to block until the main thread touches the future: one still needs to write some code to orchestrate the parallel gig. A bit rough, but this is just a pre-alpha release: i’m sure things will only get better and better!

As mentioned, this new functionality is only available for mzscheme, so you’ll need to check out the code from svn and configure the tree with an incantation along the lines of:

$ mkdir build
$ cd build
$ ../src/configure --enable-futures --disable-mred
$ make && make install

That worked for me. Afterwards, i wrote the CPU burner suggested by James:

#lang scheme

(require scheme/future)

;; a thunk that spins forever, keeping one core busy
(define (loop) (loop))

;; create a future per available core, then touch each of them,
;; making the main thread wait for the parallel ones
(define (run)
  (for-each touch
            (for/list ([i (in-range 0 (processor-count))])
              (future loop))))

(run)

and, lo and behold, running mzscheme cpu-burner.ss makes my two little cores beat at top speed.

Of course, haskellers will hardly be moved: runtime support for multicores using sparks is already available in ghc, with a more robust implementation.
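
If you haven’t seen them in action, sparking a computation looks something like the following sketch (which assumes the parallel package; compile with ghc --make -threaded and run with +RTS -N2 to let the runtime use both cores):

import Control.Parallel (par, pseq)

-- spark the evaluation of the left half while this thread sums the
-- right half; with more than one capability, both run in parallel
parSum :: [Int] -> Int
parSum xs = left `par` (right `pseq` (left + right))
  where (ys, zs) = splitAt (length xs `div` 2) xs
        left  = sum ys
        right = sum zs

main :: IO ()
main = print (parSum [1 .. 10000000])

Note that par merely hints that its first argument may be worth evaluating in parallel; the runtime turns the spark into real work when a core is free, which is precisely the kind of best-effort semantics futures are after.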

Happy parallel hacking!

Enjoying Haskell

I’ve been reading about Haskell quite a bit over the last few months, writing some actual code, and liking the language more and more. After many years favouring dynamically typed languages, i’m beginning to really appreciate Haskell’s type system and the benefits it brings to the table.

A common argument from the static typing camp is that the compiler catches a whole class of bugs for you, to which the dynamic typing camp answers that a good test suite (which you need anyway for any serious development) will catch those relatively trivial bugs for you. I tend to agree with the dynamic faction on this issue, but then i think that the strength of static typing (coupled with good type inference) is not at all about the compiler catching typing bugs but, rather, about enforcing useful constraints. When you write Haskell, you have to think hard about your data types and the functions using them; the compiler will keep complaining and, most importantly, the code will feel awkward and somehow ad hoc until you find a good set of types to solve your problem.

The limits to your freedom imposed by the type system entail, in my experience, a boost in the amount of thought and imagination that i put into my design and implementation, in much the same way as the constraints imposed by metre and rhythm on poetic writing boost creativity and actually help to produce a beautiful text. Or, in fact, in the same way as any kind of constraint in any creative endeavour helps (paradoxically, at first sight) in attaining beauty or, at least, in having fun during the process.

In my experience, the process of writing a program or library in any language is always a struggle to find the right way of expressing the solution to a problem, as the culmination of a series of approximations. The code feels better, more expressive and easy-flowing with each rewrite, until something just clicks and i get this feeling that i’ve finally captured the essence of the problem (a litmus test being that it then feels natural to extend the solution to cases i hadn’t thought of when writing it, as if, somehow, the new solutions were already there, waiting for you to discover them). And i’m finding that the powerful type system offered by Haskell (think not only of vanilla Hindley-Milner, but also of extensions such as GADTs or type families) is helping me reach that (local) optimum quicker, and that satisfying the type checker’s constraints means i’m closer to the final solution when my code compiles for the first time. You often hear Haskell programmers saying something similar (“once my code compiles, it works”), and i think it’s mostly true, except that the real reason is not that the compiler is catching trivial typing bugs but, rather, that the constraints imposed by the type system are making you think harder and find the right solution. The same goes for monads, and the clean separation they provide for stateful computations: again, you must think carefully about the right combination of monads and pure code to solve the problem, and most of the time your code will simply not type check if you don’t get the solution right.
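
As a toy illustration of that separation (my example, not anyone’s canonical one, using the State monad from the mtl package): the labelling logic stays pure, while the counter lives behind a monadic type, so any attempt to use it from pure code simply won’t type check.

import Control.Monad (replicateM)
import Control.Monad.State

-- the interesting logic stays pure...
label :: Int -> String
label n = "ticket-" ++ show n

-- ...while the mutable counter is confined to the State monad,
-- and the type signature advertises the statefulness
nextTicket :: State Int String
nextTicket = do
  n <- get
  put (n + 1)
  return (label n)

main :: IO ()
main = print (evalState (replicateM 3 nextTicket) 0)
-- prints ["ticket-0","ticket-1","ticket-2"]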

There are two more ways in which Haskell’s type system is helping me write better programs. Two ways that become especially valuable once the code grows sizeable enough. The first one is self-documentation: seeing the type of my functions (or asking the interpreter for them) instantly tells me almost everything i need to know to use them; in fact, when writing in dynamic languages i keep annotating function signatures with this same information, only that there i’m all by myself to ensure that this information is right. PLT’s contract system is but a recognition of the usefulness of typing in this regard, although i much prefer the terseness and notational elegance of Haskell’s type signatures over the much more verbose and, to my eyes, somewhat clunky notation used by PLT (which is not really PLT’s fault, being as it is a very schemish notation). Let me stress here that having a REPL such as ghci is a godsend (and, to me, a necessity for really enjoying the language): it will tell me the type of an expression in much the same way as decent Lisp or Scheme environments will report a function’s signature.
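
For instance, a couple of quick queries at the ghci prompt:

Prelude> :type map
map :: (a -> b) -> [a] -> [b]
Prelude> :type foldr
foldr :: (a -> b -> b) -> b -> [a] -> b

Two signatures that tell you essentially everything you need in order to use these functions, with no documentation in sight.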

The second way Haskell lends a helping hand with non-trivial code bases is refactoring. As i mentioned above, i rewrite my programs several times as a rule, and rewrites almost always involve modifying data structures or adding new ones. As i grow older, i find it more and more difficult to keep in my head all the places and ways a given data structure is used in my programs, and with dynamic languages i often fall back to grepping the source code to find them. And again, their plasticity often works against me, in that they let me use those data structures in crooked ways, or forget to take into account new fields or constructors for a modified data type. Haskell’s compiler has proved an invaluable ally in my refactorings and, by comparison, modifying and maintaining my bigger dynamic programs is not as fun as it used to be.
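
A contrived sketch of the kind of help i mean (the data type is, of course, made up for the example):

data Shape = Circle Double | Square Double
-- ...until a rewrite adds a constructor:
-- data Shape = Circle Double | Square Double | Rect Double Double

area :: Shape -> Double
area (Circle r) = pi * r * r
area (Square s) = s * s

-- the moment Rect is added, compiling with ghc -Wall reports area
-- (and every other function matching on Shape) as non-exhaustive,
-- handing me the complete list of places the refactoring must touch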

As an aside, types are not the only thing i’m finding enjoyable about Haskell. Its astonishing capacity to express very abstract problems with a remarkable economy of expression (due, in part, to its highly tuned syntax) is extremely useful. To my mind, it mimics the process by which in maths we solve harder and harder problems by abstracting more and more, cramming more relevant information into less space (some cognitive science writers will tell you that thought, and even consciousness, consists in our ability to compress information). That means that i can express my solutions by capturing them in very high-level descriptions: initially, that makes them harder to understand, but once i feel comfortable with the basic concepts and operations, they scale up much, much better than more verbose, less sophisticated ones. Using these new hard-earned concepts, i can solve much harder problems without adding to the complexity of the code in a significant way (one could say, using a loose analogy, that the code grows logarithmically with the problem’s complexity instead of polynomially or exponentially). A direct consequence of this expressiveness is that some well-written Haskell programs are, hands down, the most beautiful pieces of code i’ve ever seen (just pick a random post at, say, A Neighborhood of Infinity and you’ll see what i mean; or read Richard Bird’s Sudoku solver and compare his solution with one written in your favourite programming language).

Finally, let me say that i find programming in Haskell more difficult than programming in any other language i’ve used, with perhaps the only exception of Prolog. Sometimes, considerably so. And that’s perfectly fine with me. For one thing, it makes it more interesting and rewarding. In addition, i’m convinced that that’s the price to pay for being able to solve harder problems. I take issue with the frequent claims to the effect that programming should be effortless or trivial: writing good programs is hard, and mastering the tools for doing it well takes, as with any other engineering or scientific discipline, hard work (why, i don’t hear anyone complaining that building bridges or computing the effects of gravitational lensing is too difficult). There’s no silver bullet.

All that said, please don’t read the above as an apostasy letter announcing the embrace of a new religion. There’s still much to be said in favour of dynamic languages, especially those in the Lisp family, whose malleability (fostered by their macro systems) is also a strength, in that it allows you to replicate some of the virtues i’ve been extolling in this post. Haskell lacks the power of homoiconicity, and its template mechanisms feel clunky in comparison, which is a serious drawback in some contexts (i have yet to decide how serious, as i have yet to decide how much i’m missing in the way of reflection capabilities). As always, it is a matter of trade-offs and, fortunately, nobody will charge you with high treason for using the language best fitted to the problem at hand, or so i hope.

Here’s to Andy

Guile 1.9.0, the first in a series of alpha releases leading to 2.0 by the end of this year, has just been released. The number of improvements listed in the NEWS is impressive: you may well find that most, if not all, of your pet peeves against Guile are a thing of the past, and also relish one or three features that are hard to find in other schemes. As for me, after working with the development version for some months now, i’m sold. Here are some of my favourite goodies:

The new compiler and virtual machine:

** Guile now can compile Scheme to bytecode for a custom virtual machine.

Compiled code loads much faster than Scheme source code, and runs around
3 or 4 times as fast, generating much less garbage in the process.

which comes with automatic caching of object files, and provides a language tower that is beginning to fulfill Guile’s foundational goals:

** New language: ECMAScript

Guile now ships with one other high-level language supported,
ECMAScript. The goal is to support all of version 3.1 of the standard,
but not all of the libraries are there yet. This support is not yet
documented; ask on the mailing list if you are interested.

Yes, Guile now has multi-language support and, what’s more important, a clearly defined way of adding new languages to the mix (full Elisp support coming).

And what must be the most infamous Guile misfeature is now history:

** The psyntax expander is now hygienic with respect to modules.

Free variables in a macro are scoped in the module that the macro was
defined in, not in the module the macro is used in. For example, code
like this works now:

   (define-module (foo) #:export (bar))
   (define (helper x) ...)
   (define-syntax bar
     (syntax-rules () ((_ x) (helper x))))

   (define-module (baz) #:use-module (foo))
   (bar qux)

It used to be you had to export `helper' from `(foo)' as well.
Thankfully, this has been fixed.

thanks (thanks, thanks!) to the new psyntax:

** psyntax is now the default expander

Scheme code is now expanded by default by the psyntax hygienic macro
expander. Expansion is performed completely before compilation or
interpretation.

Notably, syntax errors will be signalled before interpretation begins.
In the past, many syntax errors were only detected at runtime if the
code in question was memoized.

with such additional niceties as allowing docstrings in macros (docstrings are probably the Guile extension over standard scheme that i like most). Note also that Guile now knows about syntax-case and syntax-rules by default: no need to import additional modules to get them.

And, to finish my laundry list, some new modules for good measure:

* New modules (see the manual for details)

** `(srfi srfi-18)', more sophisticated multithreading support
** `(ice-9 i18n)', internationalization support
** `(rnrs bytevector)', the R6RS bytevector API
** `(rnrs io ports)', a subset of the R6RS I/O port API
** `(system xref)', a cross-referencing facility

The last one, in particular, is being put to good use in Geiser. And, by the way, ‘multithreading’ here means native multithreading.

While it’s true that there are still some bugs and rough edges to be smoothed, the future looks bright for Guile, and 2.0 will be a landmark in its history. All these goodies are the result of the combined effort of several hackers, but, without meaning to belittle any of them, i must bow to the amazing hacking powers of Andy Wingo. During this last year, almost single-handedly, he has brought us the compiler and virtual machine, the ECMAScript language, and the psyntax expander. I must remember to introduce myself as “Andy’s coworker” more often!

Happy hacking!


A merry gang

Last Wednesday, FLIB’s kick-off meeting took place at Oblong’s Barcelona lab, with a dozen deliciously crazy people attending.

Due to lack of time and seriousness on my side, the main talk was given by Jos, who gave us an overview of his work with PLT Redex to model the lambda calculus. Redex provides an embedded DSL to create context-sensitive term-rewriting systems, if you’ll pardon my buzzwording. In a hand-waving nutshell, term-rewriting systems are syntax-rules on steroids: one specifies a set of rules for transforming (rewriting, or reducing) terms into other terms according to their structure, possibly depending on context. Jos has a very nice example of such a system taken from GEB (Best. Book. Ever.), the MIU formal system, whose formal rules can be expressed in Redex as:

  (--> (‹symbol› ... I) (‹symbol› ... I U))
  (--> (M ‹symbol› ...) (M ‹symbol› ... ‹symbol› ...))
  (--> (‹symbol›_0 ... I I I ‹symbol›_1 ...) (‹symbol›_0 ... U ‹symbol›_1 ...))

that is, if you find a trailing I, you can append U; if the string starts with M, you can duplicate everything after it; and three consecutive Is can be reduced to a single U. Now, you can start with a given string (or axiom) and apply the rules to produce new ones (theorems). Note how the rules are contextual, and how, in general, more than one of them is applicable. Redex will handle all that for you, creating a tree with all possible reductions.

MIU reductions in Redex

Of course, there’s more to Redex than this simple example. For instance, it’s been used to provide an operational semantics for R6RS. Jos’ work is somewhere in the middle: while the reduction rules in lambda calculus are even simpler than in MIU, issues of scope quickly complicate things; moreover, Jos explores classical topics in lambda calculus, such as reduction to normal form, fixed point combinators or Church numerals, to name a few, always using Redex (the staggering conceptual richness embodied by the humble premises of lambda calculus always amazes me). All in all, a beautiful 32-page paper, with accompanying code, that serves as a nice hands-on introduction to both lambda calculus and Redex, and which you can get at Jos’ homepage.

After the presentation, we devoted some time to talk about the future of FLIB. Monthly presentations, lightning talks on demand and a reading group. I like the latter a lot, because having a physical meeting among readers every month is an excellent way of keeping reading groups alive. We’re still deciding on our first book, but PLAI followed by LiSP seems to be gaining momentum right now. Another nice thing about the reading group is that it opens the possibility for people not able to come to the meetings to participate: just subscribe to our mailing list and join the discussions about the book du jour.

And then we just sat down around a table with some beer and snacks, and started talking about life and programming languages. I found it very stimulating because of the variety of people’s backgrounds: we had guys from academia and industry; ones just starting their graduate courses, others with twenty years of teaching under their belts; people from several different countries; schemers obsessed with call/cc, smalltalkers, python experts, C++ loathers and programmers who secretly enjoy it, perlmongers and ruby or haskell aficionados. But all of them people with a passion for programming: i think everybody was happy to have found a bunch of keen souls.

Perhaps it was inevitable that much of the discussion gravitated towards our frustrations as programmers and teachers, given the sad state of computer science in both industry and academia, and the insurmountable barriers to adoption faced by the kind of languages we like. But, with that out of the way, here’s hoping (as expressed by Andy after the meeting) that future meetings will concentrate on brighter fields.

A great evening, and no mistake. I hope you’ll join the fun next July 22nd!

flib

Update: We’ve moved the date of our first meeting to June 17th, so you’re still in time to join us! If you want to follow our adventures, you can also ask for an invitation to our mailing list.

The other day, Andy and i met Jos, an experienced schemer who lives near Barcelona, with the idea of having lunch, talking about Scheme, and creating a Scheme Users Group. After a bit of discussion, we agreed to widen the group’s scope, and start what we’re calling Fringe Languages In Barcelona (FLIB). The plan is to conduct periodic meetings with a main presentation followed by some lightning talks (the latter were a complete success at ILC, and we’d like to try and see how they work for us), with as much discussion interleaved as we see fit. We’ll have some refreshments available and, since we’re meeting in the very center of the old city, visits to pubs or a restaurant for dinner and further socializing are to be expected.

As i said, we’re expecting much discussion about Scheme and Lisp, but we’re not ruling out other fine languages by any means. For instance, the talk for the inaugural session (now scheduled for June 17th, 7:30 pm) is entitled The implementation of FUEL, Factor’s Ultimate Emacs Library, and it will include a short introduction to Factor (yes, i am the victim, er, speaker). Jos will come next, the same day, with a lightning talk about PLT Redex. We have free slots for more lightning talks: you are invited not only to come, but to give one if you’re so inclined. This being our first meeting, there will also be some time for logistics and organisation.

So, if you’re near here by then, by all means, come in and join the fun:

Calle del Pi 3 Principal Interior (first floor)
Barcelona

It’s not really needed but, if you’re thinking about coming, sending me a mail beforehand will help us make sure that we’ve got enough food and drinks.

We’re looking forward to getting FLIB started, and we’re sure that at least six more fringers are coming! Don’t miss it!

Geiser

Update: For a few months now, Geiser has had its own home on the interwebs.

I hope you’ll pardon a shameless plug of one of my latest hacks, Geiser, a new Emacs-Scheme interaction mode.

After having lots of fun implementing Fuel, i was left with a lot of Elisp code that, i realized, could be easily reused for languages other than Factor. I also decided that it was high time to stop whining about Scheme environments not being dynamic enough and do something about it. As they say, talk is cheap.

Thusly, Geiser was born, and today it came of 0.0.2 age, as per the git tag in its repository.

If you know Slime or Fuel, you know what Geiser aims at: a pleasant, live interaction with Scheme without leaving Emacs. This first release is by no means there yet, but you’ll already find some joy using it: module-aware and incremental evaluation, jumping to definitions, dynamic symbol completion and automatic signature display in the echo area are the highlights.

Currently, Geiser supports two Scheme implementations: Guile and PLT. Yeah, i like both (and several others). It’s been really fun discovering how to tweak them to obtain the metadata i wanted, and their developers and users have been helpful, kind and patient to no end. A big thanks to them (you know who you are), and my promise that i’ll keep nagging.

Both Guile and PLT have given me many pleasant surprises. Guile is the most common-lispy Scheme around, and the recent hard work and improvements by the likes of Andy Wingo are making much of the criticism it memetically receives moot. And PLT is by no means the rigid system i thought it was, while retaining all the great features i knew it had. Try any of them, with or without Geiser: they’re real fun.

Back to Geiser: this being an alpha release, there are no screencasts or real documentation… the code just escaped, leaving a blood trail, you know. Maybe one day it’ll have a webpage, a mailing list and even users. In the meantime, if you’re brave enough, the README will hopefully do; and, of course, the code:

  git clone git://gitorious.org/geiser/mainline.git

(If you’re not brave enough, but curious, the code is browsable here.)

Needless to say, all kinds of comments, criticisms and laundry lists are welcome and, actually, encouraged.

Happy scheming!

From my cold, dead hands

Back at the PLT Scheme blog, Matthias Felleisen has just posted an entry on the rationale behind the limitations imposed on DrScheme’s REPL, namely, the inability to evaluate individual expressions. If you’ve read this blog before, you won’t be surprised to know that, as much as i respect Matthias, i totally disagree with his rationale. Well, not totally: let me explain. Matthias’ point is that allowing on-the-go redefinitions often leads to inconsistent states of the running image, which in turn leads to confusion and much head scratching, even when that head belongs to, say, Dan Friedman (let alone Matthias’ students’ heads, or mine, for that matter). Therefore, we’d better ban the offending feature from the development environment, right?

I’m willing to concede that evaluation of individual expressions is as dangerous as Matthias’ experience suggests, or even demonstrates. But i strongly take issue with his conclusion. Incremental development might be dangerous, but it’s also a powerful tool, and incredibly fun (all three come together most of the time, don’t they?), as any Common Lisp, Factor, Smalltalk, Scheme48 or Guile (to mention a few) hacker will tell you. As yours truly has been telling you since this blog started. Or PLT hackers, for that matter: Matthias’ post originates in this interesting email exchange, with people giving good reasons on behalf of an incremental REPL (think, for instance, of manipulating large chunks of intermediate data). I want this power. I want the fun. I’m willing to take the risk.

In my opinion, this is a good illustration of one of the points that Gerry Sussman was making in his talk on robust systems at ILC. For instance, injudicious use of generic functions will quickly paint you into a corner; but, at the same time, this sharp knife will carve for you beauties such as the scmutils library used in SICM, or help you tame evolving requirements. Should we be deprived of generics for our own good?

Now, don’t get me wrong. I immensely appreciate the warnings and advice of people who, like Matthias, have much more experience and knowledge under their belts than i’ll ever have. I’m willing (eager, even) to learn from them. But i don’t understand why warnings should become prohibitions. I’m not dissing Robby’s work implementing DrScheme’s ‘restart-the-world’ functionality: it’s great to have it there (in fact, having it there as a safety net is yet another reason to allow the incremental REPL variety). And i would have no objection if the lack of an incremental REPL were due to technical implementation details. What i fail to see is why undercutting the functionality of my tools in the name of ‘my own good’ is the right thing to do. I hated it when my parents did that to me ;-).

Sussmaniana

I’m back from ILC 09, slowly digesting all i lived there. This was my first Lisp conference and my first visit to MIT, a place marked in big red letters in the atlas of my private mythology. And it wasn’t only about places: suddenly realizing that you’re sitting next to Richard Greenblatt, or enjoying Gerry Sussman’s talks in the flesh, was quite an experience, with an almost eerie feeling attached to it.

There’s a problem in meeting your myths: real life is almost never up to the task of matching one’s idealizations. But this was not the case: i enjoyed every minute, without a tinge of disappointment to be felt. Lisp has been around for quite a while and its history is an important part of computer science’s history. That history comes to life at the ILC, and you get a chance to share it with the people who were there back in the day, the people you read about in books… i’ve wished many times i had been there, and these days in Cambridge i’ve been as close to those halcyon days as i can ever expect to be. Living history, what a feeling.

Had i to single out just one speaker, that’d have to be Gerry Sussman. I just kept finding myself resonating in a deep way with his thoughts. For instance, during the panel on the future of Lisp, conversation revolved around how to keep the language popular and apt for commercial applications. Gerry stepped in to point out that that was all very well, but that we shouldn’t forget that one of the key aspects of programming languages is the extent to which they allow us to extend our problem-solving abilities by providing new ways of expressing and talking about problems (as Dijkstra once said: Lisp has assisted a number of our most gifted fellow humans in thinking previously impossible thoughts). Dead on, if you ask me, although unfortunately nobody seemed to have anything else to add, and the debate returned to the far less interesting popularity issues (well, yes, the conference wasn’t perfect after all).

The next day we had a kind of tongue-in-cheek debate provocatively entitled Are macros a menace?. Richard Gabriel was on the wrong side, arguing along the lines that macros are akin to language design and that he’d rather not suffer the consequences of letting your average software engineer undertake such a complex task. Gerry’s intervention at this point made me nod again: if we cannot trust our software engineers to proficiently use the tools of our trade, there must be something wrong in the way we educate them; only those able to use them judiciously should get a diploma, to begin with. That’s exactly how i felt during my period as a CS teacher, as i tried, rather clumsily, to explain in one of my first rants in this blog. It feels great to be in such good company.

Then there was this unplanned mini-talk on why MIT has replaced the SICP-based introduction to programming with something about robots using Python. You can read a nice synopsis of the reasons Sussman gave in Andy’s blog (together with summaries and comments of many other talks). It was nice in a kind of sad way: at the end, while answering a question, Gerry mentioned that this new computing world was not his, and that it wasn’t one he liked [0]. ‘But’, he said, ‘that’s because we’re old farts’. Although i’m younger than him, i like the kind of world that gave rise to SICP much better, and i spent the rest of the evening feeling like a sad, old fart. I must say, however, that the ideas SICP grew upon, that simple world where you built up complexity out of small pieces and systems that you understood completely, have much to offer to new generations. We should not dismiss them in the name of modernity.

Besides those cameos, Gerry had two official appearances in the regular conference programme, namely, an invited speech on robust systems design and a lightning talk. (The latter were 5-minute-long presentations, with Guy Steele and his chronometer strictly ensuring that that was actually the case, followed by a 2-minute Q&A while the next speaker was setting up her laptop. This formula worked great most of the time, forcing the presenters to squeeze their brains in order to capture as much content, sense and, in most cases, fun as possible in such a short time. We had living confirmation that working under severe constraints is a great creativity boost.)

In the invited speech, we had the opportunity of hearing more about Gerry’s ideas on what makes a system robust. I say ‘more’ because he made public a paper on the subject some time ago and, actually, his ideas on these issues can be traced back to, for instance, the SICP video lectures, where he already lays out the general strategy for tackling the problem: in order to make a system robust, you don’t want to solve a strict, narrowly specified problem, but a whole family of them (or, if you have a very crisp specification, a class of problems in its neighbourhood). That way, when the problem to be solved varies in small ways, your whole solution accommodates the new situation by small variations. The solution is not brittle. To attain such flexible behaviour, we need to build our system upon components that are lenient about the inputs they accept and as sharp as possible in the outputs they produce. Ways to attain these goals include metalinguistic abstraction (creating a language tailored to the problem domain), use of generic interfaces, degeneracy, and non-deterministic search in the solution space.

Generic functions were nicely demonstrated with examples from the library used in SICM (and the delicious Functional Differential Geometry). We got the warning that using generics this way is dangerous. But they’re powerful, and one needs powerful methods to solve challenging problems; one needs to know what one’s doing, but that’s part of what we’re supposed to be good at, right? Sussman kept smiling and saying ‘beware, this is a very dangerous thing to do’. There was also an almost off-the-record comment to the effect that one of the missed opportunities in Lisp’s design was not defining all its primitive forms as generics (which would definitely have turned it into the most dangerous language around).

Degeneracy (using completely different ways of computing the same result) was illustrated, as is much of Sussman’s thinking on robust programming, by examples drawn from biological systems (in this talk, frogs most of the time). You can find many other examples of this sort of parallelism between computing and biological systems in the paper linked above, a line of thought that i find mixes well with the handful of metaphors i use to reason about my job as a programmer. In particular, it sort of connects with my bias towards living systems such as Lisp’s or Smalltalk’s, where one evolves a program more than designs and implements it; which in turn mixes well with the ideas of latent typing and late binding present in those highly dynamic environments (Self, Factor, and APL (as demonstrated in this game of life video (unintended pun intended)) are in the same league). Much more than with beautiful but extremely rigid ones based on static typing, such as Haskell’s. (That’s why i keep coming back to Lisp after my Haskell excursions, or why i find R6RS so disappointing. Or, if you’ll pardon my continued mixing of metaphors, why i prefer healing to performing autopsies.)

A constraint propagation system

Another avenue to flexibility mentioned in the talk is constraint propagation networks, in which multiple sources contribute to defining the values of each state variable. That way, you get the possibility of partially defined values, which can nonetheless be useful by themselves, depending on the computation you’re performing. Propagator networks also work as additive computation machines, able to refine coarse inputs into correct solutions for problems specified as a set of constraints. One of Sussman’s students, Alexey Radu, is actively working on propagators, building on work initiated back in the day by Guy Steele. You can find an extensive report and nice, working Scheme code here.
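
The report comes with real, working Scheme code; just to fix the ideas here, a toy sketch of the two core notions (mine, and a gross simplification: it elides the scheduler and the merging of genuinely partial values that make actual propagator networks tick), with cells accumulating content from several sources and propagators pushing consequences between them:

import Data.IORef

-- a cell holds partial information: Nothing until some source fills it
type Cell a = IORef (Maybe a)

newCell :: IO (Cell a)
newCell = newIORef Nothing

-- accept a contribution from one more source; a real network would
-- merge partial values, here we only detect agreement or contradiction
addContent :: Eq a => Cell a -> a -> IO ()
addContent cell v = do
  old <- readIORef cell
  case old of
    Nothing -> writeIORef cell (Just v)
    Just w
      | w == v    -> return ()
      | otherwise -> error "contradiction between sources"

-- a two-input propagator: once both inputs are known, it adds
-- f's result to the output cell, where it may meet other sources
propagate :: Eq c => (a -> b -> c) -> Cell a -> Cell b -> Cell c -> IO ()
propagate f x y out = do
  mx <- readIORef x
  my <- readIORef y
  case (mx, my) of
    (Just a, Just b) -> addContent out (f a b)
    _                -> return ()

main :: IO ()
main = do
  a <- newCell
  b <- newCell
  c <- newCell
  addContent a (3 :: Int)
  addContent b 4
  propagate (+) a b c
  readIORef c >>= print  -- prints Just 7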

Poincaré section

Finally, Gerry gave a lightning talk with yet another piece of food for thought. The gist of it was drawing our attention to the possibility of exploiting a posited parallelism between the theory and methods for solving differential equations on the one hand, and programs on the other. There’s a way of approaching the solution of a differential equation that is, if you will, algebraic in nature: one manipulates expressions algebraically to simplify them and eventually obtain a closed-form solution or, if that’s not possible, creates numerical approximations to evolve the boundary conditions in the state space as a function of discrete time steps. You end up that way with something that works as a solution but, most of the time, without a deep understanding of the traits that make it a solution: in the spirit of the robust design ideas sketched before, we should probably be asking for more qualitative information about how solutions behave as we change the boundary or initial conditions of our problem. As it happens, mathematicians have a way of analyzing the behaviour of solutions to differential equations by studying their Poincaré maps and sections, which are views into the orbits followed by the solutions in their state space. Many times, you don’t have to solve the differential equation exactly to predict its qualitative behaviour (e.g., is it bounded? is it periodic?) in state space, and to get insight into how it changes in the presence of perturbations. The analogy with computing processes is clear: most of the time, we narrow our efforts to finding, so to speak, algebraic solutions to our computing problems: the program-as-text is the analog of the process of finding an exact formula for the solution of a differential equation. What we’re missing, according to Sussman, is a way of reasoning about the qualitative features of our programs à la Poincaré, i.e., a way of reasoning about programs in the state space of their outputs, beyond the mechanistic algorithmic details. Gerry admitted that he didn’t know how or in what form this kind of reasoning would proceed, only that his hunch is that it must be possible, and he exhorted us to find the way.

ILC09 would have been worth it for those talks alone, but there was much more: don’t miss the next one!


[0] As rightly pointed out by Dan Weinreb in his comment below, Sussman endorses the changes in the new curriculum, though. His post on this issue is worth reading too.

Vintage CLOS

I’ve been playing with CLOS again these days (after finding it especially apt for one of my pet projects (more on that later)) and just stumbled upon this fun video by Daniel Bobrow, the chair of the committee that standardized CLOS in the late eighties and, to this day, a research fellow at the mythic Xerox PARC.
Daniel gives a good introduction to the Common Lisp Object System, its design trade-offs, the main ideas behind it (generic functions, classes, reflection), and talks a bit about the standards body’s work (fun to see how, back in the day, using mailing lists for discussion was considered innovative).
Besides its vintage charm (don’t miss the Q&A session at the end), the talk works as a good little introduction to CLOS (despite occasional inaccuracies), which can be supplemented by this nice article by Bobrow, Gabriel and White.
Enjoy!
