On Lisp On Line and new PLAI

I’m sure i don’t need to review or recommend Paul Graham’s On Lisp, nor tell you that it’s available from the author’s site in PDF format. However, you may be (as i was) unaware that there’s an online HTML version, complete with a search box and settable CSS. I also routinely install it in texinfo format, for easy reference inside Emacs.

On a related note, i’ve also recently updated my copy of Shriram Krishnamurthi’s Programming Languages: Application and Interpretation, a wonderful book that Shriram uses in his Programming Languages course at Brown University. It covers almost every single interesting corner of PL-land, including interpreters, laziness, recursion, continuations, garbage collectors, type theory, declarative programming and macros. The book opens with an invitation:

I think the material in these pages is some of the most beautiful in all of human knowledge, and I hope any poverty of presentation here doesn’t detract from it. Enjoy!

Needless to say, poverty of presentation is nowhere to be seen in this excellent work.

Happy reading.


The Art of Lisp & Writing

I wasn’t aware that The Art of Lisp & Writing, by Richard P. Gabriel, was available online. I like this essay so much that it justifies its own entry. As i mentioned in The Joy of REPL, it was written as the foreword to Successful Lisp, a pretty nice book on Common Lisp, but Richard’s thoughts apply to any Lisp and to many other programming languages.

While i’m at it, there are many other essays by RPG worth reading, including his classic series Worse is Better, his (and Guy Steele’s) history The Evolution of Lisp (PDF), or the little jewel The Why of Y, for those of you really wanting to understand what recursion is about.

Richard was also one of the designers of CLOS, and has a number of interesting papers on the CLOS specification.


Programmers love writing (and mocking) tests

Aggressive unit testing (also known, by those willing to earn some easy money, by the buzzword test-driven development) is about the only practice from XP that i embraced happily in my days as a dot com employee (it was around 2000, and i was working for one of those companies that got lots of funding from avid but hopelessly naive investors to do… well, to do basically nothing).

Those were my long-gone Java days, and i got test-infected all the way down. That writing tests is a good practice is hardly news to any decent programmer, but what i especially liked about putting the stress on writing lots of tests was discovering how conducive the practice is to continuous code rewriting (count that as the second, and last, extreme practice i like). I had yet to discover the real pleasures of bottom-up development in REPL-enabled languages, but even in Java my modus operandi consisted basically in starting with concrete code (sometimes even a mere implementation detail) and making the application grow from there. Of course, some brainstorming and back-of-the-envelope diagramming was involved, but the real design, in my experience, only appears after fighting for a while with real code. The first shot is never the right one, nor the second, for that matter. The correct design surfaces gradually, and i know i’ve got it right when unexpected extensions to the initially planned functionality just fit in smoothly, as if they had been foreseen (as an aside, i feel the same about good maths: everything finds its place in a natural way). When you work like that, you’re always rewriting code, and having unit tests at hand provides the reassuring feeling that you’re not throwing the baby out with the bathwater during your rewriting frenzies.

Of course, there’s also a buzzword-compliant name for such rewritings (refactoring), and you can spend a few bucks to read some trivialities about all that. (Mind you, the two books i just despised have been widely acclaimed, even by people whose opinion i respect, so maybe i’m being unfair here—in my defense, i must say i’ve read (and paid for) both of them, so, at least, my opinion has cost me money.)

Anyway, books or not, the idea behind the JUnit movement is quite simple: write tests for every bit of working code you have or, if you’re to stand by the TDD tenets (which i tend not to do), for every bit of code you plan to write. As is often the case, the Java guys were not inventing anything really new: their libraries are a rip-off of the framework proposed by Kent Beck for Smalltalk. Under the SUnit moniker, you’ll find it in every single Smalltalk implementation these days. A key ingredient in these frameworks’ success is, from my point of view, their simplicity: you have a base test class from which to inherit basic functionality, and you extend it to provide testing methods. Languages with a minimum of reflective power will discover and invoke those methods for you. Add some form of test runner, and childish talk about an always-green bar, and you’ve got it. The screenshot on the left shows the new SUnit Test Runner in one of my Squeak 3.9 images, but you’ll get a better glimpse of what writing unit tests in Squeak feels like by watching this, or this, or even this video from Stéphane Ducasse’s collection.

Of course, you don’t need an object-oriented language to have a unit testing framework. Functional languages like Lisp provide even simpler alternatives: you get rid of base classes, exchanging them for a set of testing procedures. The key feature is not a graphical test runner (which, like any graphical tool, gets in the way of unattended execution: think of running your test suites as part of your daily build), but a simple, as in as minimal as possible, library providing the excuse to write your tests. Test frameworks are not rocket science, and you can get one as cheap as it gets: for instance, in C, i’m fond of MinUnit, a mere three-liner:

/* file: minunit.h */

/* if the test expression fails, bail out of the current test
   function, returning the (error) message */
#define mu_assert(message, test) do { if (!(test)) return message;  \
                                    } while (0)

/* run a test function and count it; a non-NULL result is a failure
   message, which gets propagated to the caller */
#define mu_run_test(test) do { char *message = test(); tests_run++; \
                               if (message) return message; } while (0)

extern int tests_run;

(Let me mention in passing, for all of you non-minimalistic C aficionados, the latest and greatest (?) in C unit testing: libtap.) Add to this a couple of Emacs skeletons and an appropriate script and you’re well on your way towards automated unit testing. From here, you can get fancier and add support for test suites, reporting in a variety of formats, and so on; but, in my experience, those facilities are, at best, trivial to implement and, at worst, of dubious utility. It’s the quality and exhaustiveness of the tests you write that matters.

Lisp languages have many frameworks available. The nice guys of the CL Gardeners project have compiled a commented list of unit testing libraries for Common Lisp. In Scheme you get (of course) as many testing frameworks as implementations. Peter Keller has written an R5RS-compliant library that you can steal from Chicken. Noel Welsh’s SchemeUnit comes embedded in PLT, with the Emacs templates already written for you (or, if your mileage varies and you’re fond of DrScheme, you can have a stylized version of the green bar too). Personally, i don’t use PLT, and i find Peter’s implementation a bit cumbersome (meaning: too many features that i don’t use and that clutter the interface). Thus, my recommendation goes to Testeez, by Neil van Dyke of quack fame. Testeez is an R5RS (i.e., portable), lightweight framework that is as simple as possible. Actually, it’s simpler than possible, at least in my book: in my opinion, when a test succeeds it should write nothing to the standard (or error) output, just like the good old unix tools do. I only want verbosity when things go awry; otherwise, i have better things to read (this kind of behaviour also makes writing automation and reporting scripts easier). So, as a matter of fact, i use a hacked version of Testeez with customizable verbosity levels. It’s the same version that we use in Spells, and you can get it here. Also of interest are Per Bothner’s SRFI-64 (A Scheme API for test suites) and Sebastian Egner’s SRFI-78 (Lightweight testing), both including reference implementations.
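To give an idea of how little is needed, here’s a minimal sketch of such a framework in the spirit (though not the letter) of Testeez and friends: a couple of plain procedures, silent on success and verbose on failure. All the names are made up for the occasion, and it’s plain R5RS:

(define failures 0)

(define (check name expected thunk)
  (let ((got (thunk)))
    (if (not (equal? got expected))
        (begin
          (set! failures (+ failures 1))
          (display "FAIL ") (display name)
          (display ": expected ") (write expected)
          (display ", got ") (write got)
          (newline)))))

;; usage: nothing is printed unless something breaks
(check "addition" 4 (lambda () (+ 2 2)))
(check "reverse" '(3 2 1) (lambda () (reverse '(1 2 3))))

From there to a usable library there’s little more than bookkeeping: a success counter, a final report, perhaps a macro to spare you the explicit lambdas.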

Lisp testing frameworks abound for a reason: they’re extremely useful, yet easy to implement, which makes them good candidates for non-trivial learning projects. A nice example can be found in Peter Seibel’s Practical Common Lisp (your next book if you’re interested in Common Lisp), which introduces macro programming by implementing a decent testing library. In the Scheme camp, Noel discusses the ups and downs of DSL creation in an article describing, among other things, the SchemeUnit implementation. Worth reading, even for non-beginners.

Once you settle on a test framework and start writing unit tests, it’s only a question of (short) time before you’re faced with an interesting problem, namely, really writing unit tests. That is, you’re interested in testing your functions or classes in isolation, without relying on the correctness of the other modules you’ve written. But of course, your code under test will use those other modules, and you’ll have to write stubs: fake implementations of the external procedures that return pre-cooked results. In Lisp languages, which allow easy procedure redefinition, that’s usually simple to arrange. People get fancier, though, especially in object-oriented, dynamic languages, by using mock objects. The subject has spawned its own literature and, although i tend to think mocks unduly complicate a simple problem, reading a bit about mockology will show you the kind of things that can be done when a reflective run-time is available. Smalltalk is, again, a case in point, as Sean Mallory shows in his stunningly simple implementation of Mock Objects. Tim Mackinnon gets fancier with his SMock library, and has coauthored a very interesting article entitled Mock Roles, Not Objects, where mock objects are defined and refined:

a technique for identifying types in a system based on the roles that objects play. In [9] we introduced the concept of Mock Objects as a technique to support Test-Driven Development. We stated that it encouraged better structured tests and, more importantly, improved domain code by preserving encapsulation, reducing dependencies and clarifying the interactions between classes. [...] we have refined and adjusted the technique based on our experience since then. In particular, we now understand that the most important benefit of Mock Objects is what we originally called interface discovery [...]

An accompanying flash demo shows SMock in action inside Dolphin Smalltalk. The demo is very well done and i really recommend taking a look at it, not only to learn to use Mock Objects, but also as a new example of the kind of magic tricks allowed by Smalltalk environments. Albeit not as flashy, Objective-C offers good reflective features, which are nicely exploited in OCMock, a library which, besides taking advantage of Objective-C’s dynamic nature, makes use of the trampoline pattern (close to the heart of every compiler implementer) “so that you can define expectations and stubs using the same syntax that you use to call methods”. Again, a good chance to learn new, powerful dynamic programming techniques.
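Back in Lisp land, and to make the redefinition trick mentioned above concrete, here’s a tiny sketch of stubbing in Scheme (all the names are hypothetical; fetch-rate stands for whatever external module your code under test depends upon):

;; the external dependency: normally talks to the network
(define (fetch-rate currency) (error "no network in tests"))

;; the code under test
(define (convert amount currency) (* amount (fetch-rate currency)))

;; in the test, shadow the dependency with a canned result...
(define saved-fetch-rate fetch-rate)
(set! fetch-rate (lambda (currency) 2))
(convert 21 'eur) ; => 42, no network involved

;; ...and restore the real thing afterwards
(set! fetch-rate saved-fetch-rate)

No frameworks, no class hierarchies: a couple of set!s and you’re testing in isolation.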

As you can see, writing tests can be, a bit unexpectedly, actually fun.


Persistent Joy

In the comments section of The Joy of REPL, a reader poses an interesting question: how do i make my joy persistent? Or, in her words,

Dumb question – you are happily programming in the environment, and the lights go out. Have you lost your state?
How do you save “source” code? I’m interested from the angle of irb, as I like ruby. I still think in the mode of writing the source in an editor, checking it in, etc.
I can’t seem to imagine this environment in terms of day to day work, esp with a development group.

Managing persistence depends largely on your development environment, but of course the primary method is the traditional one: you write files. You don’t need to literally type your code at the interpreter’s prompt. Any decent editor will let you send to the interpreter expressions written (and, eventually, saved) in any editing buffer. Emacs excels in this respect, especially if you’re on Lisp and use Slime (or its cousin slime48, which works on scheme48). You can see it in action in Marco Baringer’s excellent tutorial video (bittorrent available here). The important thing to keep in mind is that you need the ability to evaluate individual expressions (as opposed to loading files as a whole), and this is made possible by the joint work of your language’s runtime support and your editor. I’m not a Ruby user, but i bet Emacs or vim, among others, give you similar facilities. That said, i would be surprised if they were as impressive as Slime’s, because Slime is cheating: it interacts with a programming system (namely, Common Lisp’s) that does its very best to allow an incremental, organic development style. How so?

Well, as soon as you go beyond writing little toy snippets and into serious (as in bigger) programs, you’ll need some kind of module system, in the sense of a way of partitioning your namespace to avoid name collisions. Every language out there provides such a mechanism in one way or another (and Scheme famously provides as many ways as there are implementations; more on this below). Therefore, to keep enjoying our interactive way of life, we need the interpreter and the editor to cooperate to evaluate our code in the correct namespace. Common Lisp’s module system is based on packages. Each symbol known to the system belongs to one of them, and it is customary to begin your files with a form that tells whoever is interested what package the following code belongs to… and the editor/interpreter team is definitely interested: expressions sent from a buffer to the REPL are evaluated in the correct context. Again, i don’t know whether Ruby or Python offer this synergistic collaboration, but i know that you definitely need it to attain the Joy of REPL.

Common Lisp is not unique in this regard. In the Scheme world, scheme48’s module system was also designed with interactive, incremental development in mind, and taking advantage of it in Emacs required an almost straightforward (but, by all means, worthy) effort (thanks Taylor and Jorgen). As an aside, this is what makes s48 my preferred Scheme and keeps me away from otherwise remarkable systems like PLT. (And this is why the current R6RS standard module system proposal is a show-stopper: if you happen to have a friend on the committee, please write him and include a link to Taylor Campbell’s alternative proposal and its accompanying rationale.)
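For the curious, here’s what a scheme48 module (structure, in its parlance) definition looks like, in a toy example of my own devising; it usually lives in a separate configuration file:

(define-structure greetings (export greet)
  (open scheme)
  (begin
    (define (greet name)
      (string-append "hello, " name))))

Once the configuration is loaded, ,open greetings at the REPL brings greet into scope, and slime48 takes care of evaluating your buffer’s expressions inside the right package.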

Thus, when the lights come back, you recover your previous environment by reloading your files. Good module systems provide means to streamline this operation, typically (but not always) by storing the package definitions in separate files. But this is still a nuisance, isn’t it? I must wait for all my files to be reloaded and maybe byte-compiled… Don’t despair, there are better ways. Most CL implementations and several Schemes (MIT/GNU Scheme and, again, scheme48 come to mind) allow you to save your complete state, at any time, in what is called an image file. This image contains a binary snapshot of the interpreter’s state, and you can reload it at any later time. Being a binary representation, it will come to life blazingly fast. Besides Lisp, Smalltalk is the paradigmatic (and possibly pioneering, but i’m not 100% sure on this) image-based language: for instance, in Squeak, the only way to launch the environment is by loading a previously saved image, which contains detailed information about your previous state (including the graphical environment). In this sense (and many others), Smalltalk is a dynamic programmer’s dream come true.
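For instance (from memory, so double-check your implementation’s manual), dumping and resuming state looks like this in scheme48 and MIT/GNU Scheme:

;; scheme48, at the REPL: the string is a comment shown on resumption;
;; restart later with: scheme48 -i work.image
> ,dump work.image "snapshot before the lights go out"

;; MIT/GNU Scheme uses a procedure instead, producing a "band";
;; resume with: mit-scheme --band work.com
(disk-save "work.com")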

Image files make things even better, but not perfect: you still need to save your state every now and then. In an ideal world, persistence should be automatic, managed behind the scenes by the system, even by the operating system, just like the garbage collector we have come to know and love in our dynamic environments manages memory for us. This nirvana is called Orthogonal Persistence but, unfortunately, we’re not there yet. I first heard of OP from the guys of the Tunes project, where François-René Bân Rideau and friends have envisioned what i view as the ideal computing environment. Unfortunately, to this day it remains in the Platonic realm of ideals (but this doesn’t prevent their maintaining one of the best online knowledge bases on computer science). Another interesting project in this regard, with actually running code that may interest the pythonistas among you, is Unununium, an operating system built around the idea of orthogonal persistence. Finally, in this context it is also worth mentioning again Alan Kay’s brainchild Squeak, which provides an environment that, without being an entire OS, in many ways isolates you in a wonderland of its own.


The Joy of REPL

Back in the old days i was a macho C++ programmer, one of those sneering at Java or any other language but C, willing to manage my memory and pointers and mystified by the complexity of the template syntax (it was difficult and cumbersome, ergo it had to be good). Everyone has a past.

Things began to change when i decided to add Guile extensibility to GNU MDK. I was using the project as an excuse to learn everything one has to learn to write free software, from parsers with flex and bison, to documentation generators like texinfo and doxygen, to localisation via gettext. The next thing was scriptability, and in those days Scheme was still the way to extend your GNU programs (last time i checked, GNOME was full of XML, a windows-like registry and, oh my, C#… no Scheme (or good taste) to be seen).

So, when i first encountered Scheme, i was high on static type checking, object-oriented programming in its narrow C++ flavour, and all that jazz. I didn’t understand immediately what was so cool about having an interpreter, and defining functions without the compiler checking their signature at every single call made me feel uneasy. I was told that i still had strong type checking in Lisp, only deferred to run time instead of happening in the apparently safer compile phase. I didn’t get it. Thank god, SICP was so much fun that i kept on learning, and i kept wondering for a while what was so great about interpreters and dynamic typing.

Problem was, i was writing C programs in Scheme. In a compiled language (a la C) and, to some degree, in any statically typed one, your code is dead. You write pages and pages of inert code. You compile it. Still dead. Only when you launch that binary does it come to life, and then it lives elsewhere, beyond your reach. Admittedly, i’m exaggerating: you can reach it, in a convoluted way, via a debugger. But still. A debugger is an awkward beast, and it will only work with the whole lot: all your program compiled, linked and whatnot.

Enter a dynamic language. Enter its REPL. When you have a, say, Lisp interpreter at your disposal, you don’t write your code first and load it later (that’s what i was doing at first). You enter your code piecewise, function by function, variable by variable, at that innocent-looking prompt. You develop incrementally and, at every single moment, your objects and functions are alive: you can access them, inspect them, modify them. Your code becomes an organic creature, plastic. It’s almost not programming, but experimenting.
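A short, made-up session shows the flavour: definitions are born, poked at and replaced without ever leaving the prompt, and the state they’ve built survives every change:

> (define greetings '())                       ; a live variable
> (define (greet name)
    (set! greetings (cons name greetings))
    (string-append "hi, " name))
> (greet "world")
"hi, world"
> (define (greet name)                         ; redefine on the fly...
    (set! greetings (cons name greetings))
    (string-append "hello again, " name))
> (greet "world")
"hello again, world"
> greetings                                    ; ...accumulated state intact
("world" "world")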

Maybe you’re raising a skeptical eyebrow. Maybe you have one of those modern visual-something debuggers that lets you modify your compiled code on the fly and continue running it using the new definitions, and you think that’s what i’m talking about… Well, no, sorry, that’s only part of what i’m talking about. To begin with, with the debugger you can only continue executing your program; at the REPL, i can do whatever i want. But that’s not all. We are talking about a dynamically typed language, and that means that me and my little REPL have much more leeway to modify the living code, and thus much more margin to grow and evolve it.

At the end of the day, dynamically typed languages give me freedom. Programming is a creative process and greatly benefits from that freedom. At first, abandoning the safety net provided by static typing was a little bit scary, but as i grew up as a programmer i felt more and more confident, and gradually the initially uneasy feeling morphed into joy. The joy of REPL.

Richard P. Gabriel has done a far better job of beautifully conveying what i’m trying to express in his excellent introduction to David Lamkins’ book Successful Lisp, entitled The Art of Lisp and Writing. Unfortunately, i haven’t found it online; you can read its first few pages in amazon.com’s “look inside this book” section for the book. He does it again in his essay Do Programmers Need Seat Belts?. Paul Graham has famously argued in favour of bottom-up development in many of his essays, especially in Programming Bottom-Up:

It’s worth emphasizing that bottom-up design doesn’t mean just writing the same program in a different order. When you work bottom-up, you usually end up with a different program. Instead of a single, monolithic program, you will get a larger language with more abstract operators, and a smaller program written in it. Instead of a lintel, you’ll get an arch.

Finally, please note that i’m well aware that the static vs. dynamic typing debate is still open, and that decent type systems like those in Haskell and ML have, arguably, much to offer on the way to solid software engineering. Type theory also has a powerful and beautiful mathematical foundation. The above is just my gut feeling and current position on these issues, and i don’t pretend to have backed it with solid technical argumentation. Nor was that my goal. I’m more interested here in programming as a creative activity than as engineering.


Sketchy LISP Vol. II

Sketchy LISP Vol. II – Reference is available. From the introduction:

Sketchy is an interpreter for a purely applicative dialect of Scheme. It may be considered an implementation of pure LISP plus global definitions, first-class continuations and input/output functions. Like its first volume, this part focuses on the functional aspects of the Scheme language. While the first volume provides a step-by-step introduction to Scheme in general, this part describes the Sketchy subset as a formal system for re-writing terms. It introduces semi-formal definitions for Scheme data, programs, a small set of primitive functions, and a set of rules for reducing purely applicative expressions to normal forms.

As you can see, this second volume contains more advanced stuff, and gives you yet another opportunity to dig deeper into abstract thinking. I’ve just ordered my copy, but the full text is available online if you’d rather save twelve bucks.



Schemes i have seen

Recently, there have been several releases of non-mainstream Scheme systems (for whatever value mainstream has in the context of Scheme implementations). All of them have a long history and are developed by people high up in the community’s iconography. They are also very portable and, hence, you’ll probably be able to play with them, learn and take advantage of their unique features (both as a programmer and as an implementor: the sources are there in the open too).

The Gambit Scheme System, by Marc Feeley, includes an interpreter and a compiler that uses C as its intermediate language. Among many interesting features (and the coolest logo), it has the ability to produce standalone executables, an often-requested feature. (I’m personally more interested in how good its interpreter and module system are at providing a truly dynamic development environment, but that’s another story.) Gambit features an extremely efficient thread system, capable of supporting millions (sic) of concurrent processes. Another nifty feature is its readtables, akin to Common Lisp’s reader macros, which make it an excellent choice if you’re into the parsing business or simply want to have a little extra fun. My only nit (and the reason i’m not trying to include Gambit support in Spells) is that its support for SRFIs is relatively weak: are you looking for an interesting and fairly doable hacking project?

Update: As it turns out, it’s better than i thought. One of the Gambit developers has pointed me to this quick-and-dirty, yet convenient, hack that, when run in Gambit’s installation directory, will automagically install the indispensable SRFI-1.

Stklos also released a new version this January. After lying dormant for some time, development seems to be progressing at full steam lately. Stklos is the successor of Stk, initiated by Erick Gallesio in the early 90s, and was pointed out by RMS as “the Tcl substitute” in his famous Why you should not use Tcl flamewar. Unfortunately, the FSF later chose Guile as the Tcl substitute instead of this very interesting implementation, whose current incarnation includes some unique offerings. To begin with, Stklos features an efficient and powerful object system based on CLOS, another of my CL-envies. Also of note is that it comes with Gtk+ integration: follow the link to see some nice screenshots with source code. Personally, and with an eye on the s48-worlds project, i also find interesting its ability to install extensions downloaded from public repositories. Finally, its SRFI support is really excellent. On the downside, well, its logo is not so cool, is it?

The Larceny Project has just announced their Operation Drop-Kick release of Petit Larceny, a portable Scheme-to-C compiler which now supports OS X, GNU/Linux, Windows and Sparc/Solaris. The Larceny project was initiated by Will Clinger back in 1991, and has a strong focus on garbage collection and compiler optimisations. I have not really used it, but my uninformed opinion is that Larceny will mainly be useful to those of you doing research in (or with a strong interest in) those areas.


Counting by lambdas

One of the nice things about functional languages is that you’re closer to the mathematical underpinnings of computer programming. Now, maybe you don’t like maths, but then you’ll have to learn to live with a necessary evil, for trying to be serious about programming without taking the plunge into abstract thinking is like wanting to be an astronomer without writing down a single equation. In my experience, both in industry and as a CS teacher, the capacity for abstraction is the most useful and, unfortunately, hardest-to-find quality when it comes to programming.

I entered the functional programming world some years ago through a golden gate: SICP. It was an amazing journey full of magical moments: recursion, higher-order functions, generics, lazy evaluation, metacircular interpreters, compilers… a journey i have repeated many times since. I still watch a SICP lecture now and then instead of taking my TV to the repair shop.

Among those moments, one that i specially recall is my first encounter with Church numerals. I remember staring blankly for quite a while at the code in exercise 2.6:

(define zero (lambda (f) (lambda (x) x)))

(define (add-1 n)
  (lambda (f) (lambda (x) (f ((n f) x)))))

And then something clicked in, and i’ve been amazed ever since. Of course, Church numerals are just the tip of the iceberg: from there i went to learn Lambda Calculus, and realized that there were powerful and deep principles at play underneath that fun hacking hobby of mine.
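In case you want to watch the click happen, here’s one way of spelling out the first few numerals and convincing yourself that they behave like numbers (the conversion procedure is mine, not SICP’s):

;; the first numerals by hand: a numeral n means "apply f n times"
(define one (lambda (f) (lambda (x) (f x))))     ; (add-1 zero)
(define two (lambda (f) (lambda (x) (f (f x))))) ; (add-1 one)

;; addition without add-1: m applications of f on top of n of them
(define (add m n)
  (lambda (f) (lambda (x) ((m f) ((n f) x)))))

;; back to plain integers, by counting with a concrete f and x
(define (church->number n) ((n (lambda (k) (+ k 1))) 0))

(church->number (add two two)) ; => 4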

Things can get pretty hairy from here. If you think the above lambda tricks are old hat and not worth such a fuss, take a look at Oleg’s impressive fireworks with the Lambda Calculus, Scheme and Haskell, including negative numbers, subtraction and division: i’m sure you will find them much more fun.

But if you’d rather follow a gentler path through this magic, the best introduction i’ve read is Chris Hankin’s An Introduction to Lambda Calculi for Computer Scientists. Or, if you prefer online stuff, try this javascript-enabled lambda tutorial. From there, you can go to… well, to many places. For instance, McCarthy’s original paper on Lisp, or any of the Lambda Papers by Steele and Sussman. And of course, if you want to get to the very marrow of it, Barendregt’s shorter Introduction to the lambda calculus (free PDF) and the definitive The Lambda Calculus, its Syntax and Semantics will keep you busy for quite a while. Enjoy!


Dynamic OS X Programming

I am very fond of Mac OS X’s graphical interface. As a GNU/Linux or FreeBSD user, i despise GUIs and usually work in a mostly text-based environment powered by Emacs running inside a urxvt terminal. Things change dramatically when i enter OS X: there i discovered that it’s not that i dislike GUIs, but that i dislike ugly GUIs (although the jury is still out on where i’m more productive).

Thus, OS X is the only environment where i feel like programming a GUI. From a programmer’s viewpoint, things also start out better: one can go native and avoid C and C++. Cocoa’s native language is Objective-C, which happens to be a dynamic language. That is, one has a runtime where dynamic objects live, good (but not great) reflective capabilities, and a meta layer that lets you, for instance, access class and metaclass objects, all at runtime. This allows (besides the fun of programming) funny tricks, like the ability to interact with running applications from the outside, in what constitutes an incredibly flexible plug-in system.

Probably the description above rang a bell: Smalltalk. Objective-C is a scaled-down Smalltalk in many ways. The good news is that you can have the real thing: F-Script is a Smalltalk dialect specifically designed to script Cocoa applications. It lets you inspect any running Cocoa application, and manipulate the objects it contains in real time, so to speak. Like any decent Smalltalk, it comes with a powerful object browser, which you can admire in all its beauty below. Moreover, F-Script can be embedded in your Cocoa application, acting as an excellent extension language, way cooler than Apple’s default scripting language. A beautiful application taking full advantage of these abilities is ObjectiveCLIPS, a powerful rule-based development environment for Cocoa.

But still, to my knowledge, you cannot program a whole application in F-Script and, besides, Smalltalk is all very well, but it’s not Lisp. Support for Lisp in the Cocoa world is not as good, but it shows promise. First, there’s OpenMCL, an excellent Common Lisp implementation for the Mac and other Unix systems that comes with a Cocoa bridge. It includes a demo Cocoa IDE to get you started, and people have gone as far as creating games with it.

Until recently, poor Schemers were left out in the cold. Enter Chicken and its Objective-C egg. Support is still rudimentary, but there’s already a nice tutorial on writing a currency converter, the default Cocoa Hello World app.

Probably, it’s still too hard to write a well-integrated Cocoa application using Lisp, but i think it’s just a matter of time. Happy hacking!


Down to the metal: my other CAR is a CDR

Surprisingly, one of the most frequent complaints of people new to Lisp is that CAR and CDR are weird names. Well, to begin with, FIRST and REST may sound just as weird to a non-English speaker, and they’re longer. In the second place, you can always DEFINE them away to your heart’s content (although, admittedly, you cannot (easily) define them away in other people’s code). Besides, it’s not at all clear whether CAR and CDR really deal with lists (in which case you’d call them first/rest or head/tail) or pairs (first/second?), as this discussion in comp.lang.scheme shows.

Sometimes, CAR and CDR are sighed upon as a minor evil and a concession to tradition and old lore. That misses an important point: composability. Try to write the equivalent of CAADR using your new definitions: maybe you’ll change your mind about their inconvenience. All in all, i think that if we didn’t have CAR and CDR, we would have invented them.
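A quick illustration of the point, with a hypothetical head/tail renaming:

(define head car)
(define tail cdr)

(define lst '((1 2) (3 4) (5 6)))

;; the composed accessor reads as a single word...
(caadr lst) ; => 3

;; ...while the friendlier names must be spelled out in full
(head (head (tail lst))) ; => 3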

That said, of course those funky names have historical roots. As you probably know, they stand for Contents of the Address/Decrement Register. The first Lisp was implemented on a room-sized IBM 704. You have a snapshot of this beast below (i wanted to show it to you, hence this entry): click on it for a big, nice image, or follow this link (posted by Anton van Straaten in comp.lang.scheme) for a lot more. You can also browse a photographed copy of (part of) the IBM 704 Users Manual. For a machine of that size, the manual is amazingly slim. Those were simpler times! On page 8 you’ll find the origin of our funny names:

Type A instructions use two 15-bit fields (decrement and address) containing numbers in the octal range 00000 to 77777. [...] Bits 21-35 are called the address part of an instruction [...] Bits 3-17 are called the decrement part of an instruction because they may represent a number subtracted from the contents of an index register.

There you’ve got it. As kindly pointed out by Paul in the comments below, more information on the 704 is available here, and you can learn more about the first Lisp in this site.
