Conduits to better Common Lisp packages

Common Lisp’s package system provides a simple way to avoid name clashes by means of separate namespaces. No more, no less. Although they have their shadowy corners, as illustrated in this excellent tutorial by E. Gat, packages are very easy to use and, most of the time, get the work done. Basically, think of a package as a map from strings to symbols, with associated lookup functions, and bindings resolved at read time.

But, simple as they are, packages in CL are rather limited when compared with full-blown module systems: there is no standard, convenient way to extend, combine or parametrise them. Python modules, for instance, are dictionary-backed objects (and, as such, directly manipulable as, say, message receivers or function arguments). Languages in the ML family provide powerful module manipulation facilities, including functors, that is, parametric modules that take modules as arguments and generate new ones (see, for instance, a sample in Alice ML). This kind of trick is not limited to ML: PLT’s units offer the same functionality for Scheme (although i find them cumbersome).
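
As a sketch of what that first-class flavour buys you (my own Python example, not from any of the libraries mentioned): modules can be built, extended and passed around at runtime like any other value:

```python
# Modules in Python are ordinary objects: they can be created at
# runtime, grown with new bindings, and handed around as values.
import types

def make_module(name, **bindings):
    """Build a fresh module object and populate its namespace."""
    mod = types.ModuleType(name)
    mod.__dict__.update(bindings)
    return mod

# Create a module at runtime and add a new "symbol" to it afterwards.
geometry = make_module("geometry", pi=3.14159)
geometry.area = lambda r: geometry.pi * r * r  # runtime extension

# Pass the module around as a plain value.
def describe(mod):
    return sorted(n for n in vars(mod) if not n.startswith("_"))

print(describe(geometry))  # ['area', 'pi']
```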

Back to CL, i recently discovered Tim Bradshaw’s Conduits, a nice library adding very interesting functionality to the vanilla package system. For instance, you can define variants of pre-existing packages, or extend a given package including only a subset of the symbols defined by its parent. You can also clone existing packages. Tim’s hacks also include a hierarchical package system for CMUCL that mimics Allegro CL’s.

Tim’s pages on Lisp and its obscurities are also worth a visit.


The Lisp Dictionary

William Bland announced yesterday The Lisp Dictionary, a Lisp-centric document searching facility where he has indexed the Common Lisp HyperSpec, PCL, Successful Lisp, and SBCL’s documentation strings, plus example code taken from PAIP and PCL. All mixed together in a simple and elegant interface.

(As an aside, William is the author of a very fun Linux module, Schemix, which embeds a Scheme interpreter right into the kernel, although these days he recommends Movitz, a Common Lisp x86 development platform directly “on the metal”.)

If you’re an Emacs user, here’s an elisp function (courtesy of a c.l.l poster named Nick) to search the Lisp Dictionary without leaving your editor:

(defun lispdoc ()
  "Search lispdoc.com for SYMBOL, which is by default the symbol
currently under the cursor."
  (interactive)
  (let* ((word-at-point (word-at-point))
	 (symbol-at-point (symbol-at-point))
	 (default (symbol-name symbol-at-point))
	 (inp (read-from-minibuffer
	       (if (or word-at-point symbol-at-point)
		   (concat "Symbol (default " default "): ")
		 "Symbol (no default): "))))
    (if (and (string= inp "")
	     (not word-at-point)
	     (not symbol-at-point))
	(message "You didn't enter a symbol!")
      (let ((search-type (read-from-minibuffer
			  "full-text (f) or basic (b) search (default b)? ")))
	;; URL reconstructed from context; adjust the query parameters
	;; if the site's interface differs.
	(browse-url (concat "http://lispdoc.com/?q="
			    (if (string= inp "") default inp)
			    "&search="
			    (if (string-equal search-type "f")
				"full+text+search"
			      "basic+search")))))))


On Lisp On Line and new PLAI

I’m sure i don’t need to comment on or recommend Paul Graham’s On Lisp. Nor tell you that it’s available from the author’s site in PDF format. However, you may be (as i was) unaware that there’s an online HTML version, complete with a search box and settable CSS. I also routinely install it in texinfo format, for easy reference inside Emacs.

On a related note, i’ve also recently updated my copy of Shriram Krishnamurthi’s Programming Languages: Application and Interpretation, a wonderful book that Shriram uses in his Programming Languages course at Brown University. It covers almost every single interesting corner of PL-land, including interpreters, laziness, recursion, continuations, garbage collection, type theory, declarative programming and macros. The book commences with an invitation:

I think the material in these pages is some of the most beautiful in all of human knowledge, and I hope any poverty of presentation here doesn’t detract from it. Enjoy!

Needless to say, poverty of presentation is hardly to be seen in this excellent work.

Happy reading.


The Art of Lisp & Writing

I wasn’t aware that The Art of Lisp & Writing, by Richard P. Gabriel, was online. I like this essay so much that it justifies its own entry. As i mentioned in The Joy of REPL, this was written as the foreword to Successful Lisp, a pretty nice book on Common Lisp, but Richard’s thoughts apply to any Lisp and many other programming languages.

While i’m at it, there are many other essays by RPG worth reading, including his classic Worse is Better series, his (and Guy Steele’s) history of The Evolution of Lisp (PDF), or the little jewel The Why of Y, for those of you really wanting to understand what recursion is about.

Richard was also one of the designers of CLOS, and has a number of interesting papers on the CLOS specification.


Beyond mainstream object-oriented programming


After a few scheming years, i had come to view objects as little more than poor man’s closures. Rolling a simple (or not so simple) object system in Scheme is almost a textbook exercise. Once you’ve got statically scoped, first-class procedures, you don’t need no built-in objects. That said, it’s not that object-oriented programming is useless; it’s just that, at least in my case, i often find myself implementing applications in terms of a collection of procedures acting on the requisite data structures. And, if we restrict ourselves to single-dispatch object-oriented languages, i saw little reason to use any of them instead of my beloved Scheme.

Things started to change recently due to my discovering the pleasures of Smalltalk. First and foremost, it offers a truly empowering integrated environment to live and develop in. Second, if you’re going to use objects, using the simplest, cleanest syntax will not hurt. Add to that some reading on the beautiful design principles underlying Smalltalk, and one begins to wonder if closures aren’t, in fact, poor man’s objects–or at least i do, whenever i fall into an object-oriented mood (i guess i’m not yet ready to reach satori).

But Scheme is not precisely an ugly or badly designed language, so i needed some other reason to switch language gears for my OO programming. I knew there’s more than encapsulation or subtype polymorphism in object-land from my readings on CLOS (the Common Lisp Object System), or on Haskell’s type classes (and its built-in parametric polymorphism), but i was after something retaining Smalltalk’s elegance. And then i remembered that, when i was a regular lurker in the Tunes project’s mailing lists and IRC channel, a couple of smart guys were implementing an OO language whose syntax was smalltalkish. That language (which, if memory serves, started life with the fun name who me?) has evolved during the last few years into a quite usable programming environment named Slate, started by Lee Salzman and currently developed and maintained by Brian Rice.

I’ve been reading about Slate during the last few days, and decided to learn it. What motivated me was discovering how Slate goes beyond mainstream object-oriented programming by incorporating well-known (but hardly used) and really powerful paradigms. In short, Slate improves Smalltalk’s single-dispatch model by introducing and combining two apparently incompatible technologies: multiple dispatch and prototype-based programming. To understand the whys and hows of Slate, there’s hardly a better way than reading Lee Salzman’s Prototypes with Multiple Dispatch. The following discussion is, basically, an elaboration of Lee’s explanation on the limitations of mainstream OO languages, and how to avoid them with the aid of PMD.

(Note: click on the diagrams to enlarge them, or, if you prefer, grab a PDF of the whole set.)

Fishes and sharks

Let’s start by showing why on earth you would need anything beyond Smalltalk’s object system (or any of its modern copycats). Consider a simple oceanographic ecosystem analyser, which deals with (aquatic) Animals, Fishes and Sharks. These are excellent candidates for class definitions, related by inheritance. Moreover, we are after modeling those beasts’ behaviours and, in particular, their reactions when they encounter each other: each time a Shark meets a Fish of another species, the Shark will swallow the other Fish, while when a Shark meets another Shark, they will fight. As a result of such fights, Sharks get unhealthy, which regrettably complicates matters: wounded sharks won’t try to eat other fishes, and will swim away from other sharks instead of fighting them. The image on the left provides a sketchy representation of the code we need to model our zoo. Waters are quickly getting muddled implementation-wise.

On the one hand, subtype polymorphism is not enough, because it dispatches just on the object receiving the encounter message: we need, in addition, to take into account the argument’s concrete type to implement the desired behaviour. This is a well-known issue in single-dispatch languages, whose cure is, of course, multiple dispatch (see below). In particular, we want to avoid the need to modify existing classes whenever our hierarchy is extended.

On the other hand, varying state (exemplified here by the Shark’s isHealthy instance variable) complicates the implementation logic. As we will see, prototype-based languages offer a way to factor out this additional complexity.

Beyond single-dispatch

The need to adjust behaviour on the basis of the type of both a message receiver and its arguments arises frequently in practice. So frequently, in fact, that a standard way of dealing with it has been christened the Visitor design pattern. The technique, also known as double-dispatch, is well known: you can see, for instance, how it’s applied to arithmetic expressions in Smalltalk, or read about a generic implementation of multimethods in Python (which also includes a basically language-independent discussion of the issues at hand). If you happen to be a C++ programmer, you may be tempted to think that global functions and overloading solve the problem in that language. Well, think twice: a proper implementation of multiple dispatch in C++ needs RTTI and templates, as shown in this article.

CLOS and Dylan are two examples of languages solving the issue from the outset by including support for multi-methods. The idea is to separate methods from classes (which only contain data slots). As shown in the pseudo-code of the accompanying figure, methods are defined as independent functions with the same name, but differing in their arguments’ types (in CLOS, a set of such methods is called a generic function). When a generic function is called, the system selects the actual method to be invoked using the types of all the arguments used in the invocation. The encounter generic function in our running example provides a typical case, as shown in the figure on the right. The benefits of having multi-methods at our disposal are apparent: the code is simpler and, notably, adding new behaviours and classes to the system does not require modifying existing code. For instance, we can introduce a Piranha, which eats unhealthy sharks instead of swimming away from them, by defining the requisite class and methods, without any modification whatsoever to the already defined ones.
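
By way of illustration, here’s a little sketch of mine (in Python rather than CLOS, with a simplified left-to-right dispatch rule) showing that a multimethod can be little more than a registry keyed by argument types:

```python
# A minimal multimethod sketch: methods live outside classes in a
# registry keyed by argument types, and dispatch looks at the classes
# of *all* arguments, not just the receiver's.
class Animal: pass
class Fish(Animal): pass
class Shark(Fish): pass
class Piranha(Fish): pass

_methods = {}

def defmethod(*arg_types):
    """Register an implementation of `encounter` for these argument types."""
    def register(fn):
        _methods[arg_types] = fn
        return fn
    return register

def encounter(a, b):
    # Walk each argument's class hierarchy, most specific first, with
    # left-to-right argument priority (as CLOS does by default).
    for ta in type(a).__mro__:
        for tb in type(b).__mro__:
            fn = _methods.get((ta, tb))
            if fn is not None:
                return fn(a, b)
    raise TypeError("no applicable method")

@defmethod(Shark, Fish)
def _(shark, fish): return "swallows"

@defmethod(Shark, Shark)
def _(s1, s2): return "fight"

# Extending the system needs no edits to existing code:
@defmethod(Piranha, Shark)
def _(piranha, shark): return "eats the shark"

print(encounter(Shark(), Piranha()))  # swallows
print(encounter(Shark(), Shark()))    # fight
print(encounter(Piranha(), Shark()))  # eats the shark
```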

On the downside, we have still to deal with the complications associated with internal state. Enter the magic world of prototype-based systems.

The ultimate dynamic

If you like dynamic languages, chances are you’ll find prototype-based systems an almost perfect development environment. Prototype-based languages emerged as an evolution of Smalltalk with the invention of Self by David Ungar and Randall B. Smith during the late eighties. The key idea behind Self is the observation that, most of the time, class definitions needlessly coerce and complicate your object model.

A class definition becomes a contract to be satisfied by any instance, and it is all too easy to miss the future or particular needs of your objects (class-based inheritance is just a partial solution to this problem, as shown, for instance, by the so-called fragile base class problem). But, if you look around you, objects change in internal behaviour and data content continuously, and our attempts at distilling their Platonic nature are often in vain.

In prototype-based programming, instead of providing a plan for constructing objects, you simply clone existing instances and modify their behaviour by directly changing the new instance’s slots (which provide uniform access to methods and state). New clones contain a pointer to their parent, from which they inherit non-modified slots: there is no way to access state other than via messages sent to instances, which simplifies dealing with state.

Class-based languages oblige you to keep two relationships in mind to characterize object instances: the “is-a” relationship of the object with its class, and the “kind-of” relationship of that class with its parent. In Self, inheritance (or behaviour delegation) is the only relationship needed. As you can see, Self is all about making working with objects as simple as possible. No wonder Ungar and Smith’s seminal paper was titled Self: The Power of Simplicity. Needless to say, a must read.

The figure on the left shows how our running example would look in selfish pseudo-code. As promised, state is no longer surfacing in our method implementation’s logic. Unfortunately, we have lost the benefits of multi-methods in the process. But fear not, for, as we will see, you can eat your cake and have it too. Instead of pseudo-code, you can use Self itself, provided you are the happy owner of a Mac or a Sun workstation. Or you can spend 20 fun minutes watching the Self video, which features the graphical environment accompanying the system. Like Smalltalk, Self provides you with a computing environment where objects are created, by cloning, and interact with you. The system is as organic and incremental as one can possibly get.
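
To make the clone-and-delegate mechanics concrete, here’s another sketch of mine in Python (not Self, and much simplified): clones keep a pointer to their parent and inherit any slot they haven’t overridden:

```python
class Proto:
    """A minimal prototype: slots live in a dict, uniformly holding both
    state and behaviour; lookups that miss are delegated to the parent,
    as in Self. Functions in slots are not auto-bound, so the receiver
    is passed explicitly."""
    def __init__(self, parent=None, **slots):
        object.__setattr__(self, "parent", parent)
        object.__setattr__(self, "slots", dict(slots))

    def clone(self, **overrides):
        # Cloning just creates a child pointing back at us.
        return Proto(parent=self, **overrides)

    def __getattr__(self, name):
        obj = self
        while obj is not None:  # walk the delegation chain
            if name in obj.slots:
                return obj.slots[name]
            obj = obj.parent
        raise AttributeError(name)

    def __setattr__(self, name, value):
        self.slots[name] = value  # modifying a clone never touches its parent

shark = Proto(healthy=True, meet=lambda self, other: "swallow")
wounded = shark.clone(healthy=False, meet=lambda self, other: "swim away")

print(shark.meet(shark, None))          # swallow
print(wounded.meet(wounded, None))      # swim away
print(wounded.healthy, shark.healthy)   # False True
```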

Of course, you’re not limited to Self. For instance, Ken Dickey fleshed out Norman Adams’ saying that objects are a poor man’s closures by offering a prototype-based object system in Scheme, and, more recently, Neil Van Dyke has released Protobj. And you have probably already used a very popular language in the family: JavaScript. The list goes on, albeit, unfortunately, many of these languages lack either Self’s nice integrated environment, or a portable, up-to-date implementation. Slate to the rescue.

The best of both worlds

Prototyping and multiple dispatch are, at first sight, at odds. After all, method dispatching based on arguments’ type needs, well, a type for each argument, doesn’t it? As it happens, Lee Salzman and Brian Rice have envisioned a way of combining the power of both paradigms into Slate. In fact, proving how this is possible is the crux of Lee’s article. In addition, Slate aims at providing a complete development environment in the vein of Smalltalk or Self. Too good to be true? In future installments of this blog category, we’ll see how and why it’s true, but, if you cannot wait, just run-not-walk to Slate’s site. You’ll have a great time.


Programmers love writing (and mocking) tests

Aggressive unit testing (also known, by those willing to earn some easy money, by the buzzword test-driven development) is about the only practice from XP that i embraced happily in my days as a dot com employee (it was around 2000, and i was working for one of those companies that got lots of funding from avid but hopelessly candid investors to do… well, to do basically nothing).

Those were my long gone Java days, and i got test infected all the way down. That writing tests is a good practice is hardly news for any decent programmer, but what i especially liked about putting the stress on writing lots of tests was discovering how conducive the practice is to continuous code rewriting (count that as the second, and last, extreme practice i like). I had yet to discover the real pleasures of bottom-up development in REPL-enabled languages, but even in Java my modus operandi consisted basically in starting with concrete code (sometimes even a mere implementation detail) and making the application grow from there. Of course, some brainstorming and back-of-the-envelope diagramming was involved, but the real design, in my experience, only appears after fighting for a while with real code. The first shot is never the right one, nor the second, for that matter. The correct design surfaces gradually, and i know i’ve got it right when unexpected extensions to the initially planned functionality just fit in smoothly, as if they had been foreseen (as an aside, i feel the same about good maths: everything finds its place in a natural way). When you work like that, you’re always rewriting code, and having unit tests at hand provides a reassuring feeling of not throwing the baby out with the bathwater during your rewriting frenzies.

Of course, there’s also a buzzword-compliant name for such rewritings (refactoring), and you can spend a few bucks to read some trivialities about all that. (Mind you, the two books i just despised have been widely acclaimed, even by people whose opinion i respect, so maybe i’m being unfair here—in my defense, i must say i’ve read (and paid for) both of them, so, at least, my opinion has cost me money.)

Anyway, books or not, the idea behind the JUnit movement is quite simple: write tests for every bit of working code you have, or, if you’re to stand by the TDD tenets (which i tend not to do), for every bit of code you plan to write. As is often the case, the Java guys were not inventing something really new: their libraries are a rip-off of the framework proposed by Kent Beck for Smalltalk. Under the SUnit moniker, you’ll find it in every single Smalltalk implementation these days. A key ingredient to these frameworks’ success is, from my point of view, their simplicity: you have a base test class from which to inherit basic functionality, and extend it to provide testing methods. Languages with a minimum of reflection power will discover and invoke those methods for you. Add some form of test runner, and childish talk about an always green bar, and you’ve got it. The screenshot on the left shows the new SUnit Test Runner in one of my Squeak 3.9 images, but you’ll get a better glimpse of what writing unit tests in Squeak feels like by watching this, or this, or even this video from Stéphane Ducasse’s collection.

Of course, you don’t need to use an object-oriented language to have a unit testing framework. Functional languages like Lisp provide even simpler alternatives: you get rid of base classes, exchanging them for a set of testing procedures. The key feature is not a graphical test runner (which, like any graphical tool, gets in the way of unattended execution: think of running your test suites as part of your daily build), but a simple, as in as minimal as possible, library providing the excuse to write your tests. Test frameworks are not rocket science, and you can buy one as cheap as it gets: for instance, in C, i’m fond of MinUnit, a mere three-liner:

/* file: minunit.h */
#define mu_assert(message, test) do { if (!(test)) return message;  \
                                    } while (0)

#define mu_run_test(test) do { char *message = test(); tests_run++; \
                               if (message) return message; } while (0)

extern int tests_run;

(Let me mention in passing, for all of you non-minimalistic C aficionados, the latest and greatest (?) in C unit testing: libtap.) Add to this a couple of Emacs skeletons and an appropriate script and you’re well on your way towards automated unit testing. From here, you can get fancier and add support for test suites, reporting in a variety of formats, and so on; but, in my experience, these facilities are, at best, trivial to implement and, at worst, of dubious utility. It’s the quality and exhaustiveness of the tests you write that matters.

Lisp languages have many frameworks available. The nice guys of the CL Gardeners project have compiled a commented list of unit testing libraries for Common Lisp. In Scheme you get (of course) as many testing frameworks as implementations. Peter Keller has written an R5RS compliant library that you can steal from Chicken. Noel Welsh’s SchemeUnit comes embedded into PLT, and the Emacs templates are already written for you (or, if your mileage varies and you’re fond of DrScheme, you can have a stylized version of the green bar too). Personally, i don’t use PLT, and find Peter’s implementation a bit cumbersome (meaning: too many features that i don’t use and that clutter the interface). Thus, my recommendation goes to Testeez, by Neil van Dyke of quack fame. Testeez is an R5RS (i.e., portable), lightweight framework that is as simple as possible. Actually, it’s simpler than possible, at least in my book. In my opinion, when a test succeeds it should write nothing to the standard (or error) output, just like the good old unix tools do. I only want verbosity when things go awry; otherwise, i have better things to read (this kind of behaviour also makes writing automation and reporting scripts easier). So, as a matter of fact, i use a hacked version of Testeez which has customizable verbosity levels. It’s the same version that we use in Spells, and you can get it here. Also of interest are Per Bothner’s SRFI-64, A Scheme API for test suites, and Sebastian Egner’s SRFI-78, Lightweight testing (both including reference implementations).

Lisp testing frameworks abound for a reason: they’re extremely useful, yet easy to implement. As a consequence, they’re good candidates as non-trivial learning projects. A nice example can be found in Peter Seibel’s Practical Common Lisp (your next book if you’re interested in Common Lisp), which introduces macro programming by implementing a decent testing library. In the Scheme camp, Noel discusses the ups and downs of DSL creation in an article describing, among other things, the SchemeUnit implementation. Worth reading, even for non-beginners.

Once you settle on a test framework and start writing unit tests, it’s only a question of (short) time before you’re faced with an interesting problem, namely, really writing unit tests. That is, you’re interested in testing your functions or classes in isolation, without relying on the correctness of other modules you’ve written. But of course, your code under test will use other modules, and you’ll have to write stubs: fake implementations of those external procedures that return pre-cooked results. In Lisp languages, which allow easy procedure re-definition, it’s usually easy to be done with that. People get fancier, though, especially in object-oriented, dynamic languages, by using mock objects. The subject has spawned its own literature and, although i tend to think they’re unduly complicating a simple problem, reading a bit about mockology may serve you to discover the kind of things that can be done when one has a reflective run-time available. Smalltalk is, again, a case in point, as Sean Mallory shows in his stunningly simple implementation of Mock Objects. Tim Mackinnon gets fancier with his SMock library, and has coauthored a very interesting article entitled Mock Roles, Not Objects, where mock objects are defined and refined:

a technique for identifying types in a system based on the roles that objects play. In [9] we introduced the concept of Mock Objects as a technique to support Test-Driven Development. We stated that it encouraged better structured tests and, more importantly, improved domain code by preserving encapsulation, reducing dependencies and clarifying the interactions between classes. [...] we have refined and adjusted the technique based on our experience since then. In particular, we now understand that the most important benefit of Mock Objects is what we originally called interface discovery [...]

An accompanying flash demo shows SMock in action inside Dolphin Smalltalk. The demo is very well done and i really recommend taking a look at it, not only to learn to use Mock Objects, but also as a new example of the kind of magic tricks allowed by Smalltalk environments. Albeit not as fanciful, Objective-C offers good reflective features, which are nicely exploited in OCMock, a library which, besides taking advantage of Objective-C’s dynamic nature, makes use of the trampoline pattern (close to the heart of every compiler implementer) “so that you can define expectations and stubs using the same syntax that you use to call methods”. Again, a good chance to learn new, powerful dynamic programming techniques.
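
Coming back to plain stubs, the procedure re-definition trick mentioned above is easy to sketch; here’s my own Python rendition (the idea carries over to any language with late-bound globals):

```python
# Code under test depends on an "external" service...
def fetch_price(ticker):
    raise RuntimeError("talks to the network in real life")

def portfolio_value(holdings):
    return sum(qty * fetch_price(t) for t, qty in holdings.items())

# ...so the test temporarily swaps in a stub with pre-cooked results.
def test_portfolio_value():
    global fetch_price
    real = fetch_price
    fetch_price = lambda ticker: {"ACME": 10.0, "EMCA": 2.5}[ticker]
    try:
        assert portfolio_value({"ACME": 3, "EMCA": 4}) == 40.0
    finally:
        fetch_price = real  # always restore the real definition

test_portfolio_value()
print("ok")
```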

As you can see, writing tests can be, a bit unexpectedly, actually fun.


Lisp copycat

Another bit of newslore, this time provoked by Kent Pitman‘s article The Best of Intentions, Equal Rights–and Wrongs–in Lisp, which discusses the fine print of equality and copy operations in Lisp. Pascal Bourguignon has written a beautiful news post at comp.lang.scheme that shows how each of the possible meanings of copy mentioned in Pitman’s article can be implemented in Scheme, complete with nice ASCII-art box diagrams. Recommended reading.


Persistent Joy

In the comments section of The Joy of REPL, a reader poses an interesting question: how do i make my joy persistent? Or, in her words,

Dumb question – you are happily programming in the environment, and the lights go out. Have you lost your state?
How do you save “source” code? I’m interested from the angle of irb, as I like ruby. I still think in the mode of writing the source in an editor, checking it in, etc.
I can’t seem to imagine this environment in terms of day to day work, esp with a development group.

Managing persistence depends largely on your development environment. But of course, the primary method is the traditional one: you write files. You don’t need to literally type your code at the interpreter’s prompt. Any decent editor will let you send to the interpreter expressions written (and, eventually, saved) in any editing buffer. Emacs excels in this aspect, especially if you’re on Lisp and use Slime (or its cousin slime48, which works on scheme48). You can see it in action in Marco Baringer’s excellent tutorial video (bittorrent available here). The important thing to keep in mind is that you need the ability to evaluate individual expressions (as opposed to loading files as a whole), and this is made possible by the joint work of your language’s runtime support and your editor. I’m not a Ruby user, but i bet Emacs or vim, among others, give you similar facilities. That said, i would be surprised if they were as impressive as Slime’s. Because Slime is cheating: it interacts with a programming system (namely, Common Lisp’s) that does its very best to allow an incremental, organic development style. How so?

Well, as soon as you go beyond writing little toy snippets and into serious (as in bigger) programs, you’ll need some kind of module system, in the sense of a way of partitioning your namespace to avoid name collisions. Every language out there provides such a mechanism in one way or another (and Scheme famously provides as many ways as there are implementations; more on this below). Therefore, to keep enjoying our interactive way of life, we need the interpreter and the editor to cooperate in evaluating our code in the correct namespace. Common Lisp’s module system is based on packages. Each symbol known to the system belongs to one of them, and it is customary to begin your files with a form that tells whoever is interested which package the following code belongs to… and the editor/interpreter team is definitely interested: expressions sent from a buffer to the REPL are evaluated in the correct context. Again, i don’t know whether Ruby or Python offer this synergistic collaboration, but i know that you definitely need it to attain the Joy of REPL.
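
That editor/interpreter teamwork can be sketched in a few lines of Python (my example; real Emacs modes are, of course, more elaborate): each expression is evaluated against the namespace of the module it came from:

```python
import types

# Two "files", each with its own namespace, both defining greet.
mod_a = types.ModuleType("a")
mod_b = types.ModuleType("b")
exec("def greet(): return 'hello from a'", mod_a.__dict__)
exec("def greet(): return 'hello from b'", mod_b.__dict__)

def eval_in(module, expression):
    """What an editor must do: evaluate in the buffer's own namespace."""
    return eval(expression, module.__dict__)

print(eval_in(mod_a, "greet()"))  # hello from a
print(eval_in(mod_b, "greet()"))  # hello from b
```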

Common Lisp is not unique in this regard. In the Scheme world, scheme48’s module system was also designed with interactive, incremental development in mind, and taking advantage of it in Emacs required an, in a sense, almost straightforward (but, by all means, worthy) effort (thanks Taylor and Jorgen). As an aside, this is what makes s48 my preferred scheme and keeps me away from otherwise remarkable systems like PLT. (And this is why the current R6RS standard module system proposal is a show-stopper: if you happen to have a friend on the committee, please write to him and include a link to Taylor Campbell’s alternative proposal and its accompanying rationale.)

Thus, when the lights come back, you recover your previous environment by reloading your files. Good module systems provide means to streamline this operation, typically (but not always) by storing the package definitions in separate files. But this is still a nuisance, isn’t it? I must wait for all my files to be reloaded and maybe byte-compiled… Don’t despair, there are better ways. Most CL implementations and several Schemes (MIT/GNU Scheme and, again, scheme48 come to mind) allow you to save your complete state, at any time, in what is called an image file. This image contains a binary snapshot of the interpreter’s state, and you can reload it at any later time. Being a binary representation, it will come to life blazingly fast. Besides Lisp, Smalltalk is the paradigmatic (and possibly the pioneer, but i’m not 100% sure on this) image-based language: for instance, in Squeak, the only way to launch the environment is loading a previously saved image, which contains detailed information of your previous state (including the graphical environment). In this sense (and many others), Smalltalk is a dynamic programmer’s dream come true.

Image files make things even better, but not perfect: you still need to save your state every now and then. In an ideal world, persistence should be automatic, managed behind the scenes by the system, even by the operating system, just like the garbage collector we have come to know and love in our dynamic environments manages memory for us. This nirvana is called Orthogonal Persistence, but unfortunately, we’re not there yet. I first heard of OP from the guys of the Tunes project, where François-René Bân Rideau and friends have envisioned what i view as the ideal computing environment. Unfortunately, to this day it remains in the Platonic realm of ideals (but this doesn’t prevent their having one of the best online knowledge bases on computer science). Another interesting project in this regard, with actually running code that may interest the pythonistas among you, is Unununium, an operating system built around the idea of orthogonal persistence. Finally, in this context it is also worth mentioning again Alan Kay’s brainchild Squeak, which provides an environment that, without being an entire OS, in many ways isolates you into a wonderland of its own.



Honestly, i’m a little bit surprised each time (and that means often) someone complains about Common Lisp not having libraries. In my experience, there’s lots of choice out there, and every time i’ve looked for something, i’ve found it. Sometimes it’s beta, and other times it needs some work, but hey, i like it when someone calls me a hacker.

And even more often i discover cool libraries that i don’t need but that are nice to have around and bookmark for the future. Like lemonodor’s CLAIM 1.2, recently updated and put into shape. CLAIM is a Common Lisp library for AIM (AOL Instant Messenger), and the new version supports AOL’s TOC2 protocol (for a related library supporting Jabber, see cl-xmpp). You can download CLAIM 1.2 from its project page, or you can ASDF-INSTALL it.

You can see it in action, on OS X, below:

? (gossip-bot:start-gossip-bot "myusername" "mypassword")

(Screenshot: the CLAIM gossip bot in action.)


Installing (i)Maxima on OS X

I got interested again in Common Lisp a few months ago via the Maxima Computer Algebra System, an evolution of the DOE Macsyma project. Maxima is simply awesome, and a homage to the power of Lisp. Actually, i was looking for tensor algebra packages. Maxima does that and a lot more via its many interfaces.

Of course, there are several interaction modes to embed Maxima sessions into Emacs. The nicest one is, in my opinion, imaxima, which typesets Maxima’s output using LaTeX. Bill Clementson has published a tutorial on how to get Maxima running on OS X, using SBCL with imaxima and gnuplot. I followed the instructions and finally got everything installed, but only after a few tweaks, which i’m posting below to save you the time of rediscovering them:

  • I’ve got SBCL 0.9.8 installed via DarwinPorts, and downloaded the latest Maxima source tarball (5.9.2). There is a Maxima Darwin port, but it uses CLISP.
  • The familiar configure/make/make install chore works without problems. But if you then try to run maxima, you’ll get an error from SBCL:
    fatal error encountered in SBCL pid 7275:
    more than one core file specified

    This is because /opt/local/bin/sbcl is actually a shell script which runs sbcl.bin with a predefined core, and, in turn, maxima is a shell script which invokes sbcl, so a second core ends up being specified. The fix is easy: create an sbcl.max script containing something like

    #!/bin/sh
    exec /opt/local/bin/sbcl.bin "$@"

    and modify /usr/local/bin/maxima to invoke sbcl.max (search for exec “sbcl”). After that, invoking maxima from the command line should work just fine.

  • Now for the Emacs interface. I use Carbon Emacs, which comes with imaxima installed, so i just needed to load it. There’s an info page that tells you how to do that. Follow the instructions… and you get an error: latex complains it doesn’t know anything about ‘pagecolor’. After a bit of fiddling, i fixed that by setting the appropriate custom variables. Here is my (i)maxima emacs configuration file.
  • Optionally, install the breqn LaTeX package to get multiline equation display in imaxima. I just downloaded the package and put its contents into /opt/local/texmf-local/tex/latex/breqn. Afterwards, running ‘sudo texhash’ completes the installation (you will know it worked if ‘kpsewhich breqn.sty’ locates the package).
  • Maxima uses Gnuplot to render 2D and 3D graphs. No problem: Gnuplot is included in Darwin Ports, but make sure to use the x11 variant (i had it compiled with +no_x11 and used AquaTerm).

That’s it. Surely, the above are trivial details, but still may save you a handful of minutes that you can spend far better. For instance, playing with your fresh Maxima installation. Enjoy.


