Geiser

Update: For a few months now, Geiser has had its own home on the interwebs.

I hope you’ll pardon a shameless plug of one of my latest hacks, Geiser, a new Emacs-Scheme interaction mode.

After having lots of fun implementing Fuel, i was left with a lot of Elisp code that, i realized, could be easily reused for languages other than Factor. I also decided that it was high time to stop whining about Scheme environments not being dynamic enough and do something about it. As they say, talk is cheap.

Thusly, Geiser was born, and today it came of 0.0.2 age, as per the git tag in its repository.

If you know Slime or Fuel, you know what Geiser aims at: a pleasant, live interaction with Scheme without leaving Emacs. This first release is by no means there yet, but you’ll already find some joy using it: module-aware and incremental evaluation, jumping to definitions, dynamic symbol completion and automatic signature display in the echo area are the highlights.

Currently, Geiser supports two Scheme implementations: Guile and PLT. Yeah, i like both (and several others). It’s been really fun discovering how to tweak them to obtain the metadata i wanted, and their developers and users have been helpful, kind and patient to no end. A big thanks to them (you know who you are), and my promise that i’ll keep nagging.

Both Guile and PLT have given me many pleasant surprises. Guile is the most common-lispy Scheme around, and the recent hard work and improvements by the likes of Andy Wingo are making much of the criticism it memetically receives just moot. And PLT is by no means the rigid system i thought it was, while retaining all the great features i knew it had. Try either of them, with or without Geiser: they’re real fun.

Back to Geiser: this being an alpha release, there are no screencasts or real documentation… the code just escaped leaving a blood trail, you know. Maybe one day it’ll have a webpage, a mailing list and even users. In the meantime, if you’re brave enough, the README will hopefully do; and, of course, the code:

  git clone git://gitorious.org/geiser/mainline.git

(If you’re not brave enough, but curious, the code is browsable here.)

Needless to say, all kinds of comments, criticisms and laundry lists are welcome and, actually, encouraged.

Happy scheming!

Easter egg

After some years playing with other implementations, i’ve been using the PLT Scheme suite assiduously during the last weeks (more on what for in future posts), and it’s becoming, slowly but surely, my default implementation. Eli Barzilay’s extensions to make MzScheme play nice with Emacs were the trigger, but i must say that DrScheme’s macro stepper, debugger and syntax checker (all of which i had not seriously used before) are fine pieces of hackery. If you’re into Scheme, you really should take a look at them. Another strong point of PLT Scheme is its excellent documentation, which comes with a browser called HelpDesk. I use it all the time, and this morning it saluted me with an unexpected background:

[Screenshot: HelpDesk greeting me with a birthday background]

Let me join the toast, and wish Shriram a happy birthday too! (and, while i’m at it, thanks also for your wonderful PLAI). It must be nice being a PLTer, mustn’t it?

A companion emacs blog

In order to avoid boring those of you using other editors, IDEs and/or operating systems, i’ve put together a companion emacs blog. Those of you so inclined can find there programming aids, elisp packages i like or have written, some tips and tricks, and other silly stuff.

Know your tools

Just to illustrate what i meant the other day when i exhorted you to pick a powerful enough editor and, most importantly, learn how to use it, i think this Emacs Screencast is quite adequate. It’s made by a Ruby coder, but note that Emacs plays these kinds of tricks (and many more) for virtually any language out there.

The point is, if your environment is not powerful enough to do this kind of thing, you should be looking for something better. The keyword here is extensibility. If your environment is not able to automate a task, you should be able to extend it; and the best way i can imagine is having a full-featured Lisp at my disposal when i need to write an extension (here is the list of Emacs extensions used in the screencast, by the way). Phil Windley has also made a similar point recently, in his When you pick your tools, pick those that can build tools mini-article.

And of course, when your editor and your dynamic language conspire to provide an integrated environment, you’ve reached nirvana: that’s the case with Common Lisp and Slime, as shown in other widely known videos.

As i said, the message is not that you should use Emacs, but that you should use the right tool. For instance, PLT Scheme users have other alternatives, like DrScheme, which can also be easily extended, as beautifully demonstrated by DivaScheme, an alternative set of key-bindings for DrScheme that you can see in action below:

[embedded video: DivaScheme in action]

You see, as programmers, we’re lucky: we can build our tools and make them work exactly the way we want. If your environment precludes your doing so in any way, something’s wrong with it, no matter how much eye-candy it uses to hide its deficiencies. (Yes, i have some examples in mind ;-).)

Update: Phil has just posted an interesting follow-up to his tools article. His closing words nicely summarise what this post of mine is all about:

Note that I’m not writing all this to convert the dedicated vi users or anyone else. If you’ve got something that works for you, then good enough. But if you’re searching for an editor that’s programmable with plenty of headroom, then give emacs a try. There’s a steep learning curve, but the view is great from the top (or even half way up)!

Exactly!

Programmers love writing (and mocking) tests

Aggressive unit testing (also known, by those willing to earn some easy money, by the buzzword test-driven development) is about the only practice from XP that i embraced happily in my days as a dot-com employee (it was around 2000, and i was working for one of those companies that got lots of funding from avid but hopelessly naive investors to do… well, to do basically nothing).

Those were my long-gone Java days, and i got test-infected all the way down. That writing tests is a good practice is hardly news for any decent programmer, but what i especially liked about putting the stress on writing lots of tests was discovering how conducive the practice is to continuous code rewriting (count that as the second, and last, extreme practice i like). I had yet to discover the real pleasures of bottom-up development in REPL-enabled languages, but even in Java my modus operandi consisted basically in starting with concrete code (sometimes even a mere implementation detail) and making the application grow from there. Of course, some brainstorming and back-of-the-envelope diagramming was involved, but the real design, in my experience, only appears after fighting for a while with real code. The first shot is never the right one, nor the second, for that matter. The correct design surfaces gradually, and i know i’ve got it right when unexpected extensions to the initially planned functionality just fit in smoothly, as if they had been foreseen (as an aside, i feel the same about good maths: everything finds its place in a natural way). When you work like that, you’re always rewriting code, and having unit tests at hand provides a reassuring feeling of not throwing the baby out with the bathwater during your rewriting frenzies.

Of course, there’s also a buzzword-compliant name for such rewritings (refactoring), and you can spend a few bucks to read some trivialities about all that. (Mind you, the two books i just disparaged have been widely acclaimed, even by people whose opinion i respect, so maybe i’m being unfair here; in my defense, i must say i’ve read (and paid for) both of them, so, at least, my opinion has cost me money.)

[Screenshot: SUnit in Squeak]

Anyway, books or not, the idea behind the JUnit movement is quite simple: write tests for every bit of working code you have or, if you’re to stand by the TDD tenets (which i tend not to do), for every bit of code you plan to write. As is often the case, the Java guys were not inventing something really new: their libraries are a rip-off of the framework proposed by Kent Beck for Smalltalk. Under the SUnit moniker, you’ll find it in every single Smalltalk implementation these days. A key ingredient in these frameworks’ success is, from my point of view, their simplicity: you have a base test class from which to inherit basic functionality, and extend it to provide testing methods. Languages with a minimum of reflection power will discover and invoke those methods for you. Add some form of test runner, and childish talk about an always-green bar, and you’ve got it. The screenshot above shows the new SUnit Test Runner in one of my Squeak 3.9 images, but you’ll get a better glimpse of what writing unit tests in Squeak feels like by seeing this, or this, or even this video from Stéphane Ducasse’s collection.

Of course, you don’t need to use an object-oriented language to have a unit testing framework. Functional languages like Lisp provide even simpler alternatives: you get rid of base classes, exchanging them for a set of testing procedures. The key feature is not a graphical test runner (which, like any graphical tool, gets in the way of unattended execution: think of running your test suites as part of your daily build), but a simple, as in as minimal as possible, library providing the excuse to write your tests. Test frameworks are not rocket science, and you can buy one as cheap as it gets: for instance, in C, i’m fond of MinUnit, a mere three-liner:

/* file: minunit.h */
/* Evaluate TEST; on failure, bail out of the enclosing test
   function, returning MESSAGE. */
#define mu_assert(message, test) do { if (!(test)) return message;  \
                                    } while (0)

/* Invoke a test function and bump the global test counter,
   propagating the first failure message, if any. */
#define mu_run_test(test) do { char *message = test(); tests_run++; \
                               if (message) return message; } while (0)

extern int tests_run;
(Let me mention in passing, for all of you non-minimalistic C aficionados, the latest and greatest (?) in C unit testing: libtap.) Add to this a couple of Emacs skeletons and an appropriate script and you’re well on your way towards automated unit testing. From here, you can get fancier and add support for test suites, reporting in a variety of formats, and so on; but, in my experience, these facilities are, at best, trivial to implement and, at worst, of dubious utility. It’s the quality and exhaustiveness of the tests you write that matters.
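
As for the Lisp side of that minimalist claim, a no-frills harness really boils down to a handful of procedures and a counter or two. Here’s a sketch in portable Scheme (every name in it is made up for illustration), which stays silent on success, a preference i’ll come back to below:

    ;; A throwaway, procedures-only test harness (all names invented).
    (define tests-run 0)
    (define tests-failed 0)

    ;; Run THUNK and compare its result with EXPECTED; complain only
    ;; on failure.
    (define (check name expected thunk)
      (set! tests-run (+ tests-run 1))
      (let ((got (thunk)))
        (if (not (equal? got expected))
            (begin
              (set! tests-failed (+ tests-failed 1))
              (display "FAIL: ") (display name)
              (display " => ") (write got) (newline)))))

    (check "addition" 4 (lambda () (+ 2 2)))
    (check "append" '(1 2 3) (lambda () (append '(1 2) '(3))))

    (display tests-run) (display " tests, ")
    (display tests-failed) (display " failures") (newline)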

Lisp languages have many frameworks available. The nice guys of the CL Gardeners project have compiled a commented list of unit testing libraries for Common Lisp. In Scheme you get (of course) as many testing frameworks as implementations. Peter Keller has written an R5RS-compliant library that you can steal from Chicken. Noel Welsh’s SchemeUnit comes embedded into PLT, and the Emacs templates are already written for you (or, if your mileage varies and you’re fond of DrScheme, you can have a stylized version of the green bar too). Personally, i don’t use PLT, and find Peter’s implementation a bit cumbersome (meaning: too many features that i don’t use and that clutter the interface). Thus, my recommendation goes to Testeez, by Neil van Dyke of quack fame. Testeez is an R5RS (i.e., portable), lightweight framework that is as simple as possible. Actually, it’s simpler than possible, at least in my book. In my opinion, when a test succeeds it should write nothing to the standard (or error) output, just like the good old Unix tools do. I only want verbosity when things go awry; otherwise, i have better things to read (this kind of behaviour also makes writing automation and reporting scripts easier). So, as a matter of fact, i use a hacked version of Testeez which has customizable verbosity levels. It’s the same version that we use in Spells, and you can get it here. Also of interest are Per Bothner’s SRFI-64 (A Scheme API for test suites) and Sebastian Egner’s SRFI-78 (Lightweight testing), both including reference implementations.
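
To give you the flavour, a Testeez suite looks more or less like this (a sketch written from memory, so take the exact form names with a grain of salt):

    ;; A small Testeez suite; syntax quoted from memory.
    (testeez "string utilities"
      (test-define "a helper" shout
        (lambda (s) (string-append s "!")))
      (test/equal "shout appends a bang" (shout "hi") "hi!")
      (test/eqv "and yields a 3-char string" (string-length (shout "hi")) 3))

Each test form evaluates its expression and checks the result against the expected value, reporting as it goes (which is precisely the verbosity my hacked version tones down).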

Lisp testing frameworks abound for a reason: they’re extremely useful, yet easy to implement. As a consequence, they’re good candidates for non-trivial learning projects. A nice example can be found in Peter Seibel’s Practical Common Lisp (your next book if you’re interested in Common Lisp), which introduces macro programming by implementing a decent testing library. In the Scheme camp, Noel discusses the ups and downs of DSL creation in an article describing, among other things, the SchemeUnit implementation. Worth reading, even for non-beginners.
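
The macro angle pays off quickly, by the way: a few lines of syntax-rules buy you a check form that remembers the source expression it was given, something a plain procedure cannot do. A minimal sketch (mine, not Seibel’s or Noel’s):

    ;; Report the failing *expression*, not just its value.
    (define-syntax check
      (syntax-rules ()
        ((_ expr expected)
         (let ((got expr))
           (if (not (equal? got expected))
               (begin
                 (display "FAIL: ") (write 'expr)
                 (display " => ") (write got) (newline)))))))

    (check (+ 1 2) 3)  ; silent
    (check (* 2 2) 5)  ; prints: FAIL: (* 2 2) => 4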

Once you settle on a test framework and start writing unit tests, it’s only a question of (short) time before you’re faced with an interesting problem, namely, really writing unit tests. That is, you’re interested in testing your functions or classes in isolation, without relying on the correctness of other modules you’ve written. But of course, your code under test will use other modules, and you’ll have to write stubs: fake implementations of those external procedures that return pre-cooked results. In Lisp languages, which allow easy procedure redefinition, it’s usually easy to get that done (there’s a sketch below). People get fancier, though, especially in object-oriented, dynamic languages, by using mock objects. The subject has spawned its own literature and, although i tend to think they’re unduly complicating a simple problem, reading a bit about mockology may help you discover the kind of things that can be done when one has a reflective run-time available. Smalltalk is, again, a case in point, as Sean Mallory shows in his stunningly simple implementation of Mock Objects. Tim Mackinnon gets fancier with his SMock library, and has coauthored a very interesting article entitled Mock Roles, Not Objects, where mock objects are defined and refined:

a technique for identifying types in a system based on the roles that objects play. In [9] we introduced the concept of Mock Objects as a technique to support Test-Driven Development. We stated that it encouraged better structured tests and, more importantly, improved domain code by preserving encapsulation, reducing dependencies and clarifying the interactions between classes. [...] we have refined and adjusted the technique based on our experience since then. In particular, we now understand that the most important benefit of Mock Objects is what we originally called interface discovery [...]

An accompanying flash demo shows SMock in action inside Dolphin Smalltalk. The demo is very well done and i really recommend taking a look at it, not only to learn to use Mock Objects, but also as a new example of the kind of magic tricks allowed by Smalltalk environments. Albeit not as fancy, Objective-C offers good reflective features, which are nicely exploited in OCMock, a library which, besides taking advantage of Objective-C’s dynamic nature, makes use of the trampoline pattern (close to the heart of every compiler implementer) “so that you can define expectations and stubs using the same syntax that you use to call methods”. Again, a good chance to learn new, powerful dynamic programming techniques.
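
As for the humbler stubs mentioned above, in Scheme one can get away with a set! plus a dynamic-wind to put the real procedure back afterwards. A sketch, with every name invented for the occasion:

    ;; fetch-temperature stands for an external module we don't want
    ;; to exercise in a unit test.
    (define (fetch-temperature sensor-id)
      (error "talks to real hardware; unavailable under test"))

    ;; Code under test.
    (define (overheated? sensor-id)
      (> (fetch-temperature sensor-id) 90.0))

    ;; Swap in STUB, run THUNK, and restore the real procedure even
    ;; if the test exits abnormally.
    (define (with-stubbed-temperature stub thunk)
      (let ((real fetch-temperature))
        (dynamic-wind
          (lambda () (set! fetch-temperature stub))
          thunk
          (lambda () (set! fetch-temperature real)))))

    (with-stubbed-temperature
     (lambda (id) 99.9)                     ; pre-cooked result
     (lambda ()
       (display (if (overheated? 'sensor-1) "ok" "FAIL"))
       (newline)))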

As you can see, writing tests can be, a bit unexpectedly, actually fun.


Developing LISA Pathfinder

Since last June, i’ve been working on a project called LISA Pathfinder, a forerunner of the future space-based LISA Gravitational Wave Detector. Pathfinder is a joint NASA/ESA effort, and Spain is in charge of developing both the hardware and software for the so-called Data Management Unit flying on the mission’s only satellite.

I’m mostly on the software part. The team is small and we have no architecture astronauts around: everyone is trying their very best to make this fly, instead of playing the corporate naming game. And i even have a good old friend on the team.

As for the more techie stuff, the good news is that this is a pretty challenging project: we’re programming an embedded system that will be running on a 14MHz ERC32 chip, and one of its components must fit in a mere 32K of PROM; a second component will have at its disposal the huge amount of 1MB of RAM, enough to run a real-time kernel at leisure. The not-so-good news is that this is the realm of C, so, in principle, one does not expect dynamic languages all over the place. And yet, we have managed to sneak some cool stuff into our tool chain.

C++ was discarded very early, and with it all the associated UML, RUP, XML and object-oriented-as-the-cure-of-all fanfare. Been there, done that. All this industry-standard nonsense serves mostly, in my experience, to convey a warm fuzzy feeling to incompetent managers who hear about technology in cool IT sites for the Enterprise Professionals and the Architect in you. Fortunately, my immediate bosses are not that clueless, and we opted for good old C and a Yourdon-like design methodology (PDF) invented by Ward and Mellor in the eighties to model real-time systems. Funny how some people look at you over their shoulders, almost with contempt, for using such obviously outdated stuff. Architecture Astronauts. Again.

But don’t get me started. As i was saying, C makes for a good high-level assembly when used properly, and we’re trying hard to use it properly. Since we’re very close to the metal, we also have, every now and then, very interesting excursions into RISC assembly. I cannot possibly overstate how enriching working this close to the hardware is for any programmer. Don’t miss the chance when it appears. An assembly language, especially one of the RISC family, belongs in your toolbox, even if you’re programming in Lisp.

Emacs is all around. We have used its almost boundless plasticity to automate every imaginable development task, tailored to the tiniest detail. Writing a handful of Elisp functions is usually all that’s needed to adapt the environment to your exact necessities. It’s like having an IDE written for you and your project’s needs. You adapt your environment to your problem, not the other way around, gluing together all the tools you use (version control, document generators, linting tools, you name it) in the smoothest way. No IDE du jour, however flashy or dead-easy to use, comes close.

As for version control, we wavered between darcs and bazaar for a while, and finally chose the latter. I’m not sure it was the right decision. Since then, i’ve grown fonder of darcs’ elegance, its simplicity without sacrificing power. Moreover, it has some features that i miss in bazaar, most notably the ability to record patches on a per-hunk basis. In its day, we opted for bazaar because of its capacity for versioning symbolic links and, mostly, its great Emacs integration via xtla. CVS and svn were discarded from the start because we think that distribution and patch-oriented commits are indispensable in any decent SCM.

I would have loved to use Conjure as our build system but, unfortunately, we are by no means there yet. So we opted for SCons, which makes for a pretty good Make replacement in terms of versatility and abstraction power (which boils down, at the end of the day, to reusability). Besides, albeit not as beautiful as Scheme, Python is a decent language, and we already have some know-how in our team (the rest of us now have the excuse for learning it; let me say in passing that a continuous itch for learning is top of my list of the Top Ten Qualities of Good Programmers). Python is also our language for the infrastructure chores that Emacs does not handle, namely, all the batch tasks like automatic test suite running and reporting, or our nightly build.

We also use Python to program the hardware simulators needed for the integration and system tests. As a matter of fact, our test cases are Python scripts. The degree of flexibility we gain by writing them in a dynamic language is so huge and evident that i cannot help getting upset every time an astronaut passing by asks me, with a skeptical look and suspicion-laden undertones, why we aren’t using C or C++.

Last but not least, all our documents are written in LaTeX. Those of you working in corporate environments involving lots of companies will surely understand how happy i feel about this. I spent a weekend writing a LaTeX document class that mimics the Word style everyone else in the project is using. Now our documents not only look better: we have them in version control, like the rest of the code, and a growing library of scripts takes care of generating cross-indexes, synchronising code and documentation, checking for untraced requirements, generating the proverbial traceability matrices, maintaining centralised lists of acronyms and bibliography, and so on and so forth. And of course, i don’t have to leave Emacs to write my docs.

I can think of better projects, and better ways of getting things done, but, all in all, this one is not that bad.

Thanks for reading.


Installing (i)Maxima on OS X

I got interested again in Common Lisp a few months ago via the Maxima Computer Algebra System, an evolution of the DOE Macsyma project. Maxima is simply awesome, and a homage to the power of Lisp. Actually, i was looking for tensor algebra packages. Maxima does that and a lot more via its many interfaces.

Of course, there are several interaction modes to embed Maxima sessions into Emacs. The nicest one is, in my opinion, imaxima, which typesets Maxima’s output using LaTeX. Bill Clementson has published a tutorial on how to get Maxima running on OS X, using SBCL with imaxima and gnuplot. I followed the instructions and finally installed the whole pack, but only after a few tweaks that i’m posting below to save you the time of rediscovering them:

  • I’ve got SBCL 0.9.8 installed via Darwin Ports, and downloaded the latest Maxima source tarball (5.9.2). There is a Maxima Darwin Port, but it uses CLISP.
  • The familiar configure/make/make install chore works without problems. But if you then try to run maxima, you’ll get an error from SBCL:
    fatal error encountered in SBCL pid 7275:
    more than one core file specified
    

    This is because /opt/local/bin/sbcl is actually a shell script which runs sbcl.bin with a predefined core. In turn, maxima is a shell script which invokes sbcl. So the fix is easy: create an sbcl.max script containing something like

    #!/bin/sh
    /opt/local/bin/sbcl.bin "$@"
    

    and modify /usr/local/bin/maxima to invoke sbcl.max (search for exec “sbcl”). After that, invoking maxima from the command line should work just fine.

  • Now for the Emacs interface. I use Carbon Emacs, which comes with imaxima installed, so i just needed to load it. There’s an info page that tells you how to do that. Follow the instructions… and you get an error: latex complains it doesn’t know anything about ‘pagecolor’. After a bit of fiddling, i fixed that by setting the appropriate custom variables. Here is my (i)maxima Emacs configuration file.
  • Optionally, install the breqn LaTeX package to get multiline equation display in imaxima. I just downloaded the package and put its contents into /opt/local/texmf-local/tex/latex/breqn. Afterwards, running ‘sudo texhash’ completes the installation (you will know it worked if ‘kpsewhich breqn.sty’ locates the package).
  • Maxima uses Gnuplot to render 2D and 3D graphs. No problem: Gnuplot is included in Darwin Ports, but make sure to use the x11 variant (i had it compiled with +no_x11 and used AquaTerm).

That’s it. Surely, the above are trivial details, but still may save you a handful of minutes that you can spend far better. For instance, playing with your fresh Maxima installation. Enjoy.


Emacs on OS X

For a few weeks now, OS X has again been my primary development environment. That means that i’ve been preparing a proper Emacs setup, including quack, slime, slime48, paredit and some C stuff. As is customary, i’ve put my configuration (split into small files) under configuration control: you can browse the darcs repository thanks to the excellent darcsweb.

The main file (i.e., the one i link my .emacs to) is emacs.el, and all the other little ones named jao-something contain (disposable) configuration. This configuration works on my Tiger 10.4.3 with the Carbon Emacs package.

Happy hacking!

