As simple as possible…

… but not simpler. Einstein’s (attributed) quotation has become an aphorism, taken for granted by every mathematician or physicist i’ve ever met (to mention two kinds of people i’ve been frequently involved with). One would expect the same attitude from a community that invented the term ‘no silver bullet’, and yet, since i got into computer science, first for fun and later on for a living, i’ve found lots of people with, er, a different viewpoint. Take for instance this excerpt from Lisp is sin, a widely cited and commented-on article by Sriram Krishnan:

In Visual Studio, we look at 3 distinct categories of programmers. We call them Mort, Elvis and Einstein – the 3 personas we take into consideration whenever we create any developer technology. What is difficult for some of us geeks to comprehend sometimes is – all 3 of them are equally important. When you look at technology like Windows Workflow Foundation, one of the driving forces was to let non-geeks build software. Not everyone needs to be a Raymond Chen or a Dave Cutler. Not everyone needs to understand the difference between the various GC algorithms. However, everyone needs the ability to be productive. And everyone needs to be able to get software working without needing a CS degree.

We cannot afford to restrict software development only to those who know Windows kernel internals or those who can understand what a continuation is. It’s not that other people are not smart – they just have better things to do. That’s the key piece of understanding I find missing sometimes.

Nonsense, if you ask me. And yet, i’ve been hearing this same argument, in different guises, time and again since i got into computer science. Let’s apply the same line of reasoning to other disciplines, and see how well it fares:

Hey Albert, your General Relativity is awesome but, you know, with all that jazz about differential geometry and curved spacetimes, it’s too hard; we’re not as smart as you, pal, so we’d better use Newtonian or Aristotelian mechanics to calculate those GPS satellite orbits and get going with other important things we need to do. Hope you understand, Albert.

Well Santiago, your ideas about neurons and surgery sound pretty deep and mystifying, but please, think of the Galens among us: we don’t have the time to investigate every new fad, and, anyway, we wouldn’t understand it if we did. Know what? We’ll keep using our good old cures and stay away from untrodden avenues. Our healing parchments are a bit of a hack, but they get the job done… most of the time, that is.

Does it make any sense? Now, maybe you think that i am exaggerating, and that the comparisons above stretch the point a bit too far. If so, take a second to look back at the people who made your nice computing environment possible. Take a look at Charles Babbage’s visions; read about Alan Turing and Alonzo Church or John von Neumann; admire the elegance of McCarthy’s original LISP (1960); prepare to be surprised by the things the people in Doug Engelbart’s Augmentation Research Center were doing during the sixties; try to find a modern drawing program that matches Sketchpad‘s algorithms (or see it in action in this presentation by Alan Kay); follow the fascinating development of the overlapping windows interface, hand in hand with Smalltalk’s history back at Xerox PARC, and do it from the horse’s mouth; feel the thrill of the people who saw past Xerox’s bigwigs’ shortsightedness and went on to make a dent in the universe: it was 1984, the same year the Lisp machine wars culminated in the creation of the GNU project, which was all about ideals, about empowering people, about freedom. When you’re done, tell me whether i’m going overboard in drawing parallels between computer science and physics or medicine!

All those people had a vision, a dream, and pursued it with an amazing display of hard work, stubbornness and intelligence. They took no prisoners, and by the late eighties had pushed that new thing called, for want of a better name, Computer Science to its modern standards.

Then winter came. Not just the AI winter. Compare the swift pace of CS developments during the 1960-80 period with the subsequent advancements in the field. We’re using the same metaphors, the same kind of systems that we inherited from those guys and gals. Why, we even envy the power of Lisp Machines these days. It’s been a long, cold winter for CS. And the main reason was the appearance of the mentality that i’m criticising in this post, what Alan Kay aptly calls, in a recent interview, a pop culture of computers:

Perhaps it was commercialization in the 1980s that killed off the next expected new thing [...] But a variety of different things conspired together, and that next generation actually didn’t show up. One could actually argue—as I sometimes do—that the success of commercial personal computing and operating systems has actually led to a considerable retrogression in many, many respects.
You could think of it as putting a low-pass filter on some of the good ideas from the ’60s and ’70s, as computing spread out much, much faster than educating unsophisticated people can happen. In the last 25 years or so, we actually got something like a pop culture, similar to what happened when television came on the scene and some of its inventors thought it would be a way of getting Shakespeare to the masses. But they forgot that you have to be more sophisticated and have more perspective to understand Shakespeare. What television was able to do was to capture people as they were.
So I think the lack of a real computer science today, and the lack of real software engineering today, is partly due to this pop culture.

Dead on, i say. People advocating making programming simpler than possible are the hallmark of this pop culture. And when corporate and economic interests enter the picture, things get even worse. The Lisp is sin essay goes on to say:

I frequently see on Slashdot “Windows is designed for stupid users”. That is quite insulting to the millions of moms and dads, teachers and lawyers and people from other walks of life who use Windows or even the Mac. If we mandated that every new user understand Windows’ command line syntax or Emacs, we would have failed as an industry – we would have locked out the rest of the world.

In my opinion, this totally misses the point. There’s nothing wrong with making computers simpler for users. On the contrary, that’s probably what this endeavour is all about. Alan Kay saw it; Apple took heed with its computer for the rest of us mantra. But it does not follow that there must be a CS for the rest of us. Making all this amazing technology possible takes effort, and needs a high level of sophistication. Alan didn’t try to create systems usable by children by inventing PHP. He created Smalltalk striving to improve on Lisp, he studied Piaget and Papert, he has degrees in maths and biology. And he needed all that, and then some.

The (trivial) point i’m trying to make is that not everybody has what it takes to be a programmer, just as not everybody can be a singer or a painter (as an aside, i tend to agree with the opinions that link programming and art). As a matter of fact, good programmers are rare and need quite a peculiar combination of skills and talents. Donald Knuth has put it far better than i could in the essay Theory and Practice, II (from his Selected Papers on Computer Science):

The most important lesson [after developing TeX], for me, was that software is hard; and it takes a long time. From now on I shall have significantly greater respect for every successful software tool that I encounter.[...]
Software creation not only takes time, it’s also much more difficult than I thought it would be. Why is this so? I think the main reason is that a longer attention span is needed when working on a large computer program than when doing other intellectual tasks. A great deal of technical information must be kept in one’s head, all at once, in high-speed random-access memory somewhere in the brain.

We don’t solve the painter’s problem by complaining that perspective is hard to grasp and that people had better use flat icons. In the same way, we shouldn’t be calling for a trivialisation of CS, in academia or in the industry. The we would have failed as an industry bit in the Sriram quote above is really sad: we’re sacrificing an admirable legacy in the name of industry and corporate profit. The most remarkable feat of our current industry leaders is to have convinced the rest of the world that having software systems that eat incredible amounts of resources and explode without reason every now and then is part of an acceptable, even top-notch, technology. Fortunately, other disciplines show far more respect for the people who, ultimately, pay their wages.

If you’ve got this far, you already have one of the qualities needed to become a programmer: stamina. You’ll need more. Be prepared to study hard, to learn maths, to live in abstract worlds. If you feel that you have “more important things to do”, well, that’s all very well, but don’t ask the rest of us to dumb down the subject so that everybody can be a programmer. Lisp is not a sin. The sin would be to betray the dreams, ideals and hard work of the people that have taken us this far. We owe that to them, and to ourselves.

To end this never-ending diatribe, let me add a couple of things: first, i should apologize for taking Sriram as the scapegoat for a long-honed rage: his essay contains many good points worth reading and mulling over; second, i hope you’re not thinking this is just an arrogant rant by an old fart: i’m not that old.

Pointers from Jocelyn Paine

I’ve found a message from Jocelyn Paine, the author of the AI Who’s Who that i recommended a few days ago, waiting in my moderation queue. I think it deserves a post of its own, instead of a dark corner in the comments list. Here it is:

Since you liked my ‘rant’, may I point you at John Baez’s essay about overpriced science journals and how to fight them? I discovered it this morning. Baez is a superb mathematical physicist, and his site has lots of excellent exposition on physics, maths, and other topics – I took from it my September quote about the brain scan that suggests it’s not enjoyment that motivates fashion victims, but fear. I’d recommend his site to anyone who wants to know why they should learn category theory; or, as now, to anyone worried about keeping academic papers freely available.

I agree with Jocelyn’s opinions on overpriced journals, John Baez, and category theory, among many other issues. And, of course, the pointers above made for quite an interesting evening of reading. Enjoy!

Thank you, Jocelyn.

Recovering Lost Code Screencast (and Squeak porn)

As promised, James has just posted his Recovering Lost Code Screencast, which clearly demonstrates how powerful Smalltalk development environments are, and will give you a chance to see what VisualWorks looks like.

I must confess Smalltalk is deeply impressing me: i want one of those environments for Scheme! In the meantime, i content myself with playing with Squeak, which, by the way, holds its own against VW. I used to complain about its childish looks. Besides being silly, the complaint is baseless: see how my Squeak environment looks after installing the Skylark Theme.

Conjure becomes verbose

Conjure48’s development is evolving at a slow but steady pace. We are already able, for instance, to compile C/C++ code, including dynamic libraries, on both GNU/Linux and Mac OS X. And a build script for s42 (getting rid of Scons in the process) is on the way.

Our latest addition is rotty’s logging package, which is not Conjure-specific and is very interesting in and of itself. Actually, this new logging facility is part of Spells, and is thus portable to several Scheme implementations. Please, follow the link above for rotty’s detailed report.

As an aside, and just in case you’re wondering how on earth Spells, s42 and Conjure relate to each other, here’s a quick summary of the stuff we’re trying to release:

  • Spells is a portable set of Scheme libraries, compatible with several Scheme implementations: Scheme48, PLT Scheme, Guile and Gauche are our current targets. Spells includes lots of utilities, from regular expressions to an implementation of T’s object system, a document generation system and a unit testing framework. Of course, we’re not writing all that stuff from scratch, but frequently adapting existing libraries to Spells’ portability framework. You can get a feeling for Spells’ breadth in this draft API documentation.
  • s42 is a collection of scheme48 libraries and extensions. To boot, it includes all Spells modules, and then much more (for instance, spenet, an implementation of rotty’s network API proposal).
  • s42 serves as the basis to s48-worlds, a framework aimed at supporting distributed s48 package repositories (a little bit like PLT’s PLaneT).
  • Conjure uses Spells for portability, and is the build system for s42. In other Schemes, it will run as a standalone make replacement, without any dependency on s42.

As you can imagine, there’s lots of room for new developers around here, whatever your Scheme of choice is. If you want to join the fun, drop any of us a mail, or look for us at #conjure or #scheme on Freenode.

Programming challenges

If you happen to like maths, and assuming you enjoy programming since you’re wandering through these whereabouts, here’s a very interesting challenge for you: Project Euler, hosted by mathschallenge.net, a site dedicated “to the puzzling world of mathematics”.

The idea is pretty engaging: there’s a list of problems to be solved by a computer program in less than a minute. Each problem has an initial score, which decreases as it is solved by participants. For instance, problem number 1, which asks you to add all the natural numbers below 1000 that are multiples of 3 or 5, has been solved by 899 people and scores 2 points, while to get the 20 points of problem 106 you’ll have to write a program that finds the minimum number of comparisons needed to identify special sum sets, a feat only accomplished by 48 participants. As of this writing, there are no unsolved problems, so, you see, the contest is challenging but doable.
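To give you the flavour, here’s a minimal sketch for problem 1 in Scheme (my own quick take, not an official solution; any R5RS system should run it):

    ;; Sum all naturals below `limit' that are multiples of 3 or 5.
    (define (euler-1 limit)
      (let loop ((n 0) (sum 0))
        (cond ((>= n limit) sum)
              ((or (zero? (modulo n 3)) (zero? (modulo n 5)))
               (loop (+ n 1) (+ sum n)))
              (else (loop (+ n 1) sum)))))

    (euler-1 1000) ; => 233168

Getting it to run in under a minute is, for this one, not exactly the hard part; later problems are another story.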

Other cool services for participants include a discussion forum for each problem, to which you gain access once you solve it, and a table listing the programming languages used by successful eulerians. I’m sorry to report that C/C++ is, from what i’ve seen, always the one with the most solvers, with Delphi usually in second place and Java, Python and Ruby alternating in a distant third. Lisp and Haskell make timid appearances. No Smalltalk or Scheme, oh my.

However, there’s a surprising fact. It’s no wonder that C/C++ is the language with the most winners, since it’s the most used (23% of participants). The same goes for the other front-runners. But, when you calculate the number of wins per user, the language of choice of the highest-scoring users is, of all things, APL/J/K. Amusing.

Besides my pet ones, there are a couple of “languages” that i’d like to see in there: the terrific Computer Algebra Systems (to be featured in a blog near you any time soon) Axiom and Maxima, which seem the right tools for this kind of problem. If, like me, you think that accepting them would be a bit like cheating, don’t worry: they accept Mathematica as a programming language. And even Pencil & Paper, for all you old-school guys and gals.

So go create an account, pick your problems and there you are: lots of fun.

Orthogonal Smalltalk

Yesterday, I was longing for a perfect, orthogonally persistent world. As it turns out, this world is here right now. James Robertson, over at Smalltalk Tidbits, Industry Rants, has posted a comment on my entry:

Well, in Smalltalk – certainly in VisualWorks and Squeak – even a power outage isn’t going to end up losing you any work (unless your HD dies simultaneously, of course). As it happens, each change you make to your image is saved off in a transaction log called the change file. When you restart the image, you can load the change file into a tool, and replay all (or a set of specific) actions so as to restore your state.

He goes on to promise, er, suggest the possibility of a screencast on it by next week, so stay tuned! In the meantime, you can take a look at his screencast collection to see, if only second-hand, how developing in a truly dynamic environment feels.

Also of note is his next entry, Tools and Power, where James comments on how easy, well no, trivial incremental development is in Smalltalk, and points readers to a video to see it in action. The movie shows a debugging session in Seaside, a framework for developing web applications in Smalltalk. The developer is browsing a web page served by the system, and modifies the code generating the HTML with a new, unimplemented method. A debug trace ensues. Alt-Tab and he’s back in Squeak, with a debugger window ready. From there, it’s just a matter of two clicks to add the missing code. One more to proceed. Alt-Tab and the new page is there, in his browser. You really must see it.

I really must give Smalltalk a serious try. If you feel like me, Squeak is the obvious, open source alternative, but it’s not the only one: Cincom’s VisualWorks is free-as-in-beer for personal use, and the list goes on.

Persistent Joy

In the comments section of The Joy of REPL, a reader is posing an interesting question: how do i make my joy persistent? Or, in her words,

Dumb question – you are happily programming in the environment, and the lights go out. Have you lost your state?
How do you save “source” code? I’m interested from the angle of irb, as I like ruby. I still think in the mode of writing the source in an editor, checking it in, etc.
I can’t seem to imagine this environment in terms of day to day work, esp with a development group.

Managing persistence depends largely on your development environment. But of course, the primary method is the traditional one: you write files. You don’t need to literally type your code at the interpreter’s prompt. Any decent editor will let you send to the interpreter expressions written (and, if you wish, saved) in any editing buffer. Emacs excels in this regard, especially if you’re on Lisp and use Slime (or its cousin slime48, which works on scheme48). You can see it in action in Marco Baringer’s excellent tutorial video (bittorrent available here). The important thing to keep in mind is that you need the ability to evaluate individual expressions (as opposed to loading files as a whole), and this is made possible by the joint work of your language’s runtime support and your editor. I’m not a Ruby user, but i bet Emacs or vim, among others, give you similar facilities. That said, i would be surprised if they were as impressive as Slime’s. Because Slime is cheating: it interacts with a programming system (namely, Common Lisp’s) that does its very best to allow an incremental, organic development style. How so?

Well, as soon as you go beyond writing little toy snippets and into serious (as in bigger) programs, you’ll need some kind of module system, in the sense of a way of partitioning your namespace to avoid name collisions. Every language out there provides such a mechanism in one way or another (and Scheme famously provides as many ways as there are implementations; more on this below). Therefore, to keep enjoying our interactive way of life, we need the interpreter and the editor to cooperate in evaluating our code in the correct namespace. Common Lisp’s module system is based on packages. Each symbol known to the system belongs to one of them, and it is customary to begin your files with a form that informs whoever is interested of the package the following code belongs to… and the editor/interpreter team is definitely interested: expressions sent from a buffer to the REPL are evaluated in the correct context. Again, i don’t know whether Ruby or Python offer this synergistic collaboration, but i know that you definitely need it to attain the Joy of REPL.

Common Lisp is not unique in this regard. In the Scheme world, scheme48’s module system was also designed with interactive, incremental development in mind, and taking advantage of it in Emacs required an, in a sense, almost straightforward (but, by all means, worthwhile) effort (thanks Taylor and Jorgen). As an aside, this is what makes s48 my preferred Scheme and keeps me away from otherwise remarkable systems like PLT. (And this is why the current R6RS standard module system proposal is a show-stopper: if you happen to have a friend on the committee, please write him and include a link to Taylor Campbell’s alternative proposal and its accompanying rationale.)
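To make this concrete, here’s a toy scheme48 structure definition (my own example, nothing from s42 or Spells), together with the REPL command that lets you hack inside its namespace:

    ;; A minimal scheme48 module: an interface plus a structure.
    (define-interface greeter-interface
      (export greet))

    (define-structure greeter greeter-interface
      (open scheme)
      (begin
        (define (greet name)
          (string-append "hello, " name))))

    ;; At the scheme48 REPL, once the definitions are loaded:
    ;;   ,in greeter          ; evaluate forms inside the package
    ;;   (greet "world")      ; => "hello, world"

With slime48, expressions sent from an Emacs buffer get evaluated in the right package automatically, which is exactly the cooperation described above.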

Thus, when the lights come back, you recover your previous environment by reloading your files. Good module systems provide means to streamline this operation, typically (but not always) by storing the package definitions in separate files. But this is still a nuisance, isn’t it? I must wait for all my files to be reloaded and maybe byte-compiled… Don’t despair, there are better ways. Most CL implementations and several Schemes (MIT/GNU Scheme and, again, scheme48 come to mind) allow you to save your complete state, at any time, in what is called an image file. This image contains a binary snapshot of the interpreter’s state, and you can reload it at any later time. Being a binary representation, it will come to life blazingly fast. Besides Lisp, Smalltalk is the paradigmatic (and possibly pioneering, but i’m not 100% sure on this) image-based language: for instance, in Squeak, the only way to launch the environment is loading a previously saved image, which contains detailed information about your previous state (including the graphical environment). In this sense (and many others), Smalltalk is a dynamic programmer’s dream come true.
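In scheme48, for instance, taking and resuming a snapshot is a matter of one REPL command and one shell flag (a sketch; the image name is made up):

    ;; At the scheme48 REPL: dump the current heap to disk.
    ;;   ,dump my-session.image "before the lights went out"
    ;;
    ;; Later, from the shell, resume exactly where you left off:
    ;;   $ scheme48 -i my-session.image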

Image files make things even better, but not perfect: you still need to save your state every now and then. In an ideal world, persistence should be automatic, managed behind the scenes by the system, even by the operating system, just like the garbage collector we have come to know and love in our dynamic environments manages memory for us. This nirvana is called Orthogonal Persistence but, unfortunately, we’re not there yet. I first heard of OP from the guys at the Tunes project, where François-René Bân Rideau and friends have envisioned what i view as the ideal computing environment. Unfortunately, to this day it remains in the Platonic realm of ideals (but this doesn’t prevent their having one of the best online knowledge bases on computer science). Another interesting project in this regard, with actually running code that may interest the pythonistas among you, is Unununium, an operating system built around the idea of orthogonal persistence. Finally, in this context it is also worth mentioning again Alan Kay’s brainchild Squeak, which provides an environment that, without being an entire OS, in many ways isolates you in a wonderland of its own.

The Joy of REPL

Back in the old days i was a macho C++ programmer, one of those sneering at Java or any other language but C, willing to manage my memory and pointers and mystified by the complexity of the template syntax (it was difficult and cumbersome, ergo it had to be good). Everyone has a past.

Things began to change when i decided to add Guile extensibility to GNU MDK. I was using the project as an excuse to learn everything one has to learn to write free software, from parsers with flex and bison, to documentation generators like texinfo and doxygen, to localisation via gettext. Next thing was scriptability, and in those days Scheme was still the way to extend your GNU programs (last time i checked, GNOME was full of XML, a windows-like registry and, oh my, C#… no Scheme (or good taste) to be seen).

So, when i first encountered Scheme i was high on static type checking, object oriented programming in its narrow C++ flavour, and all that jazz. I didn’t understand immediately what was so cool about having an interpreter, and defining functions without the compiler checking their signature at every single call made me feel uneasy. I was told that i still had strong type checking in Lisp, but that it is deferred to run time, instead of happening at the apparently safer compile phase. I didn’t get it. Thank god, SICP was so much fun that i kept on learning, and i kept wondering for a while what was so great about interpreters and dynamic typing.

Problem was, i was writing C programs in Scheme. In a compiled language (a la C) and, to some degree, in any statically typed one, your code is dead. You write pages and pages of inert code. You compile it. Still dead. Only when you launch that binary does it come to life, but it lives elsewhere, beyond your reach. Admittedly, i’m exaggerating: you can reach it in a convoluted way via a debugger. But still. A debugger is an awkward beast, and it will only work with the whole lot: your entire program compiled, linked and whatnot.

Enter a dynamic language. Enter its REPL. When you have a, say, Lisp interpreter at your disposal you don’t write your code first and load it later (that’s what i was doing at first). You enter your code piecewise, function by function, variable by variable, at that innocent looking prompt. You develop incrementally, and, at every single moment, your objects and functions are alive: you can access them, inspect them, modify them. Your code becomes an organic creature, plastic. It’s almost not programming, but experimenting.
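A trivial session gives the idea (any Scheme REPL will do; the transcript is made up):

    > (define (greet name) (string-append "hello, " name))
    > (greet "world")
    "hello, world"
    > (define (greet name) (string-append "hi there, " name))
    > (greet "world")                ; the living code has changed
    "hi there, world"

Nothing was recompiled, nothing restarted: the new definition simply replaced the old one inside the running system.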

Maybe you’re raising a skeptical eyebrow. Maybe you have one of those modern visual-something debuggers that lets you modify your compiled code on the fly and continue running it using the new definitions, and you think that’s what i’m talking about… Well, no, sorry, that’s only part of what i’m talking about. To begin with, with the debugger you can only continue executing your program; at the REPL, i can do whatever i want. But that’s not all. We are talking about a dynamically typed language. That means that me and my little REPL have much more leeway to modify the living code, and thus much more room to grow and evolve it.

At the end of the day, dynamically typed languages give me freedom. Programming is a creative process and greatly benefits from that freedom. At first, abandoning the safety net provided by static typing was a little bit scary, but as i grew as a programmer i felt more and more confident, and gradually the initially uneasy feeling morphed into joy. The joy of REPL.

Richard P. Gabriel has done a far better job of beautifully conveying what i’m trying to express in his excellent introduction to David Lamkins’ book Successful Lisp, entitled The Art of Lisp and Writing. Unfortunately, i haven’t found it online, though you can read the first few pages in amazon.com’s “look inside this book” section for the book. He does it again in his essay Do Programmers Need Seat Belts?. Paul Graham has famously argued in favour of bottom-up development in many of his essays, especially in Programming Bottom-Up:

It’s worth emphasizing that bottom-up design doesn’t mean just writing the same program in a different order. When you work bottom-up, you usually end up with a different program. Instead of a single, monolithic program, you will get a larger language with more abstract operators, and a smaller program written in it. Instead of a lintel, you’ll get an arch.

Finally, please note that i’m well aware that the static vs. dynamic typing debate is still open, and that decent type systems like those in Haskell and ML have, arguably, much to offer on the way to solid software engineering. Type theory also has a powerful and beautiful mathematical foundation. The above is just my gut feeling and current position on these issues, and i don’t pretend to have backed it with solid technical argumentation. Nor was that my goal. I’m more interested here in programming as a creative activity than as engineering.

Sketchy LISP Vol. II

Sketchy LISP Vol. II – Reference is available. From the introduction:

Sketchy is an interpreter for a purely applicative dialect of Scheme. It may be considered an implementation of pure LISP plus global definitions, first-class continuations and input/output functions. Like its first volume, this part focuses on the functional aspects of the Scheme language. While the first volume provides a step-by-step introduction to Scheme in general, this part describes the Sketchy subset as a formal system for re-writing terms. It introduces semi-formal definitions for Scheme data, programs, a small set of primitive functions, and a set of rules for reducing purely applicative expressions to normal forms.

As you can see, this second volume contains more advanced stuff, and gives you yet another opportunity to dig deeper into abstract thinking. I just ordered my copy, but the full text is available online if you’d rather save twelve bucks.
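To get a feel for the term-rewriting view the book takes, here’s the kind of step-by-step reduction it formalises (a toy example of mine, not one from the text):

    ;; Reducing a purely applicative expression to its normal form:
    ((lambda (x) (* x x)) (+ 1 2))
    ;; => ((lambda (x) (* x x)) 3)   ; reduce the argument
    ;; => (* 3 3)                    ; beta-reduce the application
    ;; => 9                          ; the normal form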

Schemes i have seen

Recently, there have been several releases of non-mainstream Scheme systems (for whatever value mainstream has in the context of Scheme implementations). All of them have a long history and are developed by people high up in the community’s iconography. They are also very portable and, hence, you’ll probably be able to play with them, learn and take advantage of their unique features (both as a programmer and as an implementor: the sources are there in the open too).

The Gambit Scheme System, by Marc Feeley, includes an interpreter and a compiler using C as an intermediate language. Among many interesting features (and the coolest logo), it has the ability to produce standalone executables, an often requested feature. (I’m personally more interested in how good its interpreter and module system are at providing a truly dynamic development environment, but that’s another story.) Gambit features an extremely efficient thread system, capable of supporting millions (sic) of concurrent processes. Another nifty feature is readtables, akin to Common Lisp’s reader macros, making it an excellent choice if you’re into the parsing business or simply want to have a little extra fun. My only nit (and the reason i’m not trying to include Gambit support in Spells) is that its support for SRFIs is relatively weak: are you looking for an interesting and fairly doable hacking project?

Update: As it turns out, it’s better than i thought. One of the Gambit developers has pointed me to this quick and dirty, yet convenient hack that, when run in Gambit’s installation directory, will automagically install the indispensable SRFI-1.
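To get a taste of those featherweight threads, here’s a tiny sketch using Gambit’s SRFI-18-style thread API (my own silly example; real uses would do actual work in each thread):

    ;; Spawn n threads, each computing a square, and collect the results.
    (define (squares-in-parallel n)
      (let loop ((i 0) (threads '()))
        (if (= i n)
            (map thread-join! (reverse threads))
            (loop (+ i 1)
                  (cons (thread-start!
                         (make-thread (lambda () (* i i))))
                        threads)))))

    (squares-in-parallel 5) ; => (0 1 4 9 16)

Since thread-join! returns the thread’s end result, the whole thing reads like a parallel map.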

Stklos also released a new version this January. After being dormant for some time, development seems to progress at full steam lately. Stklos is the successor of Stk, initiated by Erick Gallesio in the early 90s, and was pointed out by RMS as “the Tcl substitute” in his famous Why you should not use Tcl flamewar. Unfortunately, the FSF later chose Guile as the Tcl substitute, instead of this very interesting implementation, whose current incarnation includes some unique offerings. To begin with, Stklos features an efficient and powerful object system based on CLOS, another of my CL-envies. Also of note is that it comes with Gtk+ integration: follow the link to see some nice screenshots with source code. Personally, and with an eye on the s48-worlds project, i also find interesting its ability to install extensions downloaded from public repositories. Finally, its SRFI support is really excellent. On the downside, well, its logo is not so cool, is it?
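If you’ve never seen a CLOS-style object system in Scheme, it looks roughly like this (a sketch following the usual Stklos/GOOPS conventions; the class and accessor names are mine):

    ;; A class with two slots, plus a generic function specialised on it.
    (define-class <point> ()
      ((x :init-keyword :x :accessor point-x)
       (y :init-keyword :y :accessor point-y)))

    (define-generic distance)

    (define-method distance ((p <point>))
      (sqrt (+ (* (point-x p) (point-x p))
               (* (point-y p) (point-y p)))))

    (distance (make <point> :x 3 :y 4)) ; => 5

Methods live outside classes and dispatch on the types of their arguments, which is precisely the CLOS flavour i envy.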

The Larceny Project has just announced the Operation Drop-Kick release of Petit Larceny, a portable Scheme-to-C compiler which now supports OS X, GNU/Linux, Windows and Sparc/Solaris. The Larceny project was initiated by Will Clinger back in 1991, and has a strong focus on garbage collection and compiler optimisations. I have not really used it, but my uninformed opinion is that Larceny will be mainly useful to those of you doing research (or with a strong interest) in those areas.
