A Haskell bookshelf

My favourite Scheme implementation is Scheme48, which takes its name from its being initially implemented in 48 hours by Richard Kelsey and Jonathan Rees in August 1986. They used Common Lisp on a Symbolics 3600 and Maclisp on a PDP-10.

Now, thanks to Jonathan Tang’s tutorial, you can write yourself a scheme in 48 hours, but using Haskell instead of Lisp. The tutorial is intended as an introduction to Haskell for Schemers (and (brave) programmers in general) wanting to get started in the purest member of the functional family. And it is extremely fun: instead of following the boring language-features-overview route, it plunges right into interesting, hands-on stuff like parser combinators (one of the most beautiful applications of Haskell, if you ask me) or monads. The lessons include exercises, so you really have no excuse for not learning Haskell once and for all. Believe me, it’s good for your (mental) health.
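
To give you a taste, here is (give or take the exact character class) the kind of opening parser the tutorial builds with the Parsec library:

import Text.ParserCombinators.Parsec

-- a parser that accepts any of the symbol characters allowed in
-- Scheme identifiers
symbol :: Parser Char
symbol = oneOf "!#$%&|*+-/:<=>?@^_~"

One line that you can already run (parse symbol "" "$" yields Right '$') and, more importantly, combine with other parsers to grow a full Scheme reader.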

If you’re totally new to Haskell and find the pace of Jonathan’s tutorial a bit hard to follow, you may start with a tour of the Haskell syntax and the Hitchhikers’ Guide to Haskell, after making sure, of course, that you have your towel at hand. As for me, well i’ve been planning to learn Haskell for some years now and, since i happen to love books, i have accumulated a little (but select) Haskell library, together with a sort of learning roadmap that, unfortunately, i have yet to complete. Just in case the above links whetted your appetite and you happen to have more time than i do, here is my bookworm guide to Haskell enlightenment.

The little Haskeller

There are many introductory Haskell books, but i was (when i started learning) keen on those that also provide a good grounding in functional programming, which was a totally new world for me at the time. Therefore, my first Haskell book, and still a wholehearted recommendation, was Richard Bird and Philip Wadler‘s Introduction to Functional Programming Using Haskell, one of the most elegant books on programming i’ve ever read.

One of the nicest things about writing Haskell code is that it’s the closest one can probably get to writing pure maths while programming. Thus, learning Haskell is an excellent way to learn more about maths and logic. If that sounds to you like doubling the fun (and you’re not yet a math wizard), The Haskell Road to Logic, Maths and Programming (see also this extensive review (PDF)) by Jan van Eijck and Kees Doets is definitely the book for you. Its purpose is to teach logic and mathematical reasoning in practice, and to connect logical reasoning with Haskell programming. And, in my opinion, it does a pretty good job at that. Although it begins with the very basics, it includes chapters on far from trivial (i.e., fun!) stuff like, for instance, corecursion or Cantor sets. And i found it amazing how natural it is to express logical and mathematical ideas in Haskell.
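
If you want a quick taste of how natural this feels, here is a minimal corecursive definition (my own toy example, not the book’s): the infinite list of naturals, defined in terms of itself, with lazy evaluation doing the heavy lifting:

-- the stream of naturals refers to itself: each element is the
-- previous one plus one
nats :: [Integer]
nats = 0 : map (+ 1) nats

-- take 5 nats ==> [0,1,2,3,4]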

But maybe you prefer to learn a bit of applied Computer Science with Haskell, instead of abstract maths. No problem: just grab a copy of Fethi Rabhi and Guy Lapalme‘s Algorithms, a functional programming approach, which challenges the academic establishment by teaching algorithms using Haskell, that is, in a purely functional context. Revisiting classical (and apparently imperative) algorithms like sorting, tree and graph traversal or dynamic programming methods under a functional light is a refreshing experience, and an eye-opener: the fact that all the typical algorithms are covered in just 256 pages is a testament to the authors’ claim that functional programming leads to smaller, clearer and more elegant programs. Besides, as you probably know, Haskell features lazy evaluation, which poses entirely new (and pretty instructive) challenges when it comes to evaluating the efficiency of an algorithm.
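
To see what that claim looks like in practice, here is the textbook functional quicksort (an illustration of the style, not necessarily the book’s own rendition): two equations that simply mirror the recursive structure of the problem:

-- sorting as a declarative, two-case recursive definition
qsort :: Ord a => [a] -> [a]
qsort [] = []
qsort (p : xs) = qsort [x | x <- xs, x < p] ++ [p] ++ qsort [x | x <- xs, x >= p]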

The seasoned Haskeller

Once you’re comfortable with the language and have learnt all about monads, the best thing to do is to study non-trivial Haskell applications. To that end, you can hardly find a better book than The Fun of Programming, a Festschrift celebrating Richard Bird’s sixtieth birthday. Its thirteen chapters, written by luminaries in the field, describe applications to fun things like musical composition or graphical design, as well as advanced programming techniques such as data structure design, interpreters or optimization. Amazing.

I also own a copy of Paul Hudak‘s The Haskell School of Expression, which is often recommended as a Haskell primer. In my opinion, it goes too quickly into the nitty-gritty details of (music and multimedia) implementations to be a good first book, but it is probably a good second book on Haskell programming. But that’s just me: as you can read in the review above, some people have diverging opinions.

The fun never ends…

After all these readings, and a bit more about monads, you’ll begin to feel like an expert, and may be interested in more advanced stuff. I’ve got a couple of books reserved for that moment. The first one is The Algebra of Programming, by Richard Bird and Oege de Moor, and its purpose, simply stated, is to show how to calculate programs algebraically using an old friend of ours: category theory, including the bananas that will be the main theme of my second post on these matters. There you’ll find an in-depth, abstract treatment of algorithmic strategies with an eye on correctness proofs. Although heavy on theory, there are interesting applications of this algebra of programming, like the conversion between binary and decimal numbers in TeX, the best way of drawing a cylinder in LaTeX or good data compression algorithms. A slight nuisance is that this book uses an old Haskell dialect (a precursor, actually) called Gofer, but the conversion to modern notation is pretty straightforward.

Finally, anyone serious about algorithms and data structures should read Chris Okasaki’s Purely Functional Data Structures, an amazing tour of advanced algorithmics using functional programming languages, with source code in SML and Haskell. The exposition is extremely elegant and concise, and i’ve spent many, many hours immersed in its 220 pages (i can’t think of any other programming book with a better signal-to-noise ratio, really).

That’s it. I just hope that 48-hour days are invented some time soon. Happy reading!

Moo-oriented programming

I rarely play computer games, probably because i find programming so fun (or maybe just because i’m a dull boy). Therefore, this recent Wired News article was a total surprise to me. It reviews playsh, a pretty interesting collaborative programming environment.

If you’re not as dull as i am, you’ll already know about MUDs and MOOs, the popular distributed role-playing platforms. Playsh replaces the grues and spells of your typical MOO with whatever program code you’re working on. And not just yours: also the code of any other coder connected to your server. The idea of wandering around rooms where you find your programs’ objects and APIs, picking them up and modifying them, possibly in collaboration with other programmers in the room, is quite amusing. We are just taking the living environment of dynamic languages one step further, making it collaborative.

Playsh is Matt Webb‘s (of Mind Hacks fame) brainchild, and he has already released a Python-based proof-of-concept implementation developed in collaboration with Ben Cerveny. Documentation is still scarce, but this blog entry of Matt’s gives a good overview of his goals and plans for the future, and there’s also some info around on Matt and Ben’s recent Etech session on playsh. Also worth reading are Matt’s ideas on the history of physics and the future of computing and his essay on modernity and protest, which provide the intellectual background that has led Matt to playsh.

Playsh depends on quite a few external modules and installing it is currently a bit of a chore, but if you get it running you’ll have a text-based interface that allows coding network objects (accessed via standard web protocols and formats like HTTP or RSS) as you wander around its MUD-like rooms together with any other player, er, programmer. Each object you encounter has a set of shared properties (akin to instance variables) and verbs (methods on them), the latter being user-specific. To modify and add verbs or objects, one drops from the navigation environment into an interactive Python interpreter. The nifty thing is that not only the objects, but the interpreter itself is shared among programmers: you can see your colleagues typing their code and have your say on it!

Interesting as they are, these ideas are by no means completely new, as any Croquet user will tell you. But still.

Programmers go bananas

Introduction: lists galore

I learned programming backwards, plunging right into C and, shortly after, C++ and Java from the very beginning. I was knee-deep in complex data structures, pointers and abstruse template syntax in no time. And the more complex it all felt, the more i thought i was learning. Of course, i was clueless.

Reading SICP and learning about functional programming changed it all. There were many occasions for revelation and awe, but one of my most vivid recollections of that time is my gradual discovery of the power of simplicity. At about half way into SICP i realised with wonder that that beautiful and powerful world was apparently being constructed out of extremely simple pieces. Basically, everything was a list. Of course there were other important ingredients, like procedures as first-class objects, but the humble list was about the only data structure to be seen. After mulling it over for a bit, i saw where lists draw their power from: recursion. As you know, lists are recursively defined data types: a list is either the empty list or an element (its head) followed by another list (its tail):

list = []
list = a : list

where i’m borrowing Haskell’s notation for the empty list ([]) and the list constructor (:), also known by lispers as () and cons. So that was the trick, i thought: lists have recursion built-in, so to speak, and once you’ve read a little bit about functional programming you don’t need to be sold on the power and beauty of recursive programs.
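
In fact, we can spell that recursive definition out as a Haskell data type (the built-in [a] is, modulo syntax, exactly this), and watch how functions over lists mirror the recursion of the type itself:

-- a list is either empty or an element followed by another list
data List a = Nil | Cons a (List a)

-- recursive functions follow the shape of the type
len :: List a -> Int
len Nil = 0
len (Cons _ rest) = 1 + len rest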

It is often the case that powerful and beautiful yet simple constructs have a solid mathematical foundation, and only when you grasp it do you really realize how powerful, beautiful and amazingly simple that innocent-looking construct is. Lists, and recursive operations on them, are an excellent case in point. But the path connecting them to their mathematical underpinnings is a long and winding one, which lies in the realm of Category Theory.

I first became acquainted with the relationship between categories and recursive programming by reading Functional Programming with Bananas, Lenses, Envelopes and Barbed Wire, by Erik Meijer, Maarten Fokkinga and Ross Paterson. Albeit very enjoyable, this paper presupposes a high degree of mathematical sophistication on the reader’s side. I will try in this article to give you a simplified overview of the concepts involved, including Category Theory, its application to programming languages and what funny names like catamorphism, anamorphism or lambda-lifting have to do with your everyday list manipulations. Of course, i’ll only be scratching the surface: interspersed links and the Further reading section provide pointers to more in-depth explorations of this wonderland.

Categorical interlude

Categories are (relatively) simple constructs. A category consists of a set O of objects, and a set A of arrows between elements of O. Arrows are composable: if there’s an arrow from a to b, and another one from b to c, there must be an arrow from a to c (where a, b and c are elements of O). Besides, composition is associative: if you have arrows from a to b, b to c, and c to d, you can go from a to d via two equivalent paths, namely, first from a to c and then from c to d, or first from a to b and then from b to d. Finally, for each element a in O there’s an arrow which goes from a to itself (called an identity), such that following it changes nothing. These properties are better visualized with a diagram (or a bit of mathematical notation), as shown in the image on the right.

A category captures a mathematical world of objects and their relationships. The canonical example of a category is Set, which contains, as objects, (finite) sets and, as arrows, (total) functions between them. But categories go far beyond modeling sets. For instance, one can define a category whose objects are natural numbers, and whose ‘arrows’ are provided by the relation “less or equal” (that is, we say that there is an arrow joining two numbers a and b if a is less than or equal to b). What we are trying to do with such a definition is to somehow capture the essence of ordered sets: not only are integers ordered, but so are dates, lemmings in a row, a rock’s trajectory or the types of the Smalltalk class hierarchy. In order to abstract what all those categories have in common we need a way to go from one category to another preserving the shared structure in the process. We need what mathematicians call an isomorphism, which is the technically precise manner of stating that two systems are, in a deep sense, analogous; this search for commonality amounts to looking for concepts or abstractions, which is what mathematics and (good) programming are all about (and, arguably, intelligence itself, if you are to believe, for instance, Douglas Hofstadter‘s ideas).

To boot, our definition of a category already contains the concept of isomorphic objects. Think of an arrow from a to b as an operation that transforms a into b. An arrow from b to a will make the inverse transformation. If composing both transformations gives you the identity, you are back to the very same object a, and we say that a and b are isomorphic: you can transform one into the other and back at will. In a deep sense, this concept captures a generic way of expressing equality that pervades all maths: if you’re not afraid of a little bit of maths, Barry Mazur‘s essay When is a thing equal to some other thing? is an excellent introduction to Category Theory with an emphasis on the concept of equality. Among many other things, you will learn how the familiar natural numbers can be understood as a category, or how an object is completely defined by the set of its transformations (and, therefore, how to actually get rid of objects and talk only of transformations; i know this is stretching and mixing metaphors (if not plain silly), but this stress on arrows, as opposed to objects, reminded me of Alan Kay’s insistence on focusing on messages rather than objects). Another introductory article with emphasis on categories as a means to capture sameness is R. Brown and T. Porter’s Category Theory: an abstract setting for analogy and comparison.

Not only objects inside a category can be transformed into each other. We can also reveal the common structure of two disjoint categories by means of a functor mapping one category into another. A functor consists of two functions: one that maps each object of the first category to an object of the second, and another one putting in correspondence arrows in one category with arrows in the second. Besides, these functions must preserve arrow composition. Let me spell this out mathematically. Consider two categories, C and C’, with object sets O and O’ and arrow sets A and A’. A functor F mapping C to C’ will then consist of two functions (Fo, Fa); the first one taking elements of O to elements of O’:

Fo: O -> O’

Fo(a) in O’ for every a in O

and the second one taking arrows from A to arrows in A’:

Fa: A -> A’

Fa(f) in A’ for every f in A

and such that, if f is an arrow from a to b in C, Fa(f) is an arrow from Fo(a) to Fo(b) in C’. Moreover, we want following arrows in C to be ‘analogous’ to following them in C’, i.e., we demand that

Fa(fg) = Fa(f)Fa(g)

On the left-hand side above, we are composing two arrows in C and then going to C’, while on the right-hand side we first take each arrow to C’ and, afterwards, compose them there. If C and C’ have the same structure, these two operations must be equivalent. Finally, F must preserve identities: if i is the identity arrow for an element a in O, Fa(i) must be the identity arrow for Fo(a) in O’. The diagram on the left shows a partial graph (i’m not drawing the identity arrows and their mappings) of a simple functor between two categories, and ways of going from an object a in the first category to an object x in the second one which are equivalent thanks to the functor’s properties.

As you can see in this simple example, the functor gives us the ability to see the first category as a part of the second one. You get a category isomorphism in the same way as between objects, i.e., by demanding the existence of a second functor from C’ to C (you can convince yourself that such a functor does not exist in our example, and, therefore, that the two categories in the diagram are not isomorphic).

You have probably guessed by now one nifty property of functors: they let us go meta and define a category whose objects are categories and whose arrows are functors. Actually, Eilenberg and Mac Lane‘s seminal paper General Theory of Natural Equivalences used functors and categories of categories to introduce categories for the first time (natural transformations are structure-preserving maps between functors: this Wikipedia article gives an excellent overview of them).

But enough maths for now: it is high time to show you how these rather abstract concepts find their place in our main interest, programming languages.

Categories and programming languages

About the only similarity between C and Haskell programming is that one spends a lot of time typing ASCII arrows. But of course, Haskell’s are much more interesting: you use them to declare the type of a function, as in

floor :: Double -> Int

The above stanza declares a function that takes a real argument (a Double) and returns an integer. In general, a function taking a single argument is declared in Haskell following the pattern

fun :: a -> b

where a and b are types. Does this ring a bell? Sure it does: if we identify Haskell’s arrows with categorical ones, the language types could be the objects of a category. As we have seen, we need identities

id :: a -> a
id x = x

and arrow composition, which in Haskell is denoted by a dot

f :: b -> c
g :: a -> b
fg :: a -> c
fg = f . g

Besides, associativity of arrow composition is ensured by Haskell’s referential transparency (no side-effects: if you preserve referential transparency by writing side-effect-free functions, the order in which you evaluate them won’t matter): we’ve got our category. Of course, you don’t need Haskell, or a statically typed language for that matter: any strongly typed programming language can be modelled as a category, using as objects its types and as arrows its functions of arity one. It just happens that Haskell’s syntax is particularly convenient, but one can define function composition easily in any decent language; for instance, in Scheme one would have

(define (compose f g) (lambda (x) (f (g x))))

Functions of more than one argument can be brought into the picture by means of currying: instead of writing a function of, say, 2 arguments:

(define (add x y) (+ x y))
(add 3 4)

you define a function which takes one argument (x) and returns a function which, in turn, takes one argument (y) and returns the final result:

(define (add x) (lambda (y) (+ x y)))
((add 3) 4)

Again, Haskell offers a pretty convenient syntax, letting you define add as:

add x y = x + y

which gets assigned, when applied to integers, the following type:

add :: Int -> (Int -> Int)

that is, add is not a function from pairs of integers to integers, but a function that takes an integer and returns a function of type Int -> Int. Finally, we can also deal with functions taking no arguments and constant values by introducing a special type, called unit or 1 (or void in C-ish), which has a unique value (spelled () in Haskell). Constants of our language (as, e.g., True or 43.23) are then represented by arrows from 1 to the constant’s type; for instance, True is a 1 -> Bool arrow. The unit type is an example of what in category theory is known as a terminal object.
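
As a quick sketch of that encoding (illustrative only: this is not how Haskell actually defines its constants), the constants mentioned above become arrows from unit:

-- constants encoded as arrows from the unit type
true :: () -> Bool
true () = True

fortyThree :: () -> Double
fortyThree () = 43.23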

Now that we have successfully modelled our (functional) programming language as a category (call it C), we can use the tools of the theory to explore and reason about the language constructs and properties. For instance, functors will let me recover the original motivation of this post and explore lists and functions on them from the point of view of category theory. If our language provides the ability to create lists, its category will contain objects (types) of the ‘list of’ kind; e.g. [Int] for lists of integers, [Bool] for lists of booleans and so on. In fact, we can construct a new sub-category CL by considering list types as its objects and functions taking and returning lists as its arrows. For each type a we have a way of constructing a type, [a] in the sub-category, i.e., we have a map from objects in C to objects in CL. That’s already half a functor: to complete it we need a map from functions in C to functions in CL. In other words, we need a way to transform a function acting on values of a given type to a function acting on lists of values of the same type. Using the notation of the previous section:

Fo(a) = [a]
Fa(f: a -> b) = f': [a] -> [b]

Fa is better known as map in most programming languages. We call the process of going from f to f' lifting (not to be confused with a related, but not identical, process known as lambda lifting), and it’s usually pretty easy to write an operator that lifts a function to a new one in CL: for instance in Scheme we would write:

(define (lift f) (lambda (lst) (map f lst)))

and for lift to truly define a functor we need it to behave well with respect to function composition:

(lift (compose f g)) = (compose (lift f) (lift g))

We can convince ourselves that this property actually holds by means of a simple example. Consider the function next, which takes an integer to its successor; its lifting (lift next) will map a list of integers to a list of their successors. We can also define prev and (lift prev) mapping (lists of) integers to (lists of) their predecessors. (compose next prev) is just the identity, and, therefore, (lift (compose next prev)) is the identity too (with lifted signature). But we obtain the same function if we compose (lift next) and (lift prev) in CL, right? As before, there’s nothing specific to Scheme in this discussion. Haskell even has a Functor type class capturing these ideas. The class defines a generic lift operation, called fmap, which actually generalizes our list lifting to arbitrary type constructors:

fmap :: (a -> b) -> (f a -> f b)

where f a is the new type constructed from a. In our previous discussion, f a = [a], but if your language gives you a way of constructing, say, tuples, you can lift functions on given types to functions on tuples of those types, and repeat the process with any other type constructor at your disposal. The only condition for it to be called a functor is that identities are mapped to identities and composition is preserved:

fmap id = id
fmap (p . q) = (fmap p) . (fmap q)
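
Here is the composition law checked on our next/prev example, together with a functor for a constructor other than lists (a hand-rolled sketch; Haskell’s standard instances differ in the details):

next, prev :: Int -> Int
next = (+ 1)
prev = subtract 1

-- both expressions evaluate to [1,2,3], since next . prev is the identity
composedThenLifted, liftedThenComposed :: [Int]
composedThenLifted = fmap (next . prev) [1, 2, 3]
liftedThenComposed = (fmap next . fmap prev) [1, 2, 3]

-- lifting functions to pairs of equal type instead of lists
data Pair a = Pair a a

instance Functor Pair where
  fmap f (Pair x y) = Pair (f x) (f y)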

I won’t cover the use of type constructors (and their associated functors) other than lists, but let me just mention a couple of examples: monads, another paradigmatic one, beautifully (and categorically) discussed by Stefan Klinger in his Programmer’s Guide to the IO Monad – Don’t Panic (also discussed at LtU), and the creation of a dance and music library, for those of you looking for practical applications.

To be continued…

Returning to lists, what the lifting and categorical description above buys us is a way to formalize our intuitions about list operations, and to transfer procedures and patterns on simple types to lists. In SICP’s section on Sequences as conventional interfaces, you will find a hands-on, non-mathematical dissection of the basic building blocks into which any list operation can be decomposed: enumerations, accumulators, filters and maps. Our next step, following the bananas article i mentioned at the beginning, will be to use the language of category theory to provide a similar decomposition, but this time we will talk about catamorphisms, anamorphisms and similarly funny named mathematical beasts. What the new approach will buy us is the ability to generalize our findings beyond the list domain and onto the so-called algebraic types. But this will be the theme of a forthcoming post. Stay tuned.
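
As a down-to-earth preview of that decomposition, here is SICP’s pipeline (enumerate, filter, map, accumulate) written as a Haskell one-liner (a sketch of the pattern, with a made-up example function):

-- sum of the squares of the odd numbers up to n:
-- enumerate [1..n], filter odd, map squaring, accumulate with (+)
sumOddSquares :: Int -> Int
sumOddSquares n = foldr (+) 0 (map (^ 2) (filter odd [1 .. n]))

-- sumOddSquares 5 ==> 1 + 9 + 25 == 35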

Further reading

The best introductory text on Category Theory i’ve read is Conceptual Mathematics: A First Introduction to Categories by F. William Lawvere (one of the fathers of Category Theory) and Stephen Hoel Schanuel. It assumes no sophisticated mathematical background, yet it covers lots of ground. If you feel at home with maths, the best option is to learn from the horse’s mouth and get a copy of Categories for the Working Mathematician, by Mac Lane.

The books above do not deal with applications to Computer Science, though. For that, the canonical reference is Benjamin Pierce’s Basic Category Theory for Computer Scientists, but i find it too short and boring: a far better choice is, in my opinion, Barr and Wells’ Category Theory and Computer Science. A reduced version of Barr and Wells’ book is available online on the site for their course Introduction to Category Theory. They are also the authors of the freely available Toposes, Triples and Theories, which will teach you everything about monads, and then more. Maarten Fokkinga is the author of this 80-page Gentle Introduction to Category Theory, with a stress on the calculational and algorithmic aspects of the theory. Unless you have a good mathematical background, you should probably take gentle with a grain of salt.

Let me close by mentioning a couple of fun applications of Category Theory, for those of you that know a bit about it. Haskell programmers will like this implementation of fold and unfold (as a literate program) using F-(co)algebras and applied to automata creation, while those of you with a soft spot for Physics may be interested in John Baez’s musings on Quantum Mechanics and Categories.

The clones strike back

Back at Ozone‘s you can find a crash introduction to Io, a relatively new prototype-based language. As explained on its official site:

Io is a small, prototype-based programming language. The ideas in Io are mostly inspired by Smalltalk (all values are objects), Self (prototype-based), NewtonScript (differential inheritance), Act1 (actors and futures for concurrency), LISP (code is a runtime inspectable/modifiable tree) and Lua (small, embeddable).

Besides running on all the usual platforms, and some not so usual ones like Symbian and Syllable, it offers an interesting set of libraries including sockets, databases, OpenGL, some crypto APIs or, notably, an Objective-C bridge.

Io’s syntax, if anything, is smalltalkish, and it is about as simple as it gets: the language has no keywords! It also features decent performance, sometimes not much worse than Python’s or Ruby’s.

If you’re interested in prototype-based languages, and want to try something simpler than Slate or newer than Self, Io looks like an option worth considering. Besides, applications like this one seem to point to a relatively mature language.

On a loosely related note, schemers interested in prototypes (like myself) may find Jorgen Schäfer’s Prometheus an excellent way to get acquainted with this fascinating subject, and maybe spend a couple of fun evenings implementing selfish patterns (as explained in the article by Brian Foote that gives its name to this post) in Scheme.

Happy cloning!

Dynamic languages day

The presentations and videos of the Dynamic Languages Day held last month at Vrije Universiteit are available at the event’s web site. The day was organised in collaboration with the Belgian Association for Dynamic Languages, whose home page starts with a call to arms: No Java, C++, Pascal or C here!

Prof. Viviane Jonckers gives an overview of the university’s use of Scheme as the lingua franca for most computer science courses in the first two years, stressing how beneficial an early exposure to the dynamic paradigm is to students (PDF / Quicktime). A good quick overview of the features that make Scheme different.

No dynamic languages discussion could overlook Smalltalk. Johan Brichau and Roel Wuyts talked about the language’s features (PDF), including its metaobject protocol and flexible development environment, which puts the dynamic nature of Smalltalk to such good use (as demonstrated in the video).

Ellen Van Paesschen reviewed the history and main traits of prototype-oriented languages before diving into a tutorial on Self’s strengths and efficiency issues (you’ll be surprised). Her presentation is real fun, and worth seeing live. Ellen uses Self extensively in her Ph.D. dissertation, which includes a sophisticated interactive environment (called SelfSync) that creates object networks from Entity-Relationship diagrams and keeps two-way synchronisation between objects and their ER model: just take a look at this demo to see what i mean.

Finally, Pascal Costanza gave a live demonstration of Common Lisp’s Object System and its metaobject protocol, and how it can be used to add Aspect-Oriented-like features to Lisp. The talk is targeted at non-specialists, and is full of side-notes explaining CL idiosyncrasies and questions from the audience, so that you can learn some Lisp on the way even if it’s not your language. The live part of the talk includes implementing a Python-like object system using the MOP, so be sure to watch the video in addition to downloading the PDF slides and source code. If you know a little bit about CLOS, or get impressed enough by Pascal’s wizardry, don’t miss Closer, a set of libraries written by Pascal that brings CL implementations closer to the MOP specification and applies it to AOP and what Pascal calls Context-oriented Programming, or ContextL. Since i have yet to really understand what exactly ContextL’s goals are, i’d better just point you to the overview paper i’m currently reading.

Wow, i wish i had been there!

Language wars

I don’t like language wars, for obvious reasons. But i just stumbled upon one that contains certain interesting bits. In particular, you can go right to the last post of Steve Yegge (the war’s initiator, in response to a provocation by Eckel) and forget about the war: the article makes a series of interesting points when it comes to comparing languages in their proper context. Here the war is between Python and Ruby, two languages which i know only superficially, so i will let you draw your own conclusions. But in that last article, you’ll find some interesting musings about Smalltalk’s (greatly exaggerated) death, what makes a language popular (is it really the managers? or is it the engineers’ fault?), or the convenience of being a polyglot. There are many good points. Nevertheless, i find Steve’s real reason to prefer Ruby really disappointing:

But Ruby’s my favorite (as in “preferred”) language now because I can see the trajectory it’s on, especially now with Rails, and I believe it’s going to be the Next Big Thing — a Java-like phenomenon.

Many years ago, a person that i respect a lot gave me a piece of advice that i’ve tried to follow since then: never judge anything’s goodness or technical merits by its popularity. A much better metric when judging a language is, for instance, how much you can learn from it (a point beautifully made in Jared Updike’s Learning from Haskell). In my opinion, Steve is falling precisely into the trap he’s denouncing. Still a good read, though.

Peter Naur

Peter Naur has been awarded this year’s ACM Turing Award. He began his career as an astrophysicist, but soon after his Ph.D. became involved in computer science, more concretely in the definition of Algol 60: he was the editor of the 1960 Report on the Algorithmic Language Algol 60, in which the famous BNF notation for computer language grammars was introduced, as an improvement of the one used a year before by John Backus in a report on Algol 58. Nowadays, BNF belongs in every programmer’s toolset, not to mention the fact that Algol is the grandparent of all languages in the (syntactical) C family.

In 17 pages, the Algol 60 report introduced call by value and call by name, code blocks, local variables, two-branch conditionals, the for statement and dynamic arrays (if you’ve never seen Algol code before, here’s what the Towers of Hanoi look like in Algol 60). Not bad for a forerunner, but not so surprising if you read the names of its authors, which include, for instance, J. McCarthy and Alan Perlis.

In his own Turing Award lecture (or the PDF of the original manuscript), Edsger Dijkstra mentioned Algol, its report and Naur in these terms:

[...] while the definition of LISP is still a curious mixture of what the language means and how the mechanism works, the famous Report on the Algorithmic Language ALGOL 60 is the fruit of a genuine effort to carry abstraction a vital step further and to define a programming language in an implementation-independent way. One could argue that in this respect its authors have been so successful that they have created serious doubts as to whether it could be implemented at all…

Later on, Naur programmed an Algol compiler for a Danish computer called GIER. No wonder Dijkstra considered it a masterpiece.

As you can see in this commented bibliography, Naur has also made many contributions in the field of software engineering methodologies and life-cycle models. But afterwards he became more and more interested in less practical stuff.

February’s top ten

These are the ten most visited posts of February:

  1. Beyond mainstream object-oriented programming
  2. Continuation kata
  3. The Art of Lisp & Writing
  4. The Joy of REPL
  5. Scheme code kata
  6. More on exotic input methods
  7. The Lisp Dictionary
  8. As simple as possible…
  9. The Art of Metaprogramming Using Scheme
  10. Dynamic OS X Programming

Thanks again to all of you reading and commenting, with a special mention to Bill’s kind words. February has been a little less productive than January in the number of articles, although this is in part due to longer entries (and in part to real-life interference).
