Wednesday, October 31, 2007





I want to tell you about my new hobby. I've started to collect Inferno emulators. It's a little like trainspotting, where the aim is to "spot" a certain type of emulator compiled for a particular platform. You get points for running the emulator yourself and bonus points for building the emulator yourself.

Unlike trainspotting, I've come to think this hobby might not be completely pointless.

So far my collection is not large. I can only claim to have run hosted emu on Plan9, MacOSX, Linux, Solaris, and Nt. Even so, I'm proud of it and would like to add more hosts. Imagine for a moment a collection of emulators that runs on every host, past and present, and with the social organization to port it to all significant future hosts.

Wouldn't such a collection be valuable to others? For example, say you are interested in digital preservation: if you could write a program in Limbo to interpret your special file format, this collection would reduce the risk of that program one day being unable to execute.

The potentially vast expanse of this collection reminds me of Brian Eno's term "big here, long now." The small here is the desktop PC we use today, and the short now is the next few years we'll be using it. Small-here, short-now thinking is when we consider only our own desktop for its useful life and nothing beyond it. The big here is all available computing platforms, and the long now is all computing platforms past and hundreds of years into the future.

Only a few applications have the potential for such cross-platform collections, usually by virtue of C being their implementation language. C and Perl must be two of the most widely ported programs. Java, maybe. But for none of these can you download a binary collection covering all possible hosts.

There is room here for Inferno to stake a claim to being the most widely ported program, including a collection of all its binaries in one place.

The cdinstall.iso contains emus for FreeBSD, Irix, Linux, MacOSX, and Nt.

So should this be sold to application developers as a platform for long-term retrieval of file formats? Probably not. The main concern for digital preservation is to write platform emulators, along the lines of simh, qemu, JPC, and MESS. If these emulators were targeted to run on Dis, then Inferno would fill the role of universal emulator.

Ironically, programmers writing for most of the machines emulated by SIMH, or for the games consoles in MESS, were not thinking of portability. Yet that binary code is now runnable in perpetuity, because computing power has advanced so much that emulating the original hardware is not difficult.

The advice for the general developer is just to target any significant, popular, current hardware platform, and eventually it will get emulated and the software will be ported via simulation.

This collection may or may not fill the role of universal emulator, but a substantial collection must exist first as evidence that it has at least that potential. However, this alone is not the reason to do this work, because the act of building the collection, the trainspotting, is enough of an interesting pastime to make such a collection exist.


Friday, October 19, 2007

lab 80 - drawterm plugin




This post is to show that it is possible to use the Inferno IE plugin to drawterm to Plan 9. All the programs involved are part of standard Inferno.


These are the steps I took to get this running locally. It does need factotum etc., so you need to export a file system with all the pieces you need.

 % listen -vA 'tcp!*!7070' {export '#U/'}

For the net you'd probably do this using an unauth'd readonly kfs, for example. Then you need a drawterm script

  % cat /drawterm

bind -b /n/remote/dis /dis
bind -a /n/remote /
bind /n/remote/lib /lib
mkdir /n/remote/n/9win
bind -a /n/remote/n /n
load std
getlines < /n/remote/factotum  {echo $line} > /mnt/factotum/ctl
bind /n/remote/ndb.local /lib/ndb/local
9cpu -h tcp!fir -r -c 'bind -b /mnt/term/n/9win /dev; bind -a
/mnt/term/dev /dev; exec rio'

And then you need the HTML

width="800" height="600">
<PARAM name="init" value="/dis/sh.dis -c 'mount -A
tcp!!7070 /n/remote;  /n/remote/drawterm ' ">

The next steps would be to use secstore to load factotum instead of using the unencrypted file containing the keys. And you'd also need something to prompt for the name and password.

Wednesday, July 18, 2007

lab 79 - acme javascript




I'm excited about today's lab. I hope others pick up on this and experiment with it because I think some cool acme clients would come of it.

The idea behind this lab is to mix together acme, javascript and json web services.

I've been poking around at inferno's javascript, hoping to improve it. The best way of doing that is to start using it more heavily. In earlier labs I created a tool called js that ran javascript scripts outside of charon. But without knowing what set of host objects to build it has languished.

Looking to use javascript more, I've been taking another look at web services APIs, and noticing that JSON is getting strong support, especially from Google and Yahoo. I'm pleased about this, since the SOAP stuff looked so horrid. So I really want to pull JSON web services into inferno using javascript. But web services don't work too well when text is just output to the command line, they need more interaction.

So the natural thing to do is build an acme client. Acme clients can be built using shell but I don't think that is ideal. Inferno shell can really show its limits when used as a programming language. And for new users javascript is probably an easier and more familiar language to get to grips with.

I decided in this lab to create a javascript command that includes acmewin as a host object. I also added a host readUrl function meant for calling JSON services. Together I hope this makes it really simple for anyone to put together an acme client using data pulled from the internet.

The command is called Jwin and is invoked with the name of a javascript file.

% Jwin -f file.js

It opens one window for the script to operate on.

I created an Acmewin host object that is instantiated before the script is run. It has methods that mirror the acmewin library: read, writebody, tagwrite, name, clean, select, setaddr, replace, readall.

It also has two event properties, onlook and onexec, which should be assigned functions if you want to react to mouse clicks of the middle and right buttons.

Here's the simplest javascript client.

Acmewin.onexec = function(x) {
 Acmewin.writebody("onexec:" + x + "\n");
}

Acmewin.onlook = function(x) {
 Acmewin.writebody("looking at:" + x + "\n");
}

I still need to write a postUrl method. I'd like a blogger interface myself. Even though most google APIs allow read access using JSON they only allow posts in Atom format. But given the Javascript object it shouldn't be hard to build the xml string for the post.
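As a sketch of that last point, here is roughly how the Atom string might be built from a javascript object. This is not from the lab's code; the function name and the post's field names (title, body) are made up for illustration, and a real blogger post would need more of the GData envelope.

```javascript
// hypothetical sketch: build an Atom entry string for a post,
// given a javascript object; field and function names are illustrative
function atomentry(post) {
	return '<entry xmlns="http://www.w3.org/2005/Atom">' +
		'<title type="text">' + post.title + '</title>' +
		'<content type="html">' + post.body + '</content>' +
		'</entry>';
}
```

The string returned could then be handed to a postUrl method once one exists.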

The web access is through svc/webget/webget, so this should be started before using Jwin.

Below is a longer example to give you more of a flavor of an acme client written in javascript. Try this out and see how fast you can come up with your own acme client.

var undefined;
var user = "caerwyn";
var baseurl = "";
var count = "?count=20";

var tagmode = false;
function getfeeds(tag)
{
 if(tag != undefined){
  var s = System.readUrl(baseurl + user + "/" + tag);
 }else {
  var s = System.readUrl(baseurl + user + count);
 }
 for (var i = 0, item; item = Delicious.posts[i]; i++) {
  Acmewin.writebody(i + "/\t" + item.d + "\n");
 }
}

function gettags()
{
 var s = System.readUrl(baseurl + "tags/" + user);
 for(var i in Delicious.tags){
  Acmewin.writebody(i + "(" + Delicious.tags[i] + ")\n");
 }
}

Acmewin.tagwrite(" Posts Tags ");

Acmewin.onexec = function(cmd) {
 if(cmd == "Posts"){
  Acmewin.replace(",", "");
  getfeeds();
  tagmode = false;
  return true;
 }else if(cmd == "Tags"){
  Acmewin.replace(",", "");
  gettags();
  tagmode = true;
  return true;
 }
 return false;
}

Acmewin.onlook = function(x) {
 var n = parseInt(x);
 if(n >= 0 && n < Delicious.posts.length){
  Acmewin.writebody(Delicious.posts[n].u + "\n");
  return true;
 }else if(tagmode){
  Acmewin.replace(",", "");
  getfeeds(x);
  tagmode = false;
  return true;
 }
 return false;
}

And here's what the client looks like:



lab 79 code

Sunday, July 01, 2007

lab 78 - dynamic dispatch




While experimenting with creating my own alphabet-like interfaces I found this technique, which I think is fascinating, and I hope to use it soon in further experiments with persistent mounts, something I'll blog about in the future.

Here's a definition of a pick ADT that has some primitive types (I've left out the complete set to keep it short), and some helper functions to get the value from the pick.

Value: adt {
 getfd: fn(v: self ref Value): ref Sys->FD;
 gets: fn(v: self ref Value): string;
 send: fn(v: self ref Value, r: ref Value);
 pick {
 S =>
  i: string;
 F =>
  i: ref Sys->FD;
 O =>
  i: chan of ref Value;
 }
};

The thing to notice here is the recursive definition of the ADT. We don't need to define chan of string or chan of FD. The Value.O type is a channel that can handle anything of ref Value, all our primitive types including ... chan of ref Value.

So given a v : ref Value, we'd get, say, the file descriptor as follows,

 fd := v.getfd();

The pick value might already be Value.F in which case we get the file descriptor directly. On the other hand, it might be a channel, so we request the value from the channel. This is hidden away in the getfd() function so the caller doesn't know where the value will come from.

A channel is something that can bind our process to another process. This technique permits us to perform a kind of dynamic dispatch for a name, where at runtime we call a process that will supply us a value.

This is how getfd() is implemented:

Value.getfd(v: self ref Value): ref Sys->FD
{
 pick xv := v {
 O =>
  replyc := chan of ref Value;
  xv.i <-= ref Value.O(replyc);
  return (<-replyc).getfd();
 F =>
  return xv.i;
 * =>
  raise typeerror('f', v);
 }
}

Let's walk through that code. As I said, if the value is a pick type of F we already have a file descriptor and return that. If it's a channel we create another channel of ref Value and send that down the channel to request a Value from another process, which should be waiting at the other end. We then call getfd() recursively on the value we receive from the reply chan.

Yes, there's recursion again. The process we are requesting a value from could send us a channel to another process, and if it did so we'd repeat the transaction.

Note, if we ever get the wrong type we just throw a type error.
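The gets() helper declared in the ADT is presumably implemented the same way; here is a sketch of it following the getfd() pattern above (this is my reconstruction, not code from the lab):

```limbo
# sketch: same dispatch as getfd(), but unwrapping the string variant
Value.gets(v: self ref Value): string
{
 pick xv := v {
 O =>
  # ask the process at the other end for a value, then recurse
  replyc := chan of ref Value;
  xv.i <-= ref Value.O(replyc);
  return (<-replyc).gets();
 S =>
  return xv.i;
 * =>
  raise typeerror('s', v);
 }
}
```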

There is a general protocol here that all processes taking part in this transaction follow.

When a process is started it is passed a request channel from which it should wait to receive a reply chan. It should then do its job and send the result down the reply chan.

The process might also be passed ref Values as arguments, which could be bound to other processes and so on.

Here's an example expression that we want evaluated using this technique,

% xy mount {styxpersist {auth {dial tcp!host!styx}}}

Every module in that expression would implement the same interface, something like this,

Xymodule: module {
 init: fn();
 run: fn(request: chan of ref Value, args: list of ref Value);
};

Every module is launched with an already created request channel and is spawned as its own process.

runcmd(..., cmd: string, args: list of ref Value): ref Value
{
 m := loadmodule(cmd);
 req := chan of ref Value;
 spawn m->run(req, opts, args);
 return ref Value.O(req);
}

And finally, every process would follow a template like this,

run(req: chan of ref Value, args: list of ref Value)
{
 while((replyc := <-req) != nil){
  # do some work to create a value
 }
}

Let's step through the shell expression to see what is happening.

Mount requests an FD from its first argument, in this case styxpersist. Mount sends the reply chan and waits. Mount will exit once it's done, but styxpersist and the other processes will need to carry on running. Styxpersist requests a FD from auth, which requests one from dial. Dial creates the FD from dialing the remote host and returns it to auth, which authenticates on that FD and returns it to styxpersist.
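As a hedged sketch of the innermost step (my reconstruction, not the lab's code), dial's run() would create the file descriptor and answer each request with a Value.F:

```limbo
# hypothetical sketch of dial's run(): dial the address once per
# request and send back the data FD wrapped in a Value.F
run(req: chan of ref Value, args: list of ref Value)
{
 addr := (hd args).gets();
 while((r := <-req) != nil){
  (ok, c) := sys->dial(addr, nil);
  if(ok < 0)
   raise sys->sprint("fail: dial %s: %r", addr);
  # r is the requester's Value.O; reply on its channel
  r.send(ref Value.F(c.dfd));
 }
}
```

Dialing inside the loop is what makes styxpersist work: each time the connection drops, a fresh request produces a fresh FD.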

Styxpersist relays bytes between the mount point and the file descriptor it obtained from its argument. If the connection closes it will request another file descriptor, and re-attach to the styx service. In this way we have a persistent connection.

All that we are passing around is channels and file descriptors. But this approach is very flexible, since inferno handles resources as files.

The channels also allow us a great deal of flexibility with the binding of a name to its value.

Let's review what the channels and the ADT described above let us do.

Given a value, the code bound to a method call on that value is determined at run time. The code bound to getfd() is dynamically dispatched. The similarity between this and OOP, especially smalltalk, is quite strong.

A lot of the object oriented techniques in smalltalk fall out from the method of dynamic dispatch used by all objects. "Sharing and reuse mechanisms (such as delegation) are not part of the object model per se but can be added simply by overriding the 'lookup' operation". (Kay et al [PDF])

There are two ways we are supporting dynamic dispatch.

We are sending the reply chan which allows the callee to forward it on and carry on receiving requests, allowing multiple senders even when requests haven't been completed.

We are also allowing channels to be returned and requests resent if that happens. So if a process can't handle a request, it can return a channel to another process, allowing the request to be sent to a 'higher' process to see if it can respond.

A combination of the above two can also be used.

I assume there are more strategies that could be implemented. I've scratched the surface.

There is some code to go with the lab. The code was derived from fs(1), which was derived from sh-alphabet(1). However, I don't think the specific technique I described above was used in either. The code does not yet implement the shell expression described above, but a simpler one,

% xy mount {styxmon {auth {dial tcp!host!styx}}}


inferno-lab 78

Sunday, June 17, 2007





The acme-sac tarballs I've been working on producing are intended to be compact, contain all the source, and be runnable once unpacked at their destination.

The complete copy of its own source means it can grow and adapt to its own environment. It can also create and host the tarballs of itself, thereby replicating and dispersing itself.

This structure makes me think of spores. The running acme-sac instance on my laptop is creating spores which I'm casting out from my laptop across the internet hoping that new copies will unpack in fertile soil and form new living cells that themselves can grow and reproduce.

This analogy is not new. And there are books on biomimicry for subjects other than computing. The question I have is, if the analogy is followed further and more explicitly what will be the results?

Or put another way, should large software systems intentionally use biomimicry as an architectural solution?

Alan Kay has used the cell analogy explicitly when describing object oriented programming, with its use of encapsulation to protect the inside of the object from interference, and message passing between objects as the means of building functionality.

Alan Kay also drew inspiration from the way the internet was being built, whose growth and operation also mimics biology, where each host is a kind of cell communicating with other hosts only through message passing. This approach has scaled very well. Is it because it mimics nature?

Another way to look at this is that if we don't mimic nature we are doomed to fail. Failure in the sense I mean here is that inferno as a software species will not survive longer than the lifetime of the author.

In my analogy of acme-sac and the cell, the cell is not an object, or a single thread, but a single VM with its own file tree.

So far there is no purpose assigned to any individual cell. There is no purpose assigned to the software for the end-user. Each cell is more of a general agent, or universal object, that finds many local purposes, hopefully useful to the end-user so that it survives selection pressure and gets replicated. The only overall purpose for the systems architecture is to survive.

The goals for the tarball are to be runnable on many hosts. To contain source so it can adapt. Be a general agent so it can replicate, rebuild, and host the spores to disperse to new hosts. Maybe these goals don't even need to be stated but are always implicit in the system.

For example, inferno-os was already distributed over the internet, and selection pressure has already caused inferno-os to evolve, such that acme-sac is a local variation.

And there are others, a little known race lives in Australia, I believe.

But maybe by being aware of the context it might help survival.

Selection pressure might make the code smaller, more compact. Might bootstrap the system to higher levels of complexity.

To survive it needs host environments with disk space, cpu and networking, and power, and a symbiotic relationship with people.

Tuesday, June 12, 2007





It's an emulated world. Emulators I've used just within the last 12 months.

  • QEMU: Plan9, Linux
  • BeebEm: BBC Micro
  • RedSquirrel: RiscOs 3.1
  • VisualBoyAdvance: GBA
  • DeSMuMe: Nintendo DS
  • Smalltalk VM: Squeak
  • Lisp VM: Scheme
  • Dis: Inferno
  • JVM: Java
  • Microsoft CLR: .NET

Thursday, May 31, 2007

bazaar development mode




I've created a new group for Acme:SAC development.

The purpose is to develop acme-sac in the open following the bazaar development model: all code changes are submitted as patches through the list; there are frequent releases of tarballs; and releases of both stable and experimental trees.

Hopefully this means developers maintaining their own trees can cherry-pick patches from the list. It doesn't require synchronization through a source control system or membership in any project.

For example, if you maintain your own inferno-os tree and see a fix you want, you can just apply the patch from the email using gnu patch.

All that is required to contribute is to download an acme-sac tarball, diff your code change, and email the list.
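For instance, a contribution might look like this (the file names here are made up for illustration; diff and gnu patch are the ordinary host tools the post refers to):

```shell
# contributor: make a unified diff of a change (hypothetical file names)
printf 'old line\n' > cp.b.orig
printf 'new line\n' > cp.b
diff -u cp.b.orig cp.b > cp.patch || true   # diff exits 1 when the files differ
# tree maintainer: apply the emailed patch with gnu patch
patch -p0 cp.b.orig < cp.patch
```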

Thursday, April 26, 2007

lab 77 - the unexpected markov




This is another unexpected use (again) of the markov program from `The Practice of Programming', section 3.9 [1]. I wrote an implementation of markov in Limbo, and had fun feeding it all kinds of texts (books, articles, interviews, novels ...).

But recently I've also been playing with caerwyn's synth [2], which is included in acme-sac, and thought: why not feed markov music?

Find out the answer for yourselves; I'll just provide some small hints in the accompanying guide file.


[1] `The Practice of Programming', Kernighan and Pike, section 3.9
[2] synth from acme-sac under appl/synth


bachm.mp3 (the original bach file bach.mp3)

Sunday, April 15, 2007

lab 76 - using svn with gcode projects


This is not a typical lab; instead it offers some suggestions for working with svn repos (the ones provided by gcode), like inferno-lab or the rest of the Inferno-related projects at gcode.


To work with svn the easiest way is to install subversion under the host OS and write a wrapper to have the svn command available under Inferno, like:

cat > $home/dis/sh/svn << EOF
load std

wdir=`{hp `{pwd}}
if {~ $emuhost Nt}{
 # the quoted character is a literal carriage return,
 # stripped from svn's output on Nt
 os -d $wdir svn $* | tr -d '
'
}{
 os -d $wdir svn $*
}
EOF
The svn script relies on hp to convert Inferno paths to host paths, so here is hp:
cat > $home/dis/sh/hp << EOF
# convert an Inferno path to a valid host path for os(1)
load std

if {no $*}{
 echo 'usage: hp path  # to get host path'
}

if {~ $emuhost Nt}{
 fn slashpath () { sed 's|\\|\\\\|g'; }
 # put two where there's one
 emuroot=`{echo $emuroot | slashpath | slashpath}
 for p in $* {
  cleanname -d `{pwd} $p | sed -n '
  /^\/n\// {s|/n/(.)/|\1:\\\\|p; q;}'
 }
}{
 # host letters subst
 # hls="{ns | sed -n '/#U.+/ s|^bind -. ''#U(.*)'' (.*)|s\|\2\|\1/\|p|p'}

 for p in $* {
  cleanname -d `{pwd} $p | sed -n '
  /^\/n\// {s|/n/local/|/|p; '^$hls^'; q;}'
 }
}
EOF
After giving them executable permission
% chmod u+x $home/dis/sh/svn
% chmod u+x $home/dis/sh/hp
and binding with bind(1) $home/dis/sh to /dis,
% bind -b $home/dis/sh /dis
this line can be added to your profile to have them available when sh starts. After this, commands under $home/dis/sh can be executed like any other sh(1) command available under /dis. For example, you can now check out inferno-lab by running:
% svn checkout inferno-lab
And the same goes for the rest of the svn commands: status, diff, commit, and so on. Good luck!


svn, the original svn from acme-sac.
hp, the initial hp.

Thursday, April 12, 2007

lab 75 - scope & experiments




Since I did lab 67 I've been trying to improve and fix the T3 port and experiment with it. So this post has a small report on the T3 port status, and some experiments under Inferno; note that they're not dependent on the handheld.


Some of the things I've fixed are:

t3 touchscreen: performed corrections to make it work as it should. This was important since it has a direct impact on using Acme and the rest of the system.

blank the lcd: added code to turn off the lcd while playing music, so the battery lasts longer. To do so I added a blank command to the devapm.c written by Alexander Syschev, and wrote a blight script that manages the lcd backlight, to control this from acme.

Since these changes apply to the t3 port, they can be found under lab 67 of the inferno-lab.

While I haven't been able to fix the following segfaults, I've been able to obtain dumps and open them with gdb. So I've found where the crashes happen, but I still don't understand why they happen, so I'll have to read more.

* sys->print('%g', ...): with floating-point format strings ("%g", "%f"); when emu crashes it says

 "malloc failed at <addr>"
and if I disassemble emu, I find that the <addr> corresponds to dopoolalloc(), and the core dump says that the crash happens at /libmath/dtoa.c:/rv_alloc\(i\)/ .

* emu -c1 /dis/sh.dis: emu starts loading Emuinit /appl/cmd/emuinit.b; it's able to load a few modules on which /dis/sh.dis depends (disdep /dis/sh.dis), but it crashes while loading Filepat. This seems harder, since I'm still learning about the arm architecture, and I know even less about the jit.


Some of them were begun in lab 67, but I've improved them a bit and it's worth dedicating a new post to them.

scope.b: parallel to caerwyn's work on the DSP toolkit, I thought it would be interesting to have a flexible oscilloscope to view signals. So I started with the scope.b of lab 12 and added:

  • a nicer look (grid, and signal info)
  • more flexibility: options and widgets to control the oscilloscope, and in the same direction, resizing of the oscilloscope window
  • the frequency spectrum of the signal (using fft)

And the result has been:

From inferno photos
If you want to play with it, there's a guide file that I wrote while coding; it can give some ideas about its usage.

voip: this is basically using stream(1) to stream the recording of a microphone to the headphones of another computer; in short:
styxlisten tcp!firulillo!styx export '#A';
mount tcp!firulillo!styx /n/remote;
stream /dev/audio /n/remote/audio;
After some attempts I decided to pack it into a script that could be used both to publish '#A' (server) and to mount '#A' (client). The problem is that if you try it, the sound is delayed and there is some echo.

Reading stream(1) I noticed that stream issues writes of bufsize (by default Sys->ATOMICIO: 8192), and to do so it has to wait until it has bufsize bytes; here was the reason for the delay and echo. Playing with bufsize, I set the default to 1024 bytes. This works on a local net; it will probably need adjustment for other situations.

player: I started this as a music player for the handheld, but I've ended up using it on the PC. It's simply a script that lists the music files in the cwd and plays them with a music player available in the host OS: mplayer. A nice thing is that one can export a dir containing a collection of music files with:
 styxlisten -A 'tcp!*!styx' export /n/local/music
mount it from another machine under /n/remote and play the files by running:
 mount -A tcp!$server!styx /n/remote
cd /n/remote && player

kbd2skini.b: this is just an idea in the direction of the DSP toolkit. Currently the DSP toolkit is able to generate music from standard music formats, mainly contained in files (midi, skini).
I would like to play with the input devices available under Inferno and use them as instruments. So I started to search for similar things and found pc2midi (and some useful tutorials).

I've chosen the kbd as a starting point, but it has its limitations, as discussed in the previous reference: there is no pressure indication (the state of each key is either pressed or not pressed), and moreover there's no way to notice that multiple keys are pressed at a time. We'll see what can be done with this, but could it be more than a joke?



Saturday, April 07, 2007

lab 74 - effective inferno




I read the Effective Java book recently. Every language needs a book that describes how to use it effectively. I wish there was a book for Effective Inferno.

Although common techniques may work in Inferno I know for sure there are some uncommon ones that may work better.

I'll try and describe at least one recipe that could be a chapter in that book.

To people who've asked me what distinguishes Inferno, I've answered that it's the concurrent language Limbo, or that it's a portable OS that runs on a VM, or that it uses the Styx protocol to create a distributed system, or that it's about software tools that work together with the Inferno shell.

These are the ingredients; on their own they are not unique to Inferno. Limbo looks very like C, and to a newbie it's not obvious what is special about it. There are lots of little niceties that make Limbo pleasant to use, e.g., array slicing and tuples, but these are details; they do not greatly impact an application architecture.

A lot of the Unix-like commands under /appl/cmd/ are straight ports from Plan 9. This is old school. And the fact that they're implemented in Limbo does not greatly differentiate them from their C counterparts. The special flavor of Limbo does not come out when used this way.

The OS interface also looks much like Unix or Plan 9, except for the VM piece, so what is special about that? The advantages of platform independence come out when doing distributed applications. So there must be techniques that need to be learned to do that effectively.

In Inferno it's a new world where each ingredient might be familiar to you in another context but we can cook things a little differently here.

Extension Points

For any system we need to know how we might extend it and how we combine the pieces that are already there. Inferno provides a number of interfaces at different levels of granularity where the programmer can extend the system. Let's quickly review the most common ones, available in most systems.

1. We can write libraries and extend the systems with new functions; such things as parsing file formats or protocols. We can't live without those libraries. Most of the ones in /appl/lib are of this form.

2. We can create new commands and add to our software tool set. Commands can be developed that do one thing well, and grow the system organically.

3. We can use the shell to combine commands using pipelines or to customize command interfaces or extend the system further with commands implemented in the shell.

These are the contributions from our Unix legacy. And they are great! Plan 9 followed and gave us everything-is-a-file, the 9p protocol, and private namespaces. Inferno, being a direct descendant of Plan 9, inherited all that.

4. We can build filesystems to provide new services. We can also architect a distributed system by basing applications around the Styx protocol.

We don't have to use Limbo to extend inferno in this way, we can use any language, and there are already several implementations of the styx library. The styx protocol is itself an incredibly powerful and flexible way of extending the system. It permits re-use of existing tools that operate on files.

5. We can extend the system as clients to file services. Two big examples from Inferno and Plan 9 are Acme and the Plumber. Because of the loose coupling between client and server we can extend the system while it is running. We can write new Acme plugins while we are working within Acme.

This is a great pattern for developing services that you want extended. Don't just be a client of the system's extension points; create your own.

You may think this is where I'd stop. This would be enough, right? What a great system. Plan 9 is a lovely system for all the above reasons. But we haven't got to what's unique to Inferno yet. And there is something more lovely. Bear with me; this will take a few paragraphs.

6. Limbo supports dynamically loadable modules. We can define an extension point as a module interface that the client is expected to implement. Then we can load the client, one of possibly many implementations, as needed.

This is distinct from a module interface used by libraries where in practice there is really the one and final implementation. [1], [2].

What I mean here is that there are many clients that might do distinctly different things but are able to use a shared interface with other clients.

The simplest example is the Command interface with the shell controlling loading of modules.

Command: module {
 init: fn(ctxt: ref Draw->Context, argv: list of string);
};

This is the extension point. You implement a client that supports this interface and then the shell will call it. Implicit in the interface is the fact that you run as a separate process and you have file descriptors for stdin, stdout, and stderr.
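As a minimal sketch of a client of this extension point (a standard hello-world, not from the original post; the module name is made up):

```limbo
implement Hello;

include "sys.m";
include "draw.m";

sys: Sys;

# the shell's Command extension point: any module with this
# interface can be loaded and run as a command
Hello: module {
	init: fn(ctxt: ref Draw->Context, argv: list of string);
};

init(nil: ref Draw->Context, argv: list of string)
{
	sys = load Sys Sys->PATH;
	sys->print("hello from %s\n", hd argv);
}
```

Compiled to a .dis file and placed under /dis, the shell loads it like any other command.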

Of course, we all know this interface because it's similar to ones used by shells on most operating systems.

Now you say, I've already covered this in point 2. But this is different. Inferno allows us to define new interfaces, that can be specific to our problem.

For example, the Inferno shell defines a more complicated interface for shell loadable modules that extend the shell and use shell syntax for structured arguments.

But here's the kicker. Inferno's shell implements the Lispish pattern of using a standard syntax to represent both data and code. Instead of parens it uses braces, but other than that we have lists of nested lists.

 {f x {g y z } }

7. We can extend the system by re-using shell syntax to structure the input to our own applications.

The shell parses a block and does syntax checking, but doesn't evaluate it. It leaves evaluation up to the command, either builtin or external. So we can implement the shell pattern without using the shell's builtin infrastructure: we define our own evaluator and pass it a shell block. We define, in a sense, our own shells specific to a problem domain.

The other big piece of this is that the module interfaces can include typed channels, and that modules each run in a separate process, so they can communicate with each other, working together and forming the equivalent of pipelines, but with types.

I'll give a few examples to really push this point. This is not a trivial feature and it's easy to overlook.

The prime example in inferno is the fs(1) command.

Fs uses shell syntax but doesn't use shell to evaluate it. Fs is a tree walker that allows arbitrary mixing of components that operate on a tree. It's a remarkable command because it combines sh syntax reuse with the module extension point, for clients that use channels to communicate between processes. It's a lovely thing. The extension point looks like this.

Fsmodule: module {
 types: fn(): string;
 init: fn();
 run: fn(ctxt: ref Draw->Context,
  r: ref Fslib->Report,
  opts: list of Fslib->Option,
  args: list of ref Fslib->Value): ref Fslib->Value;
};

Here's an example of how I use fs to build the acme-sac distribution. I filter out .svn files, .sbl and .dis files below /appl, and object files below /sys, then copy the whole tree using a proto file to define the components to copy.

   fs write /n/d/acme  {filter  
   {and {not {match .svn}} 
   {not {match -ar '(/appl/.*\.(dis|sbl))|(/sys.*\.*(obj|a|pdb))$'}}} 
   {proto /lib/proto/full} } 

It's very Lispish in that we've built a domain language, in this case for tree walking, which we can extend one command at a time, reusing existing commands in possibly novel ways, and using shell syntax to represent the whole expression.

By combining loadable modules, sh syntax, a uniform interface, channels, processes and files, we have a unique programming environment. This is a powerful pattern that should be repeated.

This kind of programming is quite unlike anything else. It establishes a pattern for structuring applications and leverages some of the quality ingredients inside Inferno.

Another example is the sound synthesizer I've been working on. I'm consciously imitating fs here: a simple module extension point with processes communicating on channels, and sh syntax to combine the modules.

Clients implement an interface such as this,

 Source: type array of ref Inst;
 Sample: type chan of (array of real, chan of array of real);
 Control: type chan of (int, array of real);

 IInstrument: module {
  init: fn(ctxt: Instrument->Context);
  synth: fn(s: Instrument->Source,
   c: Instrument->Sample,
   ctl: Instrument->Control);
 };
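For illustration, a client's synth loop might look roughly like this (a hypothetical pass-through instrument; I'm assuming the Sample channel carries a buffer of samples paired with a reply channel):

 synth(nil: Instrument->Source, c: Instrument->Sample, ctl: Instrument->Control)
 {
  for(;;) alt {
  (buf, reply) := <-c =>
   # a trivial "instrument": halve the amplitude of each sample
   for(i := 0; i < len buf; i++)
    buf[i] = buf[i] * 0.5;
   reply <-= buf;
  <-ctl =>
   ;  # interpret control messages (note on/off, parameter changes, ...)
  }
 }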

I defined a module called Expr that operates much like sexprs(2) to handle the shell syntax. Then I can combine the modules in all sorts of ways on the shell command line:

   synth/sequencer {master  
   {fm  {waveloop sinewave} {adsr 0.01 0.21 0.3 0.08} } 
   {delay 0.085 0.4} {delay 0.185 0.2} 
   {delay 0.485 0.1} {delay 0.685 0.08}  
   {proxy {onezero} {lfo 0.18 1.0 0.0} }  } 

We're still not done; we haven't used all the ingredients. Let's throw in Styx.

Let's combine all the above, plus the fact that Dis is completely platform independent, so we can compile once and run anywhere the Inferno emulator runs.

The example that would best illustrate this is VN's grid, but I'm guessing, since I haven't used it. You can also see all these ingredients combined in sh-alphabet's grid. Sh-alphabet is quintessential Inferno.

From these ideas I'm trying to implement a mapreduce framework. I will use sh syntax to form the expression dynamically, use module extensions to write new map and reduce functions, use Styx to create the mapreduce infrastructure, use dis to distribute code and have channels and files for communicating between processes.

The module extension points are as follows,

Mapper: module {
 map: fn(key, value: string, emit: chan of (string, string));
};

Reducer: module {
 reduce: fn(key: string, input: chan of string, emit: chan of string);
};
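As a sketch of what a client might look like (hypothetical; I'm assuming the interfaces above live in a module file such as mapreduce.m), here is a word-count mapper:

 implement Mapper;

 include "sys.m";
  sys: Sys;
 include "mapreduce.m";  # hypothetical module file declaring Mapper

 map(nil: string, value: string, emit: chan of (string, string))
 {
  sys = load Sys Sys->PATH;
  # emit a (word, "1") pair for every word in the value
  (nil, words) := sys->tokenize(value, " \t\n");
  for(; words != nil; words = tl words)
   emit <-= (hd words, "1");
 }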

The mapreduce command is itself a file system that is exported to remote hosts. Clients read and write to it, reading instructions on what to do next and writing back status, so the master can keep track of what's going on. Command usage might be as follows,

mapreduce {reduce {map {reduce {map path mapfn} reducefn} mapfn}  reducefn}

Or using pipeline notation, similar to how sh-alphabet uses it,

mapreduce {map path mapfn | reduce reducefn | map mapfn | reduce reducefn}

Now we're talking. This is Inferno's sweet spot.

This recipe should be pushed HARD in inferno.

I'll stop here, even though I still haven't played the polymorphism card. Sh-alphabet does, but it's a complicated example to swallow.

I recommend programmers start by using fs(1). Then look at how your own programs can be reworked to factor out modules using a module extension point that uses channels, and then use sh syntax for combining those modules.

The next step may then be to distribute those modules by building a Styx service into them.


[1] Actually, there is a slight variation on the library idea: a library can have multiple implementations, a kind of polymorphism. The Imagefile interface is of this form. This is a slightly more sophisticated library where we load a particular implementation depending on the kind of operation. But a user of the system is not really expected to extend the system at these points, even though they might. (Other examples are Filter and Encoding.)

[2] Another variation on the command polymorphism is that we can bind alternative implementations over our namespace. But in practice this seems to be done rarely. Binding /acme/dis/cd.dis over /dis/cd.dis is one example. Maybe this could be exploited further.

Sunday, March 18, 2007

lab 73 - MIDI



I've written a module to read MIDI format files. I needed this because I wanted more input for my software synthesizer. I was getting bored listening to the same old track, and I haven't yet come up with any computer generated music. This seemed like a quick and easy option to get a large amount of music to listen to.

The code reads in the whole MIDI file and stores it in memory, using an ADT for the Header that contains an array of Tracks and each Track has an array of Events.
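The ADTs might look roughly like this (a sketch; the field names are my guesses at what such a module needs, not its actual declarations):

 Event: adt {
  delta: int;           # ticks since the previous event
  status: int;          # MIDI status byte (note on/off, controller, ...)
  data: array of byte;  # event payload
 };

 Track: adt {
  events: array of ref Event;
 };

 Header: adt {
  format: int;          # MIDI file format: 0, 1, or 2
  division: int;        # ticks per quarter note
  tracks: array of ref Track;
 };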

I also wrote a midi2skini command that interleaves the multiple MIDI tracks into a single stream of skini messages for the synthesizer (see earlier labs). It sorts and orders the events, converting tick deltas to real time.

I've been trying this out on some Bach MIDI files. It's been working quite nicely with the organ-like sounds produced by the Inferno synth.

  % echo 1 > /dev/jit
  % midi2skini bwv988-aria.mid | sequencer ...

You need JIT enabled when using the sequencer.



Thursday, March 15, 2007

lab 72 - wikipedia



I've been working on modifying dict to use the Wikipedia database. I mentioned this in lab 70. So here's what I've got so far. It's not beautiful; the wiki syntax parser needs a lot of work.

The general idea is I want to use acme-sac as a Wikipedia browser. But there are other reasons too, such as gaining experience of using inferno to work on some large text databases.

Acme brings some nice things to a database like Wikipedia. Because of the nature of acme you don't have to rely on people making wiki links to find other articles. You can right-select almost any text to search the index. Right-selecting single words often opens a Wikipedia disambiguation page.

If you want to get this working try following the steps below.

You need to use the latest acme-sac copy from svn. It has some fixes to support big files, including a new Acme.exe; otherwise none of this will work.

Download the Wikipedia database. This site will explain about Wikipedia downloads. Go here for the dump files. For the English version look for a file called something like pages-articles.xml.bz2. This file is about 2.1 GB. Download it and extract it.

This was the first snag I hit. I didn't have NTFS on my laptop drive, only FAT32, so I wasn't able to extract it into a single file. I started extracting it to smaller files and looked at creating a virtual big file using a Styx server; but before I got anywhere with that idea I bought an external hard drive, reformatted to NTFS to handle big files, and just went with the single file approach.

Extract the file somewhere and rename it or bind it to /lib/dict/wikipedia. You then need to build the /lib/dict/wpindex file.

Generate the index inside inferno,

 % dict/mkindex -d wp > rawindex

I had to step outside Inferno for the next bit and used the 9pm archive for the Plan 9 sort and awk commands. Reformat and clean the index entries using /appl/cmd/dict/canonind.awk,

 % awk -F' ' -f canonind.awk rawindex > junk

then sort and remove carriage-returns

 % sort -u -t' ' +0f -1 +0 -1 +1n -2 < junk |
    tr -d '\r' > /lib/dict/wpindex

Hopefully, you can now type adict -d wp in acme-sac, and in the new window type some text, right-select it, and a result from Wikipedia will be found.


svn revision 71

Thursday, March 08, 2007

lab 71 - pruning



I've tried to reduce acme-sac source tree to what I consider the core. I'm cutting out dead wood to encourage new growth. Except that what counts as dead wood is highly debatable. I removed files I tended not to use, but my well traveled paths through inferno are not necessarily going to match yours. So while acme-sac still stands alone, it depends on the larger world of inferno-os for diversity and range of applications.

The smaller mass of code is intended to have less inertia. Not only can a programmer more easily understand it all, but he can also make changes system wide and so turn it to new directions. For example, if a new system library were to be imagined that could be applied to the whole limbo code set, the size of the code should not present so much resistance that an individual would not attempt it.

This reduction effort started in lab 58. The source from that became acme-sac. The recent "right-sizing" removed a lot of code. It's now down to a size I can feel comfortable with. How many total lines of code can a single developer manage?

The number of lines of C code in acme-sac is 131,930, compared to over 750,000 in the inferno-os distribution. The number of lines of Limbo code in acme-sac is 282,970, against about 466,872 in inferno-os.

Some of the big changes I made: I removed the native OS tree, since it does not seem relevant to acme-sac at this time; I moved all the C source under /sys and rewrote all the mkfiles; and I removed all the libraries acme-sac does not depend on: libtk, libprefab, libdynld, libfreetype, libkern, libnandfs. I also removed a significant amount from /appl/lib and /module.

For a very ambitious project in code reduction, read this (PDF), from Alan Kay and associates at Viewpoints Research.

Friday, March 02, 2007

software temples



I've been listening to the seminars about long term thinking from the Long Now Foundation. I highly recommend them.

The purpose of these seminars is to encourage long term thinking. They succeed at that. For the last few weeks as I've been listening to them I've been contemplating the future of software and the history of civilization. I can't say that I've ever been interested in history. But after attempting to look forward more than 100 years it only now occurs to me what a good idea it is to look backwards far enough to see the patterns and cycles in human history.

Some of the seminars are inspiring. Danny Hillis talks about his progress in constructing a clock that will run for 10,000 years. At first it sounds insane, but after listening to him I can't help but admire him and what he's doing. The real purpose of such a scheme is to trick yourself into a different kind of thinking. Just as changing your social role forces a change in your behavior (agreeing to give a lecture forces you to learn the subject matter), building a 10,000 year clock forces a discipline of long term planning that generates new ideas which otherwise wouldn't occur.

Inevitably my thinking is grounded in the world of computing. Whatever new lights cross the sky I relate it to what I already know.

When I started programming I thought 5 years was long term. I couldn't imagine an application being used longer than that. Now that I've been working 10 years, I have to maintain software at least that age, some of it even written by me! But I'm still not thinking long enough. Where I work, the software I write must last 30 to 50 years, the typical lifetime of the products it supports. If I don't at least try, then I know future maintainers will feel the pain I feel today: the pain of supporting software from extinct vendors running on ailing platforms with years of accumulated bloat.

What does it take to support software for 100 years, or 1000?

Clay Shirky, in his seminar "Making Digital Durable", says that over a long enough period of time digital preservation always comes down to social issues. To keep a data format long term you need to preserve the application that interprets it. To run the application you need the operating system. To run the OS you need the hardware. The layers of abstraction don't stop; the longer into the future you want to preserve something, the more layers you must deal with by somehow emulating them. They become social issues because the only way to cope is to lower risk: use cheap and flexible components that have wide, popular dispersion and high degeneracy, by which he means multiple degenerate formats that represent the same information.

Would it be possible to create a long term social organization that can preserve information over millennia?

Say you as an independent agent have thought about this problem and you decide you need to coordinate with a large group. Then say many others reach the same decision, and you begin to coordinate using a system that only emerges through your coordination. By subscribing to this system you may have lowered the risk of preserving your valuable information. Only time will tell. The whole coordinating group improves their chances too if the artifacts they are preserving have some positive benefit to their survival.

When you look at history for examples of long term social organizations, only a few really jump out. Religion and monarchy are two standout examples of institutions that have lasted over 1000 years. In some cases, like the English Monarchy and the Church of England, the institutions are combined. More recent social systems I'd consider fit the same pattern are the Free Market, Science, and most recently the Free Software movement.

Given that list of systems for social organization, and given you wanted a message preserved over 1000 years, based on past performance, which would you choose?

All of these systems are an emergent phenomenon of complex adaptive agents. Stephen Lansing describes one in particular in his seminar, "Perfect Order: A Thousand Years in Bali." He describes the Water Temples in Bali, a religion that coordinates a large group of rice farms to optimize their yield. This system has remained remarkably consistent for 1000 years.

Douglas Adams discusses the same religion in his essay, "Is there an Artificial God?". The Balinese system was very efficient, but in the 1970s the IMF tried to get the farmers to modernize their processes using five year plans, technology packets, and genetically modified rice. Yields improved briefly, but then, because the coordination among farmers was gone, the pests crept back into the fields and yields worsened. Eventually the Balinese farmers returned to their system of Water Temples to coordinate the timing of planting and flooding their fields, which had worked so well to eliminate pests.

Douglas Adams says, "It's all very well to say that basing the rice harvest on something as irrational and meaningless as a religion is stupid—they should be able to work it out more logically than that, but they might just as well say to us, 'Your culture and society works on the basis of money and that's a fiction, so why don't you get rid of it and just co-operate with each other'—we know it's not going to work!" [1]

Are the Water Gods, in the Balinese sense, real or artificial? Is it correct even to ask the question that way? I think it might be wrong to judge it as truth or fiction; better to observe it as an emergent phenomenon that increases the long term survival of the group. To put it in evolutionary terms: there is no absolute fitness in nature for large scale, long term social systems. (The Big Here Long Now)

'There is no such thing as the "fittest" kind of organism. We can only talk about how an organism propagates in a given niche, how its life strategies have become adapted to that niche. It is no more or less fit than another kind of organism that has adapted to some other niche.' - Ursula Goodenough, "The sacred depths of nature."

In another seminar, "The View from the End of the World", Sam Harris takes a long term look at religion. He reaches the conclusion that belief in religion is enormously maladaptive and dangerous for human survival. But I think the weight of evidence is on the side of religion. The world religions we have today have survived a very long time, and they have survived because their subscribers have continued to prosper. Their past performance is pretty good, and the prediction that they will suddenly and catastrophically fail in some way doesn't seem reasonable. In the ecology of social systems in the world there is diversity; they compete, adapt, and evolve. The books of religion, the ceremony, the beliefs, no matter how strange, bind the group.

What has this to do with software?

When complex adaptive agents made from software become more common, and they optimize for long term survival, we will begin to see software take on behaviors that bear similarities to human social systems.

If people (complex adaptive agents) care about long term software they may adopt the trappings of religion to see the long term survival of their systems.

My long term bet is that we will see a software religion emerge in the next 100 years.


[1] Aaron Swartz saw a similarity in the terminology used to promote free markets and understand economics.

Wednesday, February 28, 2007

lab 70 - dict



I ported Plan 9's dict to Inferno. At the moment it works with the Project Gutenberg dictionary and Roget's Thesaurus. Of course, since I'm an acme fan, I also ported adict to browse the dictionaries in acme.


I've been looking to collect plain text databases that would fill a small portable 60GB drive, then use acme as a plain text browser. When I've thought about Inferno as a system, I've paid too much attention to the code and not to the data it might work on. Collecting databases is an attempt to broaden my view.

Two databases to start with are Wikipedia (8GB uncompressed, not including images) and the whole of Project Gutenberg (4GB compressed). I've started working on getting a local copy of Wikipedia to display in acme using dict. I got this working for a small subset of Wikipedia, the first 1GB, just to try things out. It works well enough that I'm now working on the complete archive.

Monday, February 26, 2007

lab 69 - menuhit



After removing Tk I still needed some menu functionality. If you look carefully at the picture in my last post you'll see the little green menu!


I ported menuhit from Plan 9 and made it a module in Inferno. I've used it in a few wm commands checked into svn. Because I'm Tk-less I don't have the Tk toolbar, so menuhit, like rio, is the way of controlling window operations, basic stuff like exiting from clock.

Why no Tk? I didn't do this because I hated Tk or its programming interface; I just didn't like the way it looked. I use acme-sac all the time, so I'd essentially stopped using any Tk based commands. Also, I thought there was no way I'd ever give an end user an application that used Tk. So I removed it just to see what would happen next. The most immediate consequence is a lurch towards a rio-like interface. But I don't think that's the end state. I'm also looking at the browser as an end-user interface; that means charon.

A lingering idea is that document based interfaces are ideal for some applications. They're easy to navigate; the look and feel of them really comes down to a problem of typography. They are a classic interface.

By the way, if you've checked out the latest from svn: to launch charon in the acme-sac home directory from the Windows command prompt, or a shortcut,

    acme.exe /dis/wm/wm.dis charon

Thursday, February 22, 2007

lab 68 - stand alone charon



I've definitely got behind in blogging my Inferno work. And it's not that I haven't been doing anything. I'm just not putting in the effort to write stuff up. Salva has been carrying the ball and thanks to him for the last few posts. I'm going to try posting more regularly but smaller things.

A lot of my recent work has been going into acme-sac, and I've checked the code into svn. If my posts describe work I've checked in there, I won't include the files as part of the blog.

Today's post is a pretty picture.


The story is I removed Tk from acme-sac and then worked through the consequences. One of them was to get charon working without Tk, which turned out to be not that hard.

Whereas Acme is an interface for programmers, I think this is a great interface for end users.