Posts

The Cartesian Theater of AI

The Cartesian Theater is an intuition pump proposed by the philosopher Daniel Dennett to describe a false theory of consciousness. Inside the mind there is a stage, watched by a homunculus who observes what passes on the stage, hears the sounds, and controls the rest of the mind by responding to what this smaller version of the self sees. But what makes the smaller self conscious? The theory leads to an infinite regress. Our understanding of mind is that there is no such theater: there is no single place in the mind where consciousness is located. I want to offer another thought experiment based on the Cartesian Theater. Imagine a theater containing a screen with a camera pointing at it, and a speaker with a microphone directed toward it. However, instead of an infinite loop arising from a direct connection between camera and screen, insert an AI into the loop. The imag…

Training AI with culture

How is culture relevant to the training of artificial intelligence? One guess about the origin of human-level intelligence is that it arose together with complex society and culture, that is, with the origin and evolution of memes (Dawkins). Culture is composed of memes, which are replicated by a two-step process: a meme is (1) recreated within a mind and then (2) enacted as some observable trait; creativity is necessary to reverse engineer the latent content of a meme, its meaning, in order to accurately reproduce the observable trait (Deutsch). A human using creativity serves memes as a kind of "meme machine", making high-fidelity copies of cultural artifacts and traits such as manners, rituals, totems, and tools (Blackmore). The complexity of cultural artifacts has grown exponentially, from stone tools to space telescopes. Culture is a deep library of memes carrying objective knowledge. That knowledge contains truth about the world while also being riddled with err…

lab 111 - wavloop

The WAV file format contains audio sample data and, optionally, metadata describing the offsets of sample loops and cue points. The loop offsets are used by sampler software to generate a continuous sound, and the cue points mark where in the sample data the sound fades away after the note has been released. A WAV file's "smpl" chunk identifies the start and end offsets of the loop in the sound data. Using wavplay.b as a starting point I tried to loop a sampled sound. My sample data comes from the virtual organ software GrandOrgue and the sample sets created for it; in this case I'm using the Burea Funeral Chapel sample set. My first test was simply to treat the sample as-is and loop the sound using the given offsets. This did not give good results: there was a noticeable noise where the data from the end of the sample joined the beginning. I realized, nearing the end of writing this post, that the mistake I made was treating the offsets as counts of…
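As a rough companion to the above, here is a small C sketch (separate from the Limbo code in this lab) that scans a WAV file for the "smpl" chunk and prints the first loop's start and end fields, following the commonly documented chunk layout. Whether those fields count bytes or sample frames is exactly the interpretation question this post runs into, so treat the units as an assumption.

/* smplloop.c - sketch: find the first loop in a WAV "smpl" chunk.
 * Layout follows the common smpl chunk description: 36 bytes of sampler
 * fields, then per-loop records of cue id, type, start, end, fraction,
 * play count. Units of start/end (bytes vs sample frames) vary by tool. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

static uint32_t rdle32(const unsigned char *p) {
    return p[0] | p[1] << 8 | (uint32_t)p[2] << 16 | (uint32_t)p[3] << 24;
}

int main(int argc, char **argv) {
    unsigned char hdr[8], buf[64];
    FILE *f;

    if (argc != 2 || (f = fopen(argv[1], "rb")) == NULL) {
        fprintf(stderr, "usage: smplloop file.wav\n");
        return 1;
    }
    fseek(f, 12, SEEK_SET);                    /* skip RIFF size WAVE */
    while (fread(hdr, 1, 8, f) == 8) {
        uint32_t size = rdle32(hdr + 4);
        if (memcmp(hdr, "smpl", 4) == 0 && size >= 60) {
            if (fread(buf, 1, 60, f) != 60)
                break;
            uint32_t start = rdle32(buf + 36 + 8);   /* first loop start */
            uint32_t end   = rdle32(buf + 36 + 12);  /* first loop end   */
            printf("loop: %u .. %u\n", start, end);
            fclose(f);
            return 0;
        }
        fseek(f, size + (size & 1), SEEK_CUR);  /* chunks are word aligned */
    }
    fclose(f);
    fprintf(stderr, "no smpl chunk\n");
    return 1;
}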

lab 110 - inferno archive edition

I've been occupied recently with archiving my digital media. I've been copying home videos from DV tapes to hard disk, ripping audio CDs to WAV files, gathering photo collections, and trying to copy documents from Iomega disks, floppies, and my dusty old Acorn RiscPC. The plan is to have a copy of this data to give to each of my children. My Dad recently scanned and sent me all his photographs of me and my siblings growing up; he also included pictures of himself and my Mother when they met in Africa. With today's technology each generation can build a digital library of family history to hand on to the next generation. In the past a family album may have been passed on to only one person. The accumulation of digital data still presents problems. It takes discipline to store files in open formats that are not locked into devices or proprietary software. With digital preservation in mind I've tried to use file formats recommended for long-term archiving: WAV files for audio, D…

lab 109 - wm/view true color

There has been a three and a half year gap in my posts to this blog. In that time I hadn't done any Limbo programming. I've used Acme as my editor every day, but I was drifting towards using Notepad++ more often. In the past couple of months I've had the time to contemplate doing some hacking projects, and I wanted to explore what I could do with Inferno for multimedia file types. This lab was the first thing I tackled on coming back to Inferno; I had to open up the Limbo paper to remember even some basic syntax. It bothered me that wm/view only displayed images using the Inferno 256-color map. Charon didn't have this limitation, and I thought it had something to do with their respective image libraries, since they don't use the same code. I extracted Charon's img.b code into another view tool, only to realize once I'd finished that the difference was not in the handling of JPEGs or PNGs but in the remap of the raw image to an Inferno image after the image was load…
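That remap step is, in essence, nearest-color quantization against a fixed 256-entry map. Below is a minimal C sketch of the idea only; it is not the wm/view or Charon code, the palette is left to the caller, and Inferno's actual color map has a specific layout this does not reproduce.

/* remap.c - sketch of palette remapping: each 24-bit pixel is replaced
 * by the index of the nearest entry in a 256-colour map. The palette is
 * supplied by the caller; Inferno's own colour map is fixed. */
#include <stdlib.h>

typedef struct { unsigned char r, g, b; } Rgb;

/* nearest palette entry by squared distance in RGB space */
static unsigned char nearest(Rgb c, const Rgb *pal, int npal) {
    int best = 0;
    long bestd = 3L * 255 * 255 + 1;
    for (int i = 0; i < npal; i++) {
        long dr = c.r - pal[i].r, dg = c.g - pal[i].g, db = c.b - pal[i].b;
        long d = dr * dr + dg * dg + db * db;
        if (d < bestd) { bestd = d; best = i; }
    }
    return (unsigned char)best;
}

/* remap a true-colour image to indices into pal; caller frees the result */
unsigned char *remap(const Rgb *img, int w, int h, const Rgb *pal, int npal) {
    unsigned char *out = malloc((size_t)w * h);
    if (out == NULL)
        return NULL;
    for (long i = 0; i < (long)w * h; i++)
        out[i] = nearest(img[i], pal, npal);
    return out;
}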

lab 108 - wavplay

wavplay plays a WAV file. It is merely a combination of the wav2iaf and auplay commands already in Inferno. I have no audio in IAF format, but I am putting together hundreds of GBs of WAV files as I rip my CD collection.

% bind '#A' /dev
% wavplay track.wav

FILES
wavplay.b

lab 107 - midiplay

NAME
lab 107 - midiplay

NOTES
Midiplay plays back MIDI files. It uses the synthesizer I described in lab 62 and the MIDI module from lab 73. The command takes only one argument, the path to the MIDI file. I've included one in the lab to demonstrate. Bind the audio device before using it.

% bind -a '#A' /dev
% midiplay invent15.mid

The synthesizer has 16-note polyphony. It uses three oscillators: one at the pitch, one at half the pitch, and one at double the pitch. There is also a filter, two delays, and a vibrato. The sample rate is 8000 Hz with one mono channel (MIDI channel events are ignored). It performs well enough to work on my laptop without the JIT turned on. All the synthesizer parameters can only be tweaked from inside the code at the moment.

FILES
lab - 107
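As an aside, here is a minimal C sketch of the three-oscillator voice described in the notes above: three sines at the pitch, half the pitch, and double the pitch, mixed and rendered at 8000 Hz. It is not the lab 62 synthesizer; the filter, delays, vibrato, and polyphony are all omitted.

/* osc3.c - sketch of a three-oscillator voice: sines at the pitch,
 * half pitch, and double pitch, mixed equally at 8000 Hz mono. */
#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define RATE 8000

/* render n mono samples of a note at freq Hz into buf, values in -1..1 */
void voice(float freq, float *buf, int n) {
    for (int i = 0; i < n; i++) {
        double t = (double)i / RATE;
        buf[i] = (float)((sin(2 * M_PI * freq * t)          /* pitch        */
                        + sin(2 * M_PI * (freq / 2) * t)    /* half pitch   */
                        + sin(2 * M_PI * (freq * 2) * t))   /* double pitch */
                        / 3.0);
    }
}

int main(void) {
    float buf[RATE];                 /* one second of A440 */
    voice(440.0f, buf, RATE);
    /* emit 8-bit unsigned samples on stdout, e.g. to redirect to a raw player */
    for (int i = 0; i < RATE; i++)
        putchar((unsigned char)(127.5f + 127.0f * buf[i]));
    return 0;
}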

lab 106 - UNIX RUDP Support

NAME
lab 106 - UNIX RUDP Support

NOTES
A simple port of the Inferno native RUDP support to a standalone UNIX environment (with a client and server example). This particular example is single threaded and specifically intended for use with synchronous RPC mechanisms (in this instance a synchronous 9P client and server). It could probably be greatly improved in several dimensions, but it should provide a minimal implementation that more complicated incarnations can be built on. Enjoy.

FILES
106/Makefile
106/client.c
106/rudp.c
106/rudp.h
106/server.c
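To give a flavor of what a single-threaded reliable-UDP layer has to do, here is a hypothetical C sketch of a stop-and-wait send with retransmission over a connected UDP socket. It does not reproduce Inferno's RUDP wire format or the code in 106/rudp.c; the header, names, and timings here are invented for illustration.

/* rel_send.c - illustration only of the stop-and-wait retransmission idea
 * behind a single-threaded reliable UDP layer. The peer is assumed to echo
 * the header back as an acknowledgement; this is not Inferno's RUDP. */
#include <string.h>
#include <sys/socket.h>
#include <sys/select.h>
#include <sys/time.h>

struct hdr { unsigned int seq; };   /* hypothetical one-field header */

/* send buf reliably on a connected UDP socket: retransmit until an ack
 * carrying the same sequence number comes back, or retries run out */
int rel_send(int fd, unsigned int seq, const void *buf, size_t len) {
    char pkt[1500], ack[sizeof(struct hdr)];
    struct hdr h = { seq };
    struct timeval tv;
    fd_set rfds;

    if (len + sizeof h > sizeof pkt)
        return -1;
    memcpy(pkt, &h, sizeof h);
    memcpy(pkt + sizeof h, buf, len);

    for (int tries = 0; tries < 5; tries++) {
        if (send(fd, pkt, sizeof h + len, 0) < 0)
            return -1;
        FD_ZERO(&rfds);
        FD_SET(fd, &rfds);
        tv.tv_sec = 1;              /* retransmit after one second */
        tv.tv_usec = 0;
        if (select(fd + 1, &rfds, NULL, NULL, &tv) > 0 &&
            recv(fd, ack, sizeof ack, 0) == (int)sizeof ack &&
            memcmp(ack, &h, sizeof h) == 0)
            return 0;               /* acknowledged */
    }
    return -1;                      /* gave up */
}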

lab 105 - automount

NAME
lab 105 - automount

NOTES
A small modification to mntgen yields an automounter which automatically mounts a server based on the path name. This is a fairly crude proof of concept with hardwired port numbers and a hardwired top-level mount path, but it would be fairly easy to make it a bit more robust. Essentially, I just added a mount command to the code which dynamically adds the directory node to the mntgen tree on reference. Then I made the file system handling run in its own thread so that the mount could reference the synthetic file system without locking up the original single-threaded synthetic file server. One problem is that unknown servers take some time to respond with "file not found", which is less than desirable. Fixing this and other annoyances is left as an exercise for the reader.

EXAMPLE
% ls /amnt
% mount {automnt} /amnt
% ls /amnt
% ls /amnt/localhost/usr/inferno
/amnt/localhost/usr/inferno/9cpu
/amnt/localhost/usr/inferno/README
/amnt/localhost/usr/inferno/charon
/amnt/localhos…