lab 53 - granulator
I want to revisit the DSP code I worked on in earlier labs (3, 4, 5, 6, 7, 10, 12, and 13). The STK C++ library it's based upon has been updated since then. The latest versions include an implementation of a granulator, which is something I've wanted to do for a while.
Granulation samples the music at tiny intervals, then plays the grains back as many overlapping voices. With granulation the music can be slowed down dramatically yet still retain its pitch. A good example of this is 9 Beet Stretch, a granulation of Beethoven's 9th symphony played over 24 hours. I often listen to it at work. It blocks out surrounding chit-chat, and it isn't as distracting to me as most music. I can't listen to pop or classical music because I get tired of the repetition, or else the music interrupts my train of thought. With granulated music I can set up a playlist of three hours of barely interesting noise and get into "the zone". (At home I tend to listen to Drone Zone from Soma, which has much the same effect as granulated Beethoven.)
I want to dump the signalfs code I did last year and go back to just implementing the DSP library, translating the STK C++ code to Limbo. This means I need to re-implement the sequencer module to load Instrument modules that are based directly on the DSP library. I don't think signalfs worked out too well: in the context of this application, the whole filesystem structure and the conversion between real values and PCM byte streams didn't seem worthwhile.
In this lab I include the implementation of the granulator. At the moment it's set up to read and play one channel at a 22050 Hz sample rate. It's a start. Here's an example of recording some noise from the microphone, granulating it, and playing it back. granulate takes as arguments the stretch factor and the number of voices, as well as the raw PCM file to granulate. The scope just provides some visual feedback on the noise you're making.
% bind -a '#A' /dev
% echo rate 22050 > /dev/audioctl
% echo chans 1 > /dev/audioctl
% scope < /dev/audio > record1    # kill scope when done
% granulate -s 20 -v 16 record1 > record2
% stream record2 > /dev/audio
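For readers who want the gist of how granular time-stretching works, here's a rough sketch in Python rather than Limbo. This is not the STK code, and the grain scheduling, windowing, and parameter names are my own simplifications, but the -s and -v flags above correspond to stretch and voices:

```python
import math
import random

def granulate(samples, stretch=20, voices=16, rate=22050, grain_ms=50):
    """Time-stretch `samples` by `stretch`, layering `voices` short grains."""
    grain = int(rate * grain_ms / 1000)       # grain length: ~50 ms of audio
    # Hann window so each grain fades in and out without clicks
    win = [0.5 - 0.5 * math.cos(2 * math.pi * i / (grain - 1))
           for i in range(grain)]
    hop = grain // voices                     # output hop: ~`voices` grains overlap
    out_len = len(samples) * stretch
    out = [0.0] * (out_len + grain)
    pos = 0
    while pos < out_len:
        # the read point crawls through the input `stretch` times slower
        # than the write point moves through the output, with some jitter
        # so repeated grains don't sound like a stuck loop
        centre = pos // stretch
        start = max(0, min(len(samples) - grain,
                           centre + random.randint(-grain, grain)))
        for i in range(grain):
            out[pos + i] += samples[start + i] * win[i] / voices
        pos += hop
    return out[:out_len]
```

Feeding it the raw PCM files from the pipeline above would just mean decoding the 16-bit samples to floats first.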
I'll be working on instruments and a new sequencer slowly over the next 40 years.
If there's an overall goal I have for this it's not to build a sequencer or synthesizer like the many already available. I'd like to build something for the computer to generate music. Implementing the STK is my starting point to learn about computer music. I like projects like 9 beet stretch, or Eigenradio. I'd like a continual stream of Inferno music generated from combinations of environmental sounds, radio, samples, keyboard playing. Because Inferno rocks!
that's interesting, and something i'll be interested in trying out (i often learn fiddle tunes from recordings, and the details are often very fleeting - being able to really slow things down will be useful, and i hadn't found a decent one).

one thought about the structure of your dsp modules: doing things on a sample-by-sample basis is going to be slow. what's needed is a way to speed up the inner loops, for instance by passing buffers around.
i really think it should be possible to use processes for this kind of thing - then arbitrary modules can be plugged together, and each one doesn't necessarily have to be written as a state machine. an output module could call its input source to fill a buffer with some samples:

chan of (array of Sample, chan of int)
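The buffer-channel idea maps onto any CSP-style setup. Here's a tiny illustration in Python, with threads and queues standing in for Limbo processes and channels (all names are hypothetical): each message carries a buffer plus a per-buffer reply channel, mirroring the `chan of (array of Sample, chan of int)` type above:

```python
import queue
import threading

def source(out_q, n_bufs, buf_size=256):
    # producer "process": sends (buffer, reply-channel) pairs
    n = 0
    for _ in range(n_bufs):
        buf = [float(n + i) for i in range(buf_size)]
        n += buf_size
        reply = queue.Queue(1)
        out_q.put((buf, reply))
        reply.get()                  # block until the consumer is done with buf
    out_q.put((None, None))          # end of stream

def gain(in_q, results, g=0.5):
    # consumer "process": scales whole buffers, not single samples
    while True:
        buf, reply = in_q.get()
        if buf is None:
            return
        results.append([s * g for s in buf])
        reply.put(1)                 # acknowledge on the reply channel

q = queue.Queue()
results = []
threads = [threading.Thread(target=source, args=(q, 3)),
           threading.Thread(target=gain, args=(q, results))]
for t in threads: t.start()
for t in threads: t.join()
```

The reply channel lets the producer reuse or recycle buffers safely, since it knows when the consumer has finished with each one.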
this has got me back into the whole audio thing, which i'd left on hold a year or so back. i want to be able to access the macos audio subsystem from within inferno. to that end, i've just written a little language to make their arcane API marginally accessible, but that's another story.
PS. it'd be nice to have a <pre> tag here!