Sunday, May 13, 2012

Sonification - sound of sand - 5

First sonification results - I've managed to sonify my first experimental shapes. The results are encouraging: they are not very musical yet, but with some optimization they might get interesting.

Small random test shape - We use the small experimental test shape for our first sonification. It has been used in my previous blog post so we're quite familiar with its properties:
The Python program (see below) first extracts the X and Y values that we get while traversing the outline in a counterclockwise direction. It transforms these values into two sound waves:
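The decoding step can be sketched in plain Python (a minimal standalone version, assuming the usual 8-direction Freeman chain-code convention with 0 = east and codes numbered counterclockwise, which matches the increments used in the program below; the function name is mine):

```python
# Decode an 8-direction Freeman chain code into x(t) and y(t) signals.
# Assumed convention: 0 = east, codes numbered counterclockwise.
DX = [1, 1, 0, -1, -1, -1, 0, 1]   # x step for codes 0..7
DY = [0, 1, 1, 1, 0, -1, -1, -1]   # y step for codes 0..7

def chaincode_to_xy(codes):
    x, y = 0, 0
    xs, ys = [], []
    for c in codes:
        x += DX[c]
        y += DY[c]
        xs.append(x)
        ys.append(y)
    return xs, ys

# a tiny closed outline: east, north, west, south
xs, ys = chaincode_to_xy([0, 2, 4, 6])
```

For a closed outline the x and y traces both end back at their starting value, which is why subtracting the mean (as the program does) centers them nicely around zero.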
Then it concatenates these sound waves (they are extremely short at a sampling rate of 44100 Hz) into a longer sound sample and it modulates the amplitude of this signal using the same sound shape.
This is a nice fractal twist and it feels like a very natural thing to do with the signal. This way the signal is made self-similar on two levels. (I don't think it would be feasible to add more than two levels of self-similarity, the signal would get too long.) 
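The signal-in-signal construction can be sketched in plain Python (a toy three-sample "shape"; the helper name is mine, not Nsound's):

```python
# Two-level self-similar signal: for each sample `level` of the short
# shape signal, append a copy of the whole signal scaled by that level.
# The result has len(shape) ** 2 samples, which is why a third level
# (len(shape) ** 3 samples) would quickly get too long.
def self_similar(shape):
    out = []
    for level in shape:
        out.extend(s * level for s in shape)
    return out

sig = self_similar([1.0, -0.5, 0.25])
```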

Then we put the X and Y signal into the left and right channels of an audio stream. Again this feels like a very natural thing to do with the signal.
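For reference, mapping the two signals onto the left and right channels can also be sketched without Nsound, using only Python's standard library (the function name and the sample values are mine):

```python
import struct
import wave

def write_stereo_wav(path, left, right, rate=44100):
    # Interleave the x signal (left) and y signal (right)
    # as 16-bit stereo PCM frames.
    with wave.open(path, 'wb') as w:
        w.setnchannels(2)
        w.setsampwidth(2)
        w.setframerate(rate)
        frames = b''.join(
            struct.pack('<hh', int(l * 32767), int(r * 32767))
            for l, r in zip(left, right))
        w.writeframes(frames)

# three stereo frames as a smoke test
write_stereo_wav('xy.wav', [0.0, 0.5, -0.5], [0.5, 0.0, -0.5])
```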

Big star - In the same way we generate a sound sample for the star shape of one of the previous experiments. This gives comparable results.
Notice how recognizable the X and Y values are in the signal. In the X-direction the star has two points. In the Y-direction it has only one point.

Discussion of the results - You can listen to the resulting sound files here:

The outline of a shape has been mapped directly to the 44100 Hz sampling rate, which means that a smaller shape generates a higher note and a shorter sample. For the moment I will leave it like that because it's the most natural mapping; later I will explore other possibilities. As a result, the test shape produces a high, mosquito-like drone, and the star shape produces a low, atmospheric, almost inaudible soundscape. Both sounds are quite abstract and unmusical, and this is how it should be for the moment.
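The pitch follows directly from the outline length, since one traversal of the outline is one period of the waveform. A quick sketch with illustrative outline lengths (the actual point counts of the two shapes aren't given in this post):

```python
RATE = 44100.0  # samples per second

def outline_pitch_hz(n_points):
    # One traversal of the outline = one period of the waveform,
    # so the fundamental frequency is the sample rate divided by
    # the number of points on the outline.
    return RATE / n_points

# illustrative outline lengths, not the real shapes from this post
small_shape = outline_pitch_hz(20)   # 2205.0 Hz: a high, mosquito-like drone
big_star = outline_pitch_hz(2000)    # 22.05 Hz: near the lower limit of hearing
```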
This is how the two sound files look in Audacity:
test shape
And here you see how the signal-in-signal looks in Audacity if you zoom into the details.

Now we've used the amplitude domain to map shapes into sound. I'll also try to use the frequency domain for this mapping.

Python program - I'm not sure this will run correctly if you copy it directly into your Python environment. I'm using Python-XY, which has all the necessary modules pre-installed, and Blogger may destroy some of the whitespace, so this could explain some unexpected bugs.
If things can be done in a more Pythonesque way, I'm open to comments.

from Nsound import *
debug = True

# read a chaincode .chc file that has been generated by SHAPE
infile = open("C:\\Users\\user\\Documents\\shape\\shape\\04 star.chc")
instr = infile.read()
infile.close()

if debug:
    print instr

# parse the input file - split it into words
inwords = instr.split(' ')
if debug:
    print inwords

# delete anything except the chain code
i = 0
for word in inwords:
    if word.find('0E+0') > -1:
        i = i + 1

inwords = inwords[i+2:len(inwords)-1]
if debug:
    print inwords

# fill the x and y buffer with the chaincode values
b_x_chaincode = Buffer()
b_y_chaincode = Buffer()

x = 0
y = 0
for word in inwords:
    c = int(word)

    # convert a chaincode into a plot of the x value against time
    if c == 1 or c == 0 or c == 7:
        x = x + 1
    elif c == 2 or c == 6:
        x = x
    else:
        x = x - 1
    b_x_chaincode << x

    # convert a chaincode into a plot of the y value against time
    if c == 1 or c == 2 or c == 3:
        y = y + 1
    elif c == 4 or c == 0:
        y = y
    else:
        y = y - 1
    b_y_chaincode << y

b_x_chaincode = b_x_chaincode - b_x_chaincode.getMean()
b_y_chaincode = b_y_chaincode - b_y_chaincode.getMean()

if debug:
    b_x_chaincode.plot("x plot from .chc file")
    b_y_chaincode.plot("y plot from .chc file")

# generate an amplitude modulated x and y signal
b_x_long = Buffer()
b_y_long = Buffer()

for level in b_x_chaincode:
    b_x_long << b_x_chaincode * level
for level in b_y_chaincode:   
    b_y_long << b_y_chaincode * level

# normalize to prevent clipping of the output signal
b_x_long.normalize()
b_y_long.normalize()

if debug:
    b_x_long.plot("amplitude modulated x signal")
    b_y_long.plot("amplitude modulated y signal")

# make sure that the sound sample is long enough to hear anything
while len(b_x_long) < 200000:
    b_x_long << b_x_long
    b_y_long << b_y_long

# code the x and y signal into the left and right channel of an audio stream
# write the audio stream into a .wav file
a = AudioStream(44100.0, 2)
a[0] = b_x_long
a[1] = b_y_long
a.writeWavefile("C:\\Users\\user\\Documents\\shape\\shape\\04 star xy_chaincode.wav")
