Monday, May 28, 2012

Sonification - sound of sand - 10

Curvature - I have used two different algorithms to determine the curvature of a shape.

Algorithm 1 - I cannot find the original article where I found the idea (I'm very sorry) but the principle is very simple: for each point on the contour, compare the arc length of a small window along the contour with the straight-line (chord) distance between the window's endpoints. On a straight stretch the ratio is close to 1; in a corner the chord is much shorter than the arc, so the ratio grows.
And if we apply the algorithm to our test shape we get the following curvature estimate. This is not bad at all:
It is easy to see that the curvature is high in the corners and low along the straight edges, and that the sharper corners get higher values than the blunt ones. But the algorithm is not very precise.
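The sketch below is not the original article's code, just a minimal stand-alone reconstruction of the idea; the tiny contour and the window half-width k are invented for illustration. The same calculation is done in the full program at the end of this post.

import math

def curvature_arc_chord(xs, ys, k=2):
    # estimate the curvature at every point of a closed pixel contour as the
    # ratio of the arc length of a 2k-step window to the chord between its endpoints
    n = len(xs)
    curv = []
    for i in range(n):
        ia = (i - k) % n                    # window start, wrapping around the contour
        ib = (i + k) % n                    # window end
        chord = math.hypot(xs[ia] - xs[ib], ys[ia] - ys[ib])
        curv.append(2.0 * k / max(chord, 1e-9))
    return curv

# tiny hypothetical test contour: the outline of a 5x5 pixel square
xs = [0, 1, 2, 3, 4, 4, 4, 4, 4, 3, 2, 1, 0, 0, 0, 0]
ys = [0, 0, 0, 0, 0, 1, 2, 3, 4, 4, 4, 4, 4, 3, 2, 1]
print [round(c, 2) for c in curvature_arc_chord(xs, ys)]
# points in the middle of an edge give values close to 1.0, the four corners give about 1.41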

Algorithm 2 - A better algorithm is given in the following article: Estimation of discrete curvature based on chain-code pairing and digital straightness (Shyamosree Pal, Partha Bhowmick, 2009). The algorithm applied to our test shape looks like this. It is much more precise and selective:
Sonification in the frequency domain - In the following sonifications you hear curvature algorithm 1 in the left channel and curvature algorithm 2 in the right channel. They are correlated but have different characteristics.
In the sonification we mapped the curvature values to the frequency domain. Below you see the spectrum of the star-shape sonification. Top channel is algorithm 1, bottom is algorithm 2. It is easy to see the points of the 6-pointed star:
And our set of test shapes sounds like this. Watch your ears and speakers - turn the volume down low!

mp3 - 01 circle - Algorithm 1 generates a lot of artifacts. Even a one-pixel difference in arc length is audible; that is why you hear the circle in the left channel at all. Algorithm 2 is much more precise and you hear hardly anything. In theory a circle should have a constant curvature, so both tones should be constant, but the pixellation disturbs this ideal. The result is more interesting than one would expect.

mp3 - 02 triangle - You can clearly hear the three points of the triangle. Algorithm 1 has some problems with diagonal lines. It is very sensitive to pixellation.

mp3 - 03 square - As is to be expected this is quite boring. Almost nothing happens. Only the four vertices cause some change in the sound.

mp3 - 04 star - Quite interesting to listen to. You can clearly hear the 12 vertices of the 6-pointed star.

mp3 - 05 horizontal rectangle - Is almost identical to the square.

mp3 - 06 random shape - Is approaching experimental musicality. Could be useful.


from Nsound import *
import math
debug1 = False
debug2 = False
debug3 = False
def convert_chaincode_to_x(c,x):
    if c == 1 or c == 0 or c == 7: x = x + 1; return (x)
    elif ( c == 2 or c == 6): x = x; return (x)
    else: x = x - 1; return (x)
def convert_chaincode_to_y(c,y):
    if c == 1 or c == 2 or c == 3: y = y + 1; return(y)
    elif c == 4 or c == 0: y = y; return(y)
    else: y = y - 1; return(y)
# read a chaincode .chc file that has been generated by SHAPE
def read_chc_file_to_xy_lists(filename):
    infile = open(filename)
    instr = infile.read()
    infile.close()
    if debug1: print instr
    # parse the input file - split it into words
    inwords = instr.split(' ')
    if debug1: print inwords
    # delete anything except the chain code
    i = 0
    for str in inwords:
        if str.find('0E+0') > -1 :
            break
        i = i + 1
    inwords = inwords[i+2:len(inwords)-1]
    if debug1: print inwords
    # fill the x and y lists with the chaincode values
    b_x_chaincode = list()
    b_y_chaincode = list()
    x = 0; y = 0
    for str in inwords:
        c = int(str)
        x = convert_chaincode_to_x(c,x)
        b_x_chaincode.append(x)
        y = convert_chaincode_to_y(c,y)
        b_y_chaincode.append(y)
    return(b_x_chaincode, b_y_chaincode)
# read a chaincode .chc file that has been generated by SHAPE
def read_chc_file_to_chaincode_list(filename):
    infile = open(filename)
    instr = infile.read()
    infile.close()
    if debug1: print instr
    # parse the input file - split it into words
    inwords = instr.split(' ')
    if debug1: print inwords
    # delete anything except the chain code
    i = 0
    for str in inwords:
        if str.find('0E+0') > -1 :
            break
        i = i + 1
    inwords = inwords[i+2:len(inwords)-1]
    if debug1: print inwords
    # fill the chaincode list with the chaincode values
    l_chaincode = list()
    for str in inwords:
        c = int(str)
        l_chaincode.append(c)
    return(l_chaincode)
def convert_xy_frequency(xy, minxy, maxxy, minf, maxf):
    r = float(maxf - minf)/float(maxxy - minxy)
    f = (xy - minxy)*r + minf
    return (f)
def sine_duration_frequency(duration, frequency):
    g = Generator(44100.0)
    length = math.ceil(float(duration) * float(frequency))/float(frequency)
    return g.drawSine(length, frequency)
# ==============================
directory = "C:\\Users\\user\\Documents\\shape\\shape\\"
filename = "06 random shape"
extension = ".chc"
# read file into xy lists
list_x, list_y = read_chc_file_to_xy_lists(
    directory + filename + extension)
# calculate curvature by arc length
list_c1 = list()
# window half-width: a small fraction of the square root of the contour length
k = max(3,int(0.1*math.sqrt(len(list_x))))
for i in range(len(list_x)):
    ia = (i-k)%len(list_x)
    ib = (i+k+1)%len(list_x)
    dx = list_x[ia]-list_x[ib]
    dy = list_y[ia]-list_y[ib]
    dxy2 = dx*dx + dy*dy
    dxy = math.sqrt(dxy2)
    curv = (2*float(k) / dxy)   # arc length of the window (~2k steps) divided by its chord
    if debug2:
        print list_x[i], list_y[i], round(curv,2)
    list_c1.append(curv)
# read file into chaincode list
list_c = read_chc_file_to_chaincode_list(
    directory + filename + extension)
# calculate curvature by chaincode pairing and digital straightness
list_c2 = list()
k = max(4,int(0.3*math.sqrt(len(list_x))))
for i in range(len(list_c)):
    di = float(0)
    # pair chain codes symmetrically around point i; the (circular) difference
    # between paired codes is ~0 on digitally straight runs and larger in corners
    for j in range (1,k):
        ia = (i+j)%len(list_c)
        ib = (i-j+1)%len(list_c)
        dj = abs(list_c[ia] - list_c[ib])
        dj = min(dj, 8-dj)       
        iap1 = (i+j+1)%len(list_c)
        ibp1 = (i-j+1)%len(list_c)
        djp1 = abs(list_c[iap1] - list_c[ibp1])
        djp1 = min(djp1, 8-djp1)       
        iam1 = (i+j)%len(list_c)
        ibm1 = (i-j)%len(list_c)
        djm1 = abs(list_c[iam1] - list_c[ibm1])
        djm1 = min(djm1, 8-djm1)
        di += min(min(dj,djm1),djp1)
    di = float(di)/k
    if debug3:
        print list_x[i], list_y[i], round(di,2)
    list_c2.append(di)
soundpixel_length = 0.02
min_f = 120.0
max_f = 3000.0
# generate a frequency modulated x signal
b_x_long = Buffer()
min_x = min(list_c1)
max_x = max(list_c1)
for x in list_c1:
    frequency = convert_xy_frequency(x, min_x, max_x, min_f, max_f)
    b_x_long << sine_duration_frequency(soundpixel_length, frequency)
b_x_long.normalize()
# generate a frequency modulated y signal
b_y_long = Buffer()
min_y = min(list_c2)
max_y = max(list_c2)
for y in list_c2:
    frequency = convert_xy_frequency(y, min_y, max_y, min_f, max_f)
    b_y_long << sine_duration_frequency(soundpixel_length, frequency)
b_y_long.normalize()
# code the x and y signal into the left and right channel of an audio stream
# write the audio stream into a .wav file
a = AudioStream(44100.0, 2)
a[0] = b_x_long
a[1] = b_y_long
a.writeWavefile(directory + filename + "_freq_curv" + ".wav")


Sunday, May 27, 2012

Three exciting lectures

Normally I find it cheap and easy to just post a few links without further comment. But these three lectures fit my current interests and emotions perfectly, so I hope they'll inspire you as much as they inspired me.

A romantic, melancholic, homesick artwork in cyberspace
Living and evolving algorithmic entities that shape our world
A historic view of information overload

Saturday, May 26, 2012

The flaming sun

Elements of the Dutch landscape - 4

Many years ago I became aware of the angry flaming sun while walking in Maastricht. This one was not so angry, more pensive and melancholic. Last year I saw that the sun was still there. It's even on Google Street View. Notice how one of the sun faces has been blurred (modern technology meets ancient symbol).
Maastricht
Once sensitized, I now see the symbol everywhere. On each of our walks I see it at least once. Why is such an ancient symbol so popular in the modern suburbs? Is it an archetype that cannot be suppressed?
I looked at a few websites for the symbolism and it's exactly as one would expect:
  • The sun stands for: truth, light, fertility, vitality, passion, healing, (male) vigor, aggressiveness, power, force and leadership, dignity, courage, creativity, knowledge, selfhood, life-source, rebirth, reincarnation, immortality.
  • The sun face protects from evil powers and influences.
  • The combination of the sun and the moon stands for the sexual and spiritual union of a male and a female.
Still I wonder if the people who hang these on their garden sheds and near their doorbells realize this consciously. My private pet theory is:
  • Our modern culture has banished all elements it considers irrational or primitive, especially: magic, superstition, religion and symbolism.
  • Our modern architecture has banished all ornamental elements and details. It is functional and bland.
  • But somehow we cannot live without irrationality and ornamentation. So the banished elements crop up in unexpected places.
Modern garden-variety examples (literally):
Zoetermeer
Hekendorp
Harmelen
Hoek van Holland
Hoek van Holland
And a few examples from 1600 - 1700:
Utrecht
Zwolle

Monday, May 21, 2012

Sonification - sound of sand - 9

Spectral components - In the samples below you can listen to the spectral components of the following shapes:
01 circle - mp3 - the circle has (almost) no harmonics
02 triangle - mp3 - the triangle has odd and even harmonics - like the sawtooth
03 square - mp3 - the square has only odd harmonics - like the square wave
04 star - mp3 - the star has more harmonics than the square
05 horizontal rectangle - mp3 - the rectangle has harmonics very similar to the square's
06 random shape - mp3 - the random shape has the most harmonics

Each sample consists of two parts: (1) the separate spectral components, (2) the spectral components added together. Surprisingly enough these sounds are almost musical.
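Just to make the construction of these samples concrete, here is a rough sketch of how such a two-part demo can be built with Nsound. The harmonic amplitudes below are invented placeholders (square-like, only odd harmonics); the real ones come from the .nef spectra calculated by CHC2NEF.exe.

from Nsound import *
import math

sr = 44100.0
f0 = 220.0                 # base frequency - an arbitrary choice
note = 1.0                 # seconds per fragment
# placeholder relative amplitudes of the first harmonics
amplitudes = [1.0, 0.0, 0.33, 0.0, 0.11, 0.0, 0.05]

g = Generator(sr)
out = Buffer()

# part 1: each spectral component on its own
for n, a in enumerate(amplitudes):
    out << g.drawSine(note, (n + 1) * f0) * a

# part 2: all components added together (done sample by sample to keep the sketch simple)
nsamples = int(sr * note)
for i in range(nsamples):
    t = i / sr
    s = 0.0
    for n, a in enumerate(amplitudes):
        s += a * math.sin(2.0 * math.pi * (n + 1) * f0 * t)
    out << s

out.normalize()
audio = AudioStream(sr, 2)
audio[0] = out
audio[1] = out
audio.writeWavefile("spectral_components_demo.wav")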

We could start sonifying sand grains right away but I want to explore more possibilities first.

Thursday, May 17, 2012

Sonification - sound of sand - 8

Amplitude and spectrum combined - Now that the resampling is programmed correctly, the X- and Y-components of the shape are present both in the spectrum and (on a micro level) in the amplitude of the signal. This is conceptually very satisfying and the sound has become much more interesting:

I'll start sonifying real sand grains only after I've explored all the theoretical possibilities. Have patience with me; I still have these possibilities to explore: (1) spectral components, (2) curvature and (3) mapping into "real" space using time differences between the ears of the listener and his distance from the sound source. And (4) I could use the filters of Nsound and try to map 2-D shapes into filtered white noise. Next time I'll do: (1) spectral components.

But I have sonified a few more of my test shapes:


Warning and disclaimer - These sounds could harm your sound system and could startle your pets. Turn the volume down before playing them.

I will stop exploring the X- and Y-shape components for the moment. Next time I'll try to sonify the 2-D spectral components of the shape. Just a few pictures:
 The spectrum of the star shape signal: X = top, Y = bottom.
The waveform of the star shape signal: X = top, Y = bottom.



from Nsound import *
import math


debug1 = False
debug2 = False
debug3 = True


def convert_chaincode_to_x(c,x):
    if c == 1 or c == 0 or c == 7: x = x + 1; return (x)
    elif ( c == 2 or c == 6): x = x; return (x)
    else: x = x - 1; return (x)


def convert_chaincode_to_y(c,y):
    if c == 1 or c == 2 or c == 3: y = y + 1; return(y)
    elif c == 4 or c == 0: y = y; return(y)
    else: y = y - 1; return(y)


# read a chaincode .chc file that has been generated by SHAPE
def read_chc_file_to_xy_buffers(filename):
    infile = open(filename)
    instr = infile.read()
    infile.close()
    if debug1: print instr
    # parse the input file - split it into words
    inwords = instr.split(' ')
    if debug1: print inwords
    # delete anything except the chain code
    i = 0
    for str in inwords:
        if str.find('0E+0') > -1 :
            break
        i = i + 1
    inwords = inwords[i+2:len(inwords)-1]
    if debug1: print inwords
    # fill the x and y buffers with the chaincode values
    b_x_chaincode = Buffer()
    b_y_chaincode = Buffer()
    x = 0; y = 0
    for str in inwords:
        c = int(str)
        x = convert_chaincode_to_x(c,x)
        b_x_chaincode << x
        y = convert_chaincode_to_y(c,y)
        b_y_chaincode << y
    b_x_chaincode = b_x_chaincode - b_x_chaincode.getMean()
    b_y_chaincode = b_y_chaincode - b_y_chaincode.getMean()
    if debug2:
        b_x_chaincode.plot("x value from chaincode")
        Plotter.show()
        b_y_chaincode.plot("y value from chaincode")
        Plotter.show()
    return(b_x_chaincode, b_y_chaincode)


def convert_xy_frequency(xy, minxy, maxxy, minf, maxf):
    r = float(maxf - minf)/float(maxxy - minxy)
    f = (xy - minxy)*r + minf
    return (f)


def resample_list_frequency_duration(shape_list, list_freq, samp_freq, duration):
    # repeat shape_list at list_freq cycles per second, sampled at samp_freq,
    # for (approximately) duration seconds
    res_buf = Buffer()
    real_duration = int(math.ceil(duration * list_freq)) / list_freq
    res_len = int(math.ceil(real_duration * samp_freq))
    t_samp_freq = 1.0 / samp_freq
    t_list_freq = 1.0 / (list_freq * len(shape_list))
    for i in range(res_len):
        res_buf << shape_list[int(round(i * t_samp_freq / t_list_freq))%len(shape_list)]
    return(res_buf)


# ==============================

# read file into buffers
b_x_chaincode, b_y_chaincode = read_chc_file_to_xy_buffers("C:\\Users\\user\\Documents\\shape\\shape\\04 star.chc")


# copy buffer to list so we can point to it by index
list_x = b_x_chaincode.toList()
list_y = b_y_chaincode.toList()


# generate a frequency modulated x and y signal
b_x_long = Buffer()
b_y_long = Buffer()

min_f = 40.0
max_f = 2000.0
sampling_rate = 44100.0
sound_pixel_duration = 0.01

min_x = b_x_chaincode.getMin()
max_x = b_x_chaincode.getMax()


if debug3: i=0

for x in b_x_chaincode:
    frequency = convert_xy_frequency(x, min_x, max_x, min_f, max_f)
    b_x_long << resample_list_frequency_duration(list_x, frequency, sampling_rate, sound_pixel_duration)


    if debug3:
        i = i+1
        p = int(len(b_x_chaincode)/5)
        if i%p == 0:
            b = Buffer()
            b = resample_list_frequency_duration(list_x, frequency, sampling_rate, sound_pixel_duration)
            b.plot(frequency)
            Plotter.show()


b_x_long.normalize()

min_y = b_y_chaincode.getMin()
max_y = b_y_chaincode.getMax()
for y in b_y_chaincode:
    frequency = convert_xy_frequency(y, min_y, max_y, min_f, max_f)
    b_y_long << resample_list_frequency_duration(list_y, frequency, sampling_rate, sound_pixel_duration)

b_y_long.normalize()

# code the x and y signal into the left and right channel of an audio stream
# write the audio stream into a .wav file
a = AudioStream(sampling_rate, 2)
a[0] = b_x_long
a[1] = b_y_long
a.writeWavefile("C:\\Users\\user\\Documents\\shape\\shape\\04 star xy_freq_shape2.wav")

Sonification - sound of sand - 7

An interesting error - In this program I tried to map the shape to the frequency domain while preserving its shape in the amplitude domain. For each different frequency I resampled the shape. This yields an interesting result, both audibly:

And visually:
The error is in the quantization of frequency. The number of samples per period is computed in the integer domain as nr_samples = int(round(sampling_rate / frequency)), and the rounding from float to integer maps many different frequencies (especially the higher ones) to the same number of samples. This causes the "blockiness" of the spectrum:
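A quick numerical illustration of that rounding (the frequencies are arbitrary example values; the rounding is the same int(round(...)) as in the program below):

sampling_rate = 44100.0
for frequency in (1800.0, 1850.0, 1900.0, 1950.0, 2000.0):
    nr_samples = int(round(sampling_rate / frequency))   # samples per period, rounded to an integer
    realized = sampling_rate / nr_samples                # the frequency that is actually produced
    print frequency, nr_samples, round(realized, 1)

With these numbers 1900 Hz and 1950 Hz both end up at 23 samples per period, so both are reproduced as roughly 1917 Hz - exactly the blockiness that is visible in the spectrum above.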


The sampling itself works nicely. For lower frequencies the sampling distortion is invisible. For higher frequencies the shape gets distorted but is still recognizable:


This is how it looks in Audacity:

from Nsound import *
import math


def convert_chaincode_to_x(c,x):
    if c == 1 or c == 0 or c == 7:
        x = x + 1
        return (x)
    elif ( c == 2 or c == 6):
        x = x
        return (x)
    else:
        x = x - 1
        return (x)


def convert_chaincode_to_y(c,y):
    if c == 1 or c == 2 or c == 3:
        y = y + 1
        return(y)
    elif c == 4 or c == 0:
        y = y
        return(y)
    else:
        y = y - 1
        return(y)


def convert_xy_frequency(xy, minxy, maxxy, minf, maxf):
    r = float(maxf - minf)/float(maxxy - minxy)
    f = (xy - minxy)*r + minf
    return (f)


def resample_list(nr_samples, list):
    # squeeze the whole shape list into nr_samples samples (one waveform period)
    resampled = Buffer()
    for i in range(nr_samples):
        resampled << list[int(i*(len(list)-1)/(nr_samples -1))]
    return(resampled)


# ==============================

debug1  = False
debug2  = False
debug3  = True

# read a chaincode .chc file that has been generated by SHAPE
infile = open("C:\\Users\\user\\Documents\\shape\\shape\\tiny_test.chc")
instr = infile.read()
infile.close()
if debug1:
    print instr

# parse the input file - split it into words
inwords = instr.split(' ')
if debug1:
    print inwords

# delete anything except the chain code
i = 0
for str in inwords:
    if str.find('0E+0') > -1 :
        break
    i = i + 1
inwords = inwords[i+2:len(inwords)-1]
if debug1:
    print inwords


# fill the x and y buffers with the chaincode values
b_x_chaincode = Buffer()
b_y_chaincode = Buffer()

x = 0
y = 0
for str in inwords:
    c = int(str)
    x = convert_chaincode_to_x(c,x)
    b_x_chaincode << x
    y = convert_chaincode_to_y(c,y)
    b_y_chaincode << y

b_x_chaincode = b_x_chaincode - b_x_chaincode.getMean()
b_y_chaincode = b_y_chaincode - b_y_chaincode.getMean()

if debug2:
    b_x_chaincode.plot("x value from chaincode")
    Plotter.show()
    b_y_chaincode.plot("y value from chaincode")
    Plotter.show()


# copy buffer to list so we can point to it by index
list_x = b_x_chaincode.toList()
list_y = b_y_chaincode.toList()


# generate a frequency modulated x and y signal
b_x_long = Buffer()
b_y_long = Buffer()

min_f = 40.0
max_f = 2000.0
sampling_rate = 44100.0
sound_pixel_length = sampling_rate * 0.01   # length of one sound pixel, in samples

min_x = b_x_chaincode.getMin()
max_x = b_x_chaincode.getMax()

if debug3:
    i=0
for x in b_x_chaincode:
    frequency = convert_xy_frequency(x, min_x, max_x, min_f, max_f)
    nr_samples = int(round(sampling_rate / frequency))
    temp = Buffer()
    temp << resample_list(nr_samples, list_x)
    for j in range(int(math.ceil(sound_pixel_length/nr_samples))):
        b_x_long << temp

    if debug3:
        i = i+1
        p = int(len(b_x_chaincode)/5)
        if i%p == 0:
            b = Buffer()
            b = resample_list(nr_samples, list_x)
            b.plot(frequency)
            Plotter.show()

b_x_long.normalize()

min_y = b_y_chaincode.getMin()
max_y = b_y_chaincode.getMax()
for y in b_y_chaincode:
    frequency = convert_xy_frequency(y, min_y, max_y, min_f, max_f)
    nr_samples = int(round(sampling_rate / frequency))
    temp = Buffer()
    temp << resample_list(nr_samples, list_y)
    for i in range(int(math.ceil(sound_pixel_length/nr_samples))):
        b_y_long << temp
b_y_long.normalize()


# code the x and y signal into the left and right channel of an audio stream
# write the audio stream into a .wav file
a = AudioStream(44100.0, 2)
a[0] = b_x_long
a[1] = b_y_long
a.writeWavefile("C:\\Users\\user\\Documents\\shape\\shape\\tiny_test_xy_freq_shape.wav")

Tuesday, May 15, 2012

Sonification - sound of sand - 6

Frequency domain - What we did with volume in the previous post we now do with frequencies. The python program below converts the X- and Y-components of the shape into varying frequencies. The input shapes are the same as in the previous example. You can hear the sound samples here:


Again the sounds are very technical and abstract. I'll leave it like that for the moment. I'm still learning about what I can do. You can see the spectrum of the signal as it appears in Audacity:

 
It is easy to see that the spectrum now has the shape of the X- and Y-components of the shape.

Next time I'll try to map the shape to the frequency domain while using the original waveform and not a pure sine signal. To do this I'll have to resample the waveform. This is a bit tricky to program but should give a more interesting signal.

In the meantime I've found a lot of interesting articles about shape sonification. In the future I'll try to do things with the curvature of the shape. And I haven't done anything with the spectral components yet.

from Nsound import *
import math

def convert_chaincode_to_x(c,x):
    if c == 1 or c == 0 or c == 7:
        x = x + 1
        return (x)
    elif ( c == 2 or c == 6):
        x = x
        return (x)
    else:
        x = x - 1
        return (x)

def convert_chaincode_to_y(c,y):
    if c == 1 or c == 2 or c == 3:
        y = y + 1
        return(y)
    elif c == 4 or c == 0:
        y = y
        return(y)
    else:
        y = y - 1
        return(y)

def sine_duration_frequency(duration, frequency):
    g = Generator(44100.0)
    length = math.ceil(float(duration) * float(frequency))/float(frequency)
    return g.drawSine(length, frequency)

def convert_xy_frequency(xy, minxy, maxxy, minf, maxf):
    r = float(maxf - minf)/float(maxxy - minxy)
    f = (xy - minxy)*r + minf
    return (f)

# ==============================
debug1  = False
debug2  = False
debug2a = False
debug3  = True

# read a chaincode .chc file that has been generated by SHAPE
infile = open("C:\\Users\\user\\Documents\\shape\\shape\\tiny_test.chc")
instr = infile.read()
infile.close()
if debug1:
    print instr

# parse the input file - split it into words
inwords = instr.split(' ')
if debug1:
    print inwords

# delete anything except the chain code
i = 0
for str in inwords:
    if str.find('0E+0') > -1 :
        break
    i = i + 1
inwords = inwords[i+2:len(inwords)-1]
if debug1:
    print inwords

# fill the x and y buffer with the chaincode values
b_x_chaincode = Buffer()
b_y_chaincode = Buffer()
x = 0
y = 0
for str in inwords:
    c = int(str)
    x = convert_chaincode_to_x(c,x)
    b_x_chaincode << x
    y = convert_chaincode_to_y(c,y)
    b_y_chaincode << y

    if debug2a:
        print c
        print x

if debug2:
    b_x_chaincode.plot("x value from chaincode")
    Plotter.show()
    b_y_chaincode.plot("y value from chaincode")
    Plotter.show()

b_x_chaincode = b_x_chaincode - b_x_chaincode.getMean()
b_y_chaincode = b_y_chaincode - b_y_chaincode.getMean()
# generate a frequency modulated x and y signal
temp = Buffer()
b_x_long = Buffer()
b_y_long = Buffer()

soundpixel_length = 0.02
min_f = 40.0
max_f = 5000.0

min_x = b_x_chaincode.getMin()
max_x = b_x_chaincode.getMax()

if debug3:
    i=0


for x in b_x_chaincode:
    frequency = convert_xy_frequency(x, min_x, max_x, min_f, max_f)
    b_x_long << sine_duration_frequency(soundpixel_length, frequency)

    if debug3:
        i = i+1
        p = int(len(b_x_chaincode)/4)
        if i%p == 0:
            b = Buffer()
            b = sine_duration_frequency(soundpixel_length, frequency)
            b.plot(frequency)
            Plotter.show()

b_x_long.normalize()

min_y = b_y_chaincode.getMin()
max_y = b_y_chaincode.getMax()
for y in b_y_chaincode:
    frequency = convert_xy_frequency(y, min_y, max_y, min_f, max_f)
    b_y_long << sine_duration_frequency(soundpixel_length, frequency)
b_y_long.normalize()

# code the x and y signal into the left and right channel of an audio stream
# write the audio stream into a .wav file
a = AudioStream(44100.0, 2)
a[0] = b_x_long
a[1] = b_y_long
a.writeWavefile("C:\\Users\\user\\Documents\\shape\\shape\\tiny_test xy_freq_chaincode.wav")



Sunday, May 13, 2012

Sonification - sound of sand - 5

First sonification results - I've managed to sonify my first experimental shapes. The results are encouraging. They are not very musical yet but with some optimization they might get interesting.

Small random test shape - We use the small experimental test shape for our first sonification. It has been used in my previous blog post so we're quite familiar with its properties:
The Python software (see below) first extracts the X and Y values that we get while traversing the outline in a counterclockwise direction. It transforms these values into two sound waves:
Then it concatenates these sound waves (they are extremely short at a sampling rate of 44100 Hz) into a longer sound sample and modulates the amplitude of this signal using the same shape.
This is a nice fractal twist and it feels like a very natural thing to do with the signal. This way the signal is made self-similar on two levels. (I don't think it would be feasible to add more than two levels of self-similarity; the signal would get too long.)
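Isolated from the full program below, the fractal twist is a sketch like this; b_shape stands for the mean-centred Buffer of X values extracted from the chain code (here a tiny made-up example):

from Nsound import *

b_shape = Buffer()
for v in [0, 1, 2, 1, 0, -1, -2, -1]:
    b_shape << v

b_long = Buffer()
for level in b_shape:          # one pass over the contour (the "macro" level) ...
    b_long << b_shape * level  # ... appends one amplitude-scaled copy of the whole contour (the "micro" level)

# the result is len(b_shape) squared samples long, which is why more than
# two levels of self-similarity would quickly become impractically long
print len(b_long)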

Then we put the X and Y signals into the left and right channels of an audio stream. Again this feels like a very natural thing to do with the signal.

Big star - In the same way we generate a sound sample for the star shape of one of the previous experiments. This gives comparable results.
Notice how recognizable the X and Y values are in the signal. In the X-direction the star has two points. In the Y-direction it has only one point.

Discussion of the results - You can listen to the resulting sound files here:


The outline of a shape has been mapped directly to the 44100 Hz sampling rate, so a smaller shape generates a higher note and a shorter sample. For the moment I will leave it like that because it is the most natural mapping; later I will explore other possibilities. As a result the test shape produces a high, mosquito-like drone, and the star shape produces a low, atmospheric, almost inaudible soundscape. Both sounds are quite abstract and unmusical, and this is how it should be for the moment.
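A quick back-of-the-envelope check of what this mapping means for pitch (the contour lengths are made-up examples, not measurements of the actual test shapes):

sampling_rate = 44100.0
# one chain-code step = one sample, so the contour length sets the fundamental
for contour_length in (150, 1500, 15000):
    print contour_length, "pixels ->", round(sampling_rate / contour_length, 1), "Hz"

A contour of a few hundred pixels lands well inside the audible range, while one of many thousands of pixels drops towards the bottom of it, which matches the mosquito drone of the small test shape and the almost inaudible rumble of the star.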
This is how the two sound files look in Audacity:
test shape
star
And here you see how the signal-in-signal looks in Audacity if you zoom into the details.

Now we've used the amplitude domain to map shapes into sound. I'll also try to use the frequency domain for this mapping.

Python program - I'm not sure this will run correctly if you copy it directly into your Python environment. I'm using Python-XY and this has all the necessary modules pre-installed. And Blogger may destroy some of the whitespace. So this could explain some unexpected bugs.
If things could be done in a more Pythonic way then I'm open to comments.

from Nsound import *
debug = True

# read a chaincode .chc file that has been generated by SHAPE
infile = open("C:\\Users\\user\\Documents\\shape\\shape\\04 star.chc")
instr = infile.read()
infile.close()

if debug:
    print instr


# parse the input file - split it into words
inwords = instr.split(' ')
if debug:
    print inwords


# delete anything except the chain code
i = 0
for str in inwords:
    if str.find('0E+0') > -1 :
        break
    i = i + 1

inwords = inwords[i+2:len(inwords)-1]
if debug:
    print inwords


# fill the x and y buffer with the chaincode values
b_x_chaincode = Buffer()
b_y_chaincode = Buffer()

x = 0
y = 0
for str in inwords:
    c = int(str)


    # convert a chaincode into a plot of the x value against time
    if ((c == 1 or c == 0) or c == 7):
        x = x + 1
    elif ( c == 2 or c == 6):
        x = x
    else:
        x = x - 1
    b_x_chaincode << x


    # convert a chaincode into a plot of the y value against time
    if ((c == 1 or c == 2) or c == 3):
        y = y + 1
    elif ( c == 4 or c == 0):
        y = y
    else:
        y = y - 1
    b_y_chaincode << y


b_x_chaincode = b_x_chaincode - b_x_chaincode.getMean()
b_y_chaincode = b_y_chaincode - b_y_chaincode.getMean()

if debug:
    b_x_chaincode.plot("x plot from .chc file")
    Plotter.show()
    b_y_chaincode.plot("y plot from .chc file")
    Plotter.show()


# generate an amplitude modulated x and y signal
b_x_long = Buffer()
b_y_long = Buffer()

for level in b_x_chaincode:
    b_x_long << b_x_chaincode * level
for level in b_y_chaincode:   
    b_y_long << b_y_chaincode * level


# normalize to prevent clipping of the output signal
b_x_long.normalize()
b_y_long.normalize()

if debug:
    b_x_long.plot("x plot from .chc file")
    Plotter.show()
    b_y_long.plot("y plot from .chc file")
    Plotter.show()


# make sure that the sound sample is long enough to hear anything
while len(b_x_long) < 200000:
    b_x_long << b_x_long
    b_y_long << b_y_long


# code the x and y signal into the left and right channel of an audio stream
# write the audio stream into a .wav file
a = AudioStream(44100.0, 2)
a[0] = b_x_long
a[1] = b_y_long
a.writeWavefile("C:\\Users\\user\\Documents\\shape\\shape\\04 star xy_chaincode.wav")

Friday, May 11, 2012

Sonification - sound of sand - 4

Chain coding - What does the SHAPE software do with a shape? It is interesting to get a clear understanding of what's happening. The transformation of a shape into a Fourier spectrum goes in two steps. The first one is chain coding. The software determines the pixellated outline of a shape and transforms it into a chain-coded string of numbers. Each step along the contour is translated into a number in this way:
 For example the leftmost edge of the pixellated shape above is translated into this number string:

...... 4 4 4 4 4 4   5 5 5 5 5 5 5   6 6 6 6 6 6 6 6 6 6 6 6 6 6 6   0 0 0 0 0 0 0 0 .......

We can use this as the starting point for a very direct, very primitive kind of sonification. Any shape can be decomposed into pixel-sized increments in the X and Y directions. Using ChcViewer.exe it is possible to visualize the X and Y components of a 2-D figure. For example if you start going counterclockwise from the position of the arrow:


Then you get this plot of the X-values of the shape as you go counterclockwise around its contour:


And this plot of the Y-values of the shape as you go around its contour:


You can see immediately that these plots can easily be transformed into sound waves. And we have a lot of degrees of freedom when combining the X and Y waveforms: we can add them together in different ratios and we can time-shift them with respect to each other.

Next time we'll see if we can get some sounds using Python and Nsound.

Note
You can make an X-plot by substituting these values in the chain code:
1,0,7 => 1
2,6   => 0
3,4,5 => 7

And an Y-plot by substituting these values:
1,2,3 => 1
4,0   => 0
5,6,7 => 7
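
A minimal Python sketch of this decomposition. The chain-code fragment is just an illustration (not the outline shown above), and the direction-to-increment mapping is the one implied by the substitution tables:

# Freeman chain code directions: 0=E, 1=NE, 2=N, 3=NW, 4=W, 5=SW, 6=S, 7=SE
DX = {0: 1, 1: 1, 2: 0, 3: -1, 4: -1, 5: -1, 6: 0, 7: 1}
DY = {0: 0, 1: 1, 2: 1, 3: 1, 4: 0, 5: -1, 6: -1, 7: -1}

def chaincode_to_xy_plots(codes):
    # accumulate the per-step X and Y increments into two "waveforms"
    x, y, xs, ys = 0, 0, [], []
    for c in codes:
        x += DX[c]
        y += DY[c]
        xs.append(x)
        ys.append(y)
    return xs, ys

# an illustrative chain-code fragment
codes = [4, 4, 4, 5, 5, 5, 6, 6, 6, 7, 7, 0, 0, 0]
xs, ys = chaincode_to_xy_plots(codes)
print xs
print ys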

References
Shape software - http://lbm.ab.a.u-tokyo.ac.jp/~iwata/shape/index.html
Python XY - http://code.google.com/p/pythonxy/
Nsound (included in Python XY) - http://nsound.sourceforge.net/users_guide/index.html

Monday, May 7, 2012

Sonification - sound of sand - 3

Testing - To get a feeling for elliptic Fourier analysis with the SHAPE software I made a set of test shapes:
Then I determined their outlines with ChainCoder.exe and calculated the elliptic spectra with CHC2NEF.exe. Then I made a plot with OpenOffice Calc. These are the results with some analysis:

Note: Below I've replaced the primary component of 1.0 with 0, otherwise the "harmonics" would be totally invisible. That's why you see nothing in place 1.
Circle - Theoretically a perfect circle should not have a spectrum; it should only have the first component. But my hand-drawn, digitized circle made in MS Paint is not perfectly symmetrical and it has rough edges. So there are still some "harmonics", but these are much fainter than the harmonics of the other shapes (a factor of 100: 10^-3 instead of 10^-1). I assume that I'm just seeing "random noise" and "quantization noise" in this spectrum.
Triangle - One would expect the order-3 harmonics to be dominant for a triangular shape but that is not the case at all. There is no immediately visible correlation between a figure and its spectrum. This is even more obvious if one looks at the shape as more harmonics are added. The second harmonic is already sufficient for a nice triangular shape. I edited the .nef output files by hand and plotted them with NefViewer.exe.
Note: Below I've replaced the primary component of 1.0 with 0, otherwise the "harmonics" would be totally invisible. That's why you see nothing in place 1.
But if we look at the spectral components then we see that 10 harmonics is not really sufficient for a nice sharp triangle. We need at least 20 harmonics.
Star - Here you would expect that the order 2, 3 and 6 harmonics would be dominant. And you would expect that the spectrum of the triangle looks similar to the spectrum of the star. But things are not that intuitive.
Square - Here there's a surprising similarity between normal Fourier spectra and 2-D Fourier spectra. A square wave spectrum has only odd harmonics, and this 2-D spectrum also has only odd harmonics!

Rectangle - There is a lot of similarity between the square and the rectangle. Also only odd harmonics, but the signs and ratios of the harmonic components are different from the square. It is interesting to see how the rectangle is constructed from the different harmonics.
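
As a footnote on how a shape can be rebuilt from a truncated spectrum (which is what NefViewer.exe shows): the reconstruction is a pair of Fourier sums, one for X and one for Y. The coefficients below are invented placeholders (the real ones are in the .nef files); a modest second harmonic already bends the circle into a rounded triangle, which matches the observation above.

import math

def reconstruct_outline(A0, C0, harmonics, n_points=200):
    # harmonics is a list of (a_n, b_n, c_n, d_n) tuples, one per harmonic n:
    #   x(t) = A0 + sum_n [ a_n*cos(2*pi*n*t) + b_n*sin(2*pi*n*t) ]
    #   y(t) = C0 + sum_n [ c_n*cos(2*pi*n*t) + d_n*sin(2*pi*n*t) ]
    xs, ys = [], []
    for i in range(n_points):
        t = float(i) / n_points
        x, y = A0, C0
        for n, (a, b, c, d) in enumerate(harmonics, start=1):
            x += a * math.cos(2 * math.pi * n * t) + b * math.sin(2 * math.pi * n * t)
            y += c * math.cos(2 * math.pi * n * t) + d * math.sin(2 * math.pi * n * t)
        xs.append(x)
        ys.append(y)
    return xs, ys

# placeholder coefficients: the first harmonic is a circle, the second one
# (with opposite signs in X and Y) turns it into a rounded triangle
harmonics = [(1.0, 0.0, 0.0, 1.0),
             (0.2, 0.0, 0.0, -0.2)]
xs, ys = reconstruct_outline(0.0, 0.0, harmonics)
# xs and ys can be plotted in OpenOffice Calc or matplotlib to see the outline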