Now it seems that I've been doing experiments in music forever, ever since the first time I connected my MIDI keyboard to a computer and a piece of software that came on 12 floppy discs.
The software was called Voyetra Orchestrator Plus, by the way: very impractical to use, but still the best I could get at a time when you could hardly buy normal food in Serbia, let alone music software, so my dad bought it for me in Greece. Anyway, since those early experiments with MIDI composition, one of my biggest personal breakthroughs in music happened when I started thinking about connecting music and drawing, a field I would later discover is called visual music. Since the composition lessons at the Belgrade University of Art didn't satisfy my appetite for experimentation, or at least not in a way that interested me, I turned my little apartment into a laboratory where I tried things out daily using my modest equipment, which was just good enough to begin with. Basically, all I needed was a pen, paper and a solid microphone.
At the time I was improvising with musicians from Belgrade gathered under the name Addlimb, a very talented group of people interested in free improvisation. In one of our sessions I wanted to use an interface that I found very interesting (see the photo above). The first thing that amazed me when I listened to the drawing sounds was the similarity between the shapes on the paper and the musical form. When I drew a circle repeatedly, the sound I got was continuous, just as the circle is; and when I drew a line, repeating the same move from left to right, what I got was a short "rhythmical" model with a beginning and an end, just like the line itself. The process gained another dimension when I incorporated a Kaoss pad (a sound effect processor with a touchpad) into the setup. Paralleling the moves of the pen with moves I made on the touchpad with my other hand, adding color to the sound (for example, adding distortion to the drawing sound and moving my hand along the X and Y axes of the touchpad to change the character of the effect), it was as if the surface of the paper had certain acoustic properties that my pencil, by moving across it, was discovering and making audible. Since I liked the sonic outcome of combining the drawn shapes with the effects added to them so much, I started thinking about how to notate all this so I could repeat it later (in a session or a recording). Here is an unfinished example of the score that I used:
Version 2.0 - going (almost) digital :)
Very soon I realized that if I simply exchanged the paper for a digital interface I already had, a graphic tablet, I could combine it with the touchpad of the Kaoss effect processor and wouldn't have to simulate the movement of one hand with the other: one surface and one pen would work much better. Although MIDI and OSC interfaces of this kind were widely known and used at the time, I didn't have the right sources of information at my disposal. So instead of using my graphic tablet as a MIDI controller that would play a VST instrument and whatever sound effect I needed, changing its properties as the stylus moved over the tablet's surface, I did something more primitive. Though now I actually think it was cute how I thought of it :))
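The MIDI-controller idea I missed back then boils down to a simple mapping: take the stylus's X/Y position on the tablet and scale each axis to a MIDI Control Change value (0 to 127), which an effect or VST instrument can then listen to. A minimal sketch in Python; the tablet dimensions and the function name are my own illustrative assumptions, not anything from the actual setup:

```python
def xy_to_cc(x, y, width, height):
    """Map a stylus position on the tablet to two MIDI CC values.

    x, y: stylus coordinates in tablet units.
    width, height: tablet dimensions in the same units.
    Returns a pair of integers in the 7-bit MIDI range 0..127.
    """
    # Clamp to the tablet surface, then scale to the MIDI CC range.
    cc_x = min(127, max(0, round(x / width * 127)))
    cc_y = min(127, max(0, round(y / height * 127)))
    return cc_x, cc_y

# A stylus halfway across a hypothetical 10000 x 7500 unit tablet
# lands near the middle of both CC ranges.
print(xy_to_cc(5000, 3750, 10000, 7500))  # (64, 64)
```

Each returned pair would then be sent as two Control Change messages (one per axis), so moving the pen continuously sweeps two effect parameters at once, exactly the two-handed Kaoss-pad gesture collapsed into one surface and one pen.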
What I did was this: I dismantled my brand new Kaoss pad and glued its touchpad to the surface of the graphic tablet. Now, when I started drawing in the software that came with my cheap graphic tablet, I would simultaneously get the amplified sound running straight into an effect, which made it more audible, and the sound would follow the graphical representation in realtime. The new interface looked like this:
As a result, I had a small realtime audio-visual experiment on my hands: I was able to record it and observe my newest finding by watching and listening:
This experimentation of mine led me to learn about amazing works from the past, and the biggest influence turned out to be Xenakis's UPIC. Following that line of thinking, I also tried the IanniX software, a graphical realtime sequencer. I read a lot about combinations of music and animation and was fascinated by the work of Fischinger, but the most interesting of all turned out to be the early experiments by the Russians. I heard more about these in lectures by Andrey Smirnov at the Sibelius Center for Computer Music and Technology some years later, during my graduate studies. Then came Oramics, etc. etc. etc.
Considering all this, it is no wonder that I made what I made in my first ever Pure Data class (which was also my very first class at the Media Lab, in the Sound in New Media Master's program that I enrolled in in 2008). No wonder that the first interactive thing I ever actually made was a realtime sound and graphics patch controlled with a graphic tablet :). A very short excerpt from my presentation: