Sunday, November 18, 2007

Instrument Overview

My instrument is called "The Megaphonator" and is based on the concept of
changing a well-known sound by amplifying a piezo mic recording a basic speaker
signal. The signal is taken from an iPod and is played on two low-quality
speaker systems (varying in timbre), and a piezo mic is moved around these to
produce feedback, distortion, or other effects in a purely analogue fashion. One
of the speaker systems is made out of two stereo channels and allows the piezo
to be placed in a way that lets it pick up equal amounts from the left and
right channels, which causes a feedback effect that seems to rise through the
harmonic spectrum. The resulting sound is played through the Daphon amp we were
given. The circuit for this transition is encased in a black box and makes use
of breadboarding. Through the process of manually moving the piezo around the
speakers, or just touching it very softly, a Victorian Synth feedback effect can be
observed (made more obvious when we add rice or other small particles to the
speakers), and the outer casing works not only to protect the fragile components,
but is also cleverly built to allow maximum flexibility in the arrangement of
the two speakers, the piezo and the amp. The outer casing, in addition to
breadboarding and the Victorian Synth concept, makes use of a third concept we
were taught.

Note: the most interesting sounds on my instrument are usually at a level that
is too soft to hear in the heart of the improvisation sessions, due to the high
output of some of the other instruments, which is why I tend to turn it up and
sacrifice some variety for amplitude in such situations.

It was very fun creating and improvising with the instrument, and I learned a lot about electronics from it. It also taught me how to improvise as part of a group, as I usually work as a solo artist.


Here are some photos:


http://www.box.net/shared/ch5ai0btti

http://www.box.net/shared/kqeimf8p9f

http://www.box.net/shared/erfrapm4q1

Super Collider

Dear ligs

For those who are interested, here is my SuperCollider code and MP3. Enjoy!

// Buffer

b = Buffer.read(s, "Sounds/Soprano Piano.wav");
c = Buffer.read(s, "Sounds/Techno.wav");

// Synthdef 1

(
SynthDef(\bufGrain, { arg out = 0, bufnum, rate = 1.0, amp = 0.1, dur = 10, freq = 10, pPos = 0.5, startPos = 0, reverb = 0;
	var signal;
	// Grain: play the buffer, shaped by a sine envelope that frees the synth when done.
	// startPos is passed by keyword -- PlayBuf's fourth positional argument is the trigger, not the start position.
	signal = PlayBuf.ar(1, bufnum, rate, startPos: startPos) * EnvGen.kr(Env.sine(dur, amp), doneAction: 2);

	// Comb filter for the decay; maxdelaytime raised to 1 so the XLine's initial 1 s delay is not clamped.
	OffsetOut.ar(out, { CombC.ar(signal * 0.2, 1, XLine.kr(1, 0.01, 0.001), reverb) } ! 2)
}).load(s)
)



// -------------------------------------------------------------------------------
// GUI & GRAIN CREATION
// -------------------------------------------------------------------------------
(
// Variables
var win,
sliderData,
controlArr,
time = 0, // Start Time
totalTime = 100, // Total Time
thisGrainDur, // Grain Duration
message, // SynthDef Message
wait,
sf,
offset,
activity,
deviationmult,
dutycycle; // Interval

// Build Slider Data
sliderData = [
// Label Min Max Init Param
["Grain Rate", [-1.0,25.0], 1.0, \rate],
["Grain Amplitude", [0.0, 1.0], 0.1, \amp],
["Grain Duration", [0.0, 3.0], 0.5, \dur],
["Grain Start Position", [0.0, 100000.0], 0.1, \startPos],
["Grain Decay", [0.0, 5.0], 0.0, \reverb]
];

// Build Control Data Array for Temporary GUI Information
controlArr = Array.fill(sliderData.size, {Array.newClear(5)});

// Window Setup
win = SCControlWindow(
name: "Granular Synth 1",
bounds: Rect(64,0,600,400)
).front;



offset=DDSlider( win, Rect.new(50,240,200,40), "offset", 0.0, 10.0, 'linear', 0.01, 0.0);
activity=DDSlider( win, Rect.new(50,310,200,40), "activity", 0.0, 5.0, 'linear', 0.01, 0.0);
deviationmult=DDSlider( win, Rect.new(280,240,200,40), "deviationmult", 0.0, 5.0, 'linear', 0.01, 1.0);
dutycycle =DDSlider( win, Rect.new(280,310,200,40), "dutycycle",0.0, 2.0, 'linear', 0.01, 1.0);
// Build Control consisting of Label, Min NumBox, Range Slider, Max NumBox,
sliderData.do({ arg val, idx;

// Feedback
val[1].postln;

// Variables
m = idx * 30; // Multiplier to layout GUI

// Label
SCStaticText(
parent: win,
bounds: Rect(5, m, 100, 20)
).string =val[0];

// Min Number Box
controlArr[idx][0] = SCNumberBox(
parent: win,
bounds: Rect(110, m, 50, 20)
).value_(val[1][0]);

// Max Number Box
controlArr[idx][1] = SCNumberBox(
parent: win,
bounds: Rect(480, m, 50, 20)
).value_(val[1][1]);

// Create Button
// (NB: this sits inside the sliderData.do loop, so it is re-created on each
// iteration; the bounds are identical so it looks fine, but it really belongs
// outside the loop)

(p = SCButton(win, Rect(20,168,220,50));
p.states = [
["Pick a Granulation File"],
["Pick a Granulation File", Color.white,Color.black]];
p.action = {

(
CocoaDialog.getPaths({ arg paths;
paths.do({ arg u;
b = Buffer.read(s, u);

u.postln;
})
},{
"cancelled".postln;
});
)

});

// BBCUT!


sf = BBCutBuffer("sounds/Electro.wav",16);

// Button 2

(p = SCButton(win,Rect(290,168,220,50));
p.states = [
["Pick a BBCut File"],
["Pick a BBCut File", Color.white,Color.black]];
p.action = {
(
CocoaDialog.getPaths({ arg paths;
paths.do({ arg t;
Routine.run({

sf = BBCutBuffer(t ,16);

s.sync;
// 3.33bps= 200 bpm
BBCut2(CutBuf2(sf, offset, deviationmult, dutycycle),SQPusher1(activity)).play(3.33);

});
t.postln;
})

},{
"cancelled".postln;
});

)});

// Create Range Slider
controlArr[idx][2] = SCRangeSlider(
parent: win,
bounds: Rect(170, m, 300, 20)
);

// Slider remapping from 0.0 - 1.0 to .. - ..
controlArr[idx][3] = val[1].asArray.asSpec;

// Slider Action
controlArr[idx][2].action_({

// Change Number Box Value
controlArr[idx][0].value_(controlArr[idx][3].map(controlArr[idx][2].lo.value));
controlArr[idx][1].value_(controlArr[idx][3].map(controlArr[idx][2].hi.value));

});

});

// Routine
r = Routine.new({

// Do function
inf.do{
// Arguments
arg item;

// Variable
// [controlArr[0][0].value, controlArr[0][1].value].postln;

message = [
\freq, rrand(controlArr[0][0].value, controlArr[0][1].value),
\amp, rrand(controlArr[1][0].value, controlArr[1][1].value),
\dur, thisGrainDur = rrand(controlArr[2][0].value, controlArr[2][1].value),
\pPos, rrand(controlArr[3][0].value, controlArr[3][1].value)
];

// Instance
(

Synth(\bufGrain, [ \bufnum, b.bufnum,
\rate, rrand(controlArr[0][0].value, controlArr[0][1].value),
\amp, rrand(controlArr[1][0].value, controlArr[1][1].value),
\dur, thisGrainDur = rrand(controlArr[2][0].value, controlArr[2][1].value),
\startPos, rrand(controlArr[3][0].value, controlArr[3][1].value),
\reverb, rrand(controlArr[4][0].value, controlArr[4][1].value)

]));


// Duration and Interval
wait = thisGrainDur * rrand(0.1,0.2);
wait.wait;
};
});
AppClock.play(r);
)
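For a feel of how dense the grain stream above gets: each new grain is scheduled after only 10-20% of the previous grain's duration, so grains pile up heavily. Here is a small back-of-envelope simulation of that scheduling logic (Python rather than SuperCollider, purely for illustration; the function and variable names are mine, and only the numeric ranges come from the code above):

```python
import random

def simulate_grains(total_time=5.0, dur_range=(0.0, 3.0), seed=1):
    """Mimic the routine: each grain gets a random duration, and the gap
    before the next grain is dur * rrand(0.1, 0.2)."""
    rng = random.Random(seed)
    t, grains = 0.0, []
    while t < total_time:
        dur = rng.uniform(*dur_range)
        grains.append((t, dur))            # (onset, duration)
        t += dur * rng.uniform(0.1, 0.2)   # wait = thisGrainDur * rrand(0.1, 0.2)
    return grains

grains = simulate_grains()
# Count how many grains are sounding at an arbitrary point in the timeline.
overlap = sum(1 for start, dur in grains if start <= 2.5 < start + dur)
```

With waits that short, any point in the middle of the timeline is typically covered by several overlapping grains at once, which is what gives the granular texture its smeared, continuous quality.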



MP3 is here:



http://www.box.net/shared/68s8dx9k92

Tuesday, August 7, 2007

Forum Week 2

This week's forum was a journey to engineering. Not to say that past forums were not a journey to engineering, since random bleeps and squawks are closer to this field than music. This week, however, was a physical journey to the engineering space in an effort to maximize our tech studies efficiency. Who would have thought that a music course would lead to the path of manual labor? Only those who deny the pure vision of Cage and his followers.

We got to wear goggles, which was hot. Unfortunately, nothing hit them in the process so it was a waste of time and equipment. We learned how to solder wires together, and this is a problem I will not have in life from now on. We also learned how to wire up a piezo mic to a male jack and plug this into speakers. The results were musically... the results in an engineering sense were successful, i.e. it made a sound. We did not get to finish making these since we ran out of time, but I cannot wait until next week when I will be making beeps to rival those at a pedestrian crossing (fingers crossed! hahaha).

Tuesday, July 31, 2007

Australian Music Presenter Reviews:

Paula Rosenbauer was the first of a number of weekly speakers who will talk to us about certain aspects of the music industry in Australia. She was a representative from APRA (Australasian Performing Right Association), a company which "collects and distributes royalties for songwriters, composers and publishers through licensing agreements with music users. APRA currently has over 41,000 members but represents over a million songwriters worldwide through it's affiliations with overseas societies". I copied that from the introduction on the course outline we were handed, but as you can see the "it's" in the last sentence is wrong, because "it's" is an abbreviation for "it is". A better word in that place would have been "its". Having decided to copy most of the paragraph word for word, however, I found it hard to resist the temptation of also copying the mistake and later pointing it out. That later is now, and I pointed it out in the last sentence as well as this one. I will now cease to point out the mistake for the remainder of the review.

Paula explained that in Australia, APRA takes care of protecting the rights of music makers and performers by monitoring the industry and making sure they are paid all the royalties they should be. While in Australia copyright exists as soon as you put an idea into a physical form (which can be as little as writing some notes on a page), it is hard for individuals to know if their rights are being infringed. Basically it is good to be a member if you want to earn royalties from your music in any way.

FACT!!!

1 in 400 Australians is an APRA member. So if you walk down Rundle Mall you might see 1 or so. You can then proceed to interact with him/her. Females can also apply to be APRA members; there are currently no gender restrictions.

Sunday, July 29, 2007

Semester 2 Week 1 Year 3

This blog will talk about the week.

This semester started off well since most of the classes so far show potential of developing into subjects with content that can be useful to me in the near future. The Australian music subject is a good example of this since it seems we will be learning a lot about how the industry works, where the money is and how to get it. The movie sound subject is good because it is a skill I want to know, and we are also required to make our own movies which is also useful. Overall, most classes show a lot of promise. Forum is... unique still.

In the movie subject we had a look at Luke Harrold's honours project, a witty short film describing the monotony of factory work. Having worked in a factory myself, I can sympathize with Luke but cannot understand how one can work full time in such a place; it is hell. The sounds were done on an early Pentium but sounded decent. They were recorded individually and placed over the images afterwards. There was hardly any sound from when the scenes were actually shot. The one thing I felt was missing was a sound for the lentil bags falling on top of each other, but apart from that it was well synchronized. We were also given some sheets to read, including a list of some movies with well-worked sound. I rewatched one of these movies (Raging Bull, with Robert De Niro) with the sound in mind and noticed some nice passages. The title sequence is especially effective, with soft orchestral music playing to images of a hooded boxer shadow-boxing in a ring. Apart from the "oldschool" fighting scenes that did not really impress, the movie is a very good one.

In forum we were introduced to the concept of making instruments by hand. This is not really my cup of tea, but then again forum hardly ever was. I can see how there are lessons to be learned here, how knowing how to make sound with some batteries and a speaker can be good to know, but the sound quality of these "instruments" (electric circuits) is poorer than even the most basic 1960s synths. We were shown a demonstration of some of the concepts in action and I was not sold; it sounded like PC speaker gone horribly wrong, and PC speaker is pretty wrong. This time could be used to analyze successful (oops, the S word, that is NEVER to be used in an academic environment) music, but of course this is not what an academic environment is meant to teach. How then, I wonder, are we ever expected to become successful musicians? I have not heard the word chorus mentioned once in a lesson, and I feel sorry for those who intend to go down the musician or producer path. Or maybe we are NOT meant to be successful musicians? Maybe the academic environment is meant for those with an academic industry interest? Hmmmmmm...

However, as previously mentioned, the semester shows a lot of potential for developing into a useful one, but forum is still questionable.

Tuesday, June 26, 2007

I'm colliding baby, i'm colliding

Want to hear winchimes but live in an area with no wind?

Want to hear music but sick of tonal chord progressions?

Want to hear some sinewaves?

WAIT NO LONGER!


Is it a bird? Is it a plane? Or is it...



The Colleidoscope! On a Collision course to COLLIDERTOWN! Now with collidervision and sound, in 1.1 mono (and with no visual). Prepare to be collided... the oldschool way.

In the beginning people were just colliding by hand. It was a slow, tedious process that led to much frustration and wasted energy. Then, somebody had the idea of making colliding a digital process and invented Collider, a sequence of digital processes that synthesizes colliding inside the computer! Collider made an impact, as expected, and its owner Bill Tocollide became a billionaire within hours of its release. But something was wrong... while the program was colliding, it was not SUPER colliding. This problem was addressed in 1987 when Sony, Microsoft and BMW (SMB for short) joined forces in an effort to create the next logical step in Collider's evolution. With plenty of resources and the leading scientists at NASA leading the charge, Super Collider 1 was finally released. To say it made an impact on the music world would be like saying John Cage was only a "pretty cool" guy. The shockwave experienced after its release changed music forever, like Cage changed his clothes every day. And boy, were they some clothes...

SMB enjoyed the financial compensation for this epic discovery, but the tensions between segments within the company grew. Those more interested in music and the entertaining power of Supercollider wanted to build a collider that could not only collide, but be enjoyed by the whole family in 3d graphics plugged into a TV. They wanted to release "Colliding station". The programmers in the company wanted to release "Collider '95", a Colliding program that would come with every new computer and feature a free internet browser, "Collide Arouser". The third section of SMB wanted to make high quality European cars instead. So the company split apart, into Sony (who went on to create the Playstation based on its colliding concepts), Microsoft (who created Windows 95 and Internet Explorer as a way to share colliding projects with other enthusiasts around the globe) and BMW, who went on to make European cars.

The company that took over was Machintosh, a contaminated apple company somewhere in France. Machintosh wanted a collider that could not only Supercollide, but Supercollide 2. After many failed attempts Super Collider 2 was finally released.

Its welcome was as expected, and even cinemas sold out their special "SuperCollider 2" 3 hour previews. In 1991 an enthusiastic John Cage got hold of the SuperCollider 2 concepts, and, in an effort to mix the new, digital Supercollider 2 with original analogue concepts of colliding by hand, created Supercollider 3. The new collider offered unprecedented sound quality, and due to its construction could make sinewaves as pure as John Cage's concepts. It also included a random function.

Supercollider 3 is often taught in musical institutions as an alternative to colliding by hand, or composing real music.

It is with this program that I made the following piece:

http://www.box.net/shared/c4si2hu49v



The work uses 4 sine-wave synths that either add notes above or below the note currently playing. The choice of the next note is a weighted random value, so for example in the semitone synth there is a 25% chance that the next note will be a semitone up or down, a 17% chance that it will be 2 semitones up or down, and so on up to about 5-6 notes. There is also a chance the note repeats. In addition, the longer the synth has been playing the more chance the notes have of not playing at all, or of playing (if the synth started silent). The song uses these 4 random note synths layered on top of each other (towards the middle you will hear about 7 semitone synths playing almost 10 notes a second each). The synths also play at different speeds.
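The weighted-random walk described above can be sketched in a few lines. This is a language-neutral illustration in Python, not the actual SuperCollider patch; the weights are rough interpretations of the percentages quoted, and the function and constant names are invented:

```python
import random

# Approximate interval weights from the description: 25% for +/-1 semitone,
# 17% for +/-2, tapering off up to about 5 semitones, plus a repeat chance.
STEP_WEIGHTS = {0: 10, 1: 25, 2: 17, 3: 10, 4: 6, 5: 3}

def next_note(current, rng):
    """Pick the next note: choose a step size by weight, then a direction."""
    steps = list(STEP_WEIGHTS)
    weights = [STEP_WEIGHTS[s] for s in steps]
    step = rng.choices(steps, weights=weights)[0]
    if step == 0:
        return current                      # repeat the current note
    return current + rng.choice([-step, step])

rng = random.Random(42)
line = [60]                                 # start on middle C
for _ in range(20):
    line.append(next_note(line[-1], rng))
```

Layering several of these walks at different speeds, as the piece does, gives independent melodic strands that drift around their starting pitches without ever committing to a tonal progression.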


May this collide with your soul as my soul collided with this program.


Also check out

http://uncyclopedia.org/wiki/Supercollider

My contribution to uncyclopedia.

Cheers to Ben for pointing it out to me!

Sunday, June 3, 2007

Mastering Exercise

http://www.box.net/shared/anxmm0mdh5
(mastered)

As opposed to:

http://www.box.net/shared/mytyldcb9q
(original)


This is the result of the mastered version of my song. I am fairly happy with it, considering when I started I did not expect to get anything too solid from the process. But I think the file selection had a lot to do with it too. Because it was so weak, it was easy to EQ and compress without distorting, something that cannot be said for my previous master attempt. Because it was totally electronic, it was also free of distortion, and this made it unnecessary to try and iron out "mistakes" from the recording process.

Basically I made 2 copies of the file and treated both fairly separately. In one I boosted the low end, as well as a narrow Q around the 100 Hz mark for the bass drum, plus the mid-highs and highs. I also compressed and stereo-imaged it a bit (narrowed the highs and left the lows alone). It sounded powerful but lacked the snare punch. For this, I treated the second copy. I EQ'd the mid-highs and mids around the area of the snare, but was careful to take out a narrow band where a synth stood (around 1000 Hz). I also compressed it a lot harder so the snare stood out more.

In addition to these 2 tracks, I also mixed in the original track at a lower volume to try and give it a bit of life. The main issue with 3 identical tracks is phasing, but because the tracks were treated so differently, this is not too obvious to my ears, and the powerful sound more than makes up for the small amount of phasing that could be present. I think it's a decent attempt, not fantastic but definitely better than my last and something I'm happy with. Enjoy!
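The three-copy approach described here is essentially parallel processing: each copy receives its own treatment and the results are summed back together. A toy sketch of that signal flow (pure Python, with a crude hard-knee compressor standing in for the real plugins; the thresholds, ratios and blend levels are made-up illustrations, not the actual session settings):

```python
def compress(samples, threshold=0.5, ratio=4.0):
    """Crude hard-knee compressor: gain-reduce anything over the threshold."""
    out = []
    for x in samples:
        if abs(x) > threshold:
            over = abs(x) - threshold
            x = (threshold + over / ratio) * (1 if x > 0 else -1)
        out.append(x)
    return out

def blend(track):
    # Copy 1: heavy squeeze for the snare punch; copy 2: lighter squeeze for
    # body; plus the untouched original at low level "to give it a bit of life".
    punchy = compress(track, threshold=0.3, ratio=8.0)
    body = compress(track, threshold=0.6, ratio=2.0)
    return [0.4 * p + 0.4 * b + 0.2 * o
            for p, b, o in zip(punchy, body, track)]

mix = blend([0.0, 0.9, -0.9, 0.2])
```

Because the copies stay sample-aligned, summing them cannot comb-filter on its own; any phase shift comes only from the processing itself (e.g. minimum-phase EQ), which matches the observation above that the phasing is mild rather than an obvious hollow sound.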

Wednesday, May 30, 2007

Piece that we have to master

This week we have to master an electronic piece, and luckily we are able to try it on one of mine. The piece I chose is fairly weak-sounding on purpose and lacks a lot of punch, but it might be able to be pumped up a bit with mastering. I hadn't started mastering it when I posted this, so this is where I'm starting from too:

Enjoy! :D

http://www.box.net/shared/mytyldcb9q

Tuesday, May 29, 2007

Week 11 Forum Review

Well, deconstruct me a constructor, we had another week of deconstruction talks. I found the deconstruction element to feature more widely in these talks as opposed to construction. What do people have against construction? Is it because my house is next to one getting built and you don't want to offend me? Or is it because we attend the CON servatorium, and we feel we need to make up for the missing DE in our lives?

Either way, Simon Whitelock was first. I don't understand the "DJ" movement, and I have to say I never heard of my mentor using such commercial equipment. All Cage needed was a bowl and some dice, as well as 1-2 well-prepared mushroom meal ideas. Instead, we are hearing talks about people stealing ideas from other people (JACKONSTRUCTING) and CONSTRUCTING a new work from them. Is this a good idea? All I know is that if it's not random it's not worth doing. The devil's work, such as DJing, is trying to lure you into a world of magical logical harmony and constant beats. The chance of the bassdrum coming up on exactly the first crotchet of every bar, if you are randomizing it to semiquaver levels, is 1 in 2^16, and the fact that this occurs throughout the piece PROVES it is not random. Also, tonality seems to live and breathe through this music and this is something we need not encourage. Free yourselves. Take a break from this gibberish, I say, take a chance and roll the dice.

Nathan Shea was next, and the presence of strong noise content in poor quality recordings was a welcome change from the bestiality we had been subjected to. I could relate this to my own interests, and if the guitars had been replaced with a sinewave modulated by the drums, which could be teapots, we would be on to something.

Last and best, John Delay. A man among men. A trooper against adversity. The real deal. If we had a war, John should be the commander in chief. A veteran of rhythm and a patriot of harmony. John Delay is all this, and was even more in his eye-opening presentation. Any music with little percussive content and slow constant change is to be appreciated, but the examples he played gave me a tingling in my ningling I hadn't experienced since "4'33: Live aus Berlin" came out. A wonderful expansion of the senses, and one that made sense. What a way to end the day, and what a day it was.

Sunday, May 27, 2007

Week 11 Audio Arts

This week we had to master a stereo file we previously mixed. I chose to use this mix:
http://www.box.net/shared/rsj69b7gji
(last week's submission)



And here are the 3 mixes I tried doing. The 2nd might be the best but I'm not happy with any.

Mix 1: http://www.box.net/shared/qdk3812ban (Too much high frequency adds to the peaks... however, drowns them out in a weird way and is less obvious than I thought it would be)

Mix 2: http://www.box.net/shared/8viqk1ea44 (Better than mix 1, less highs, more compression. Piano sounds less natural than I would like and a lot worse than the original, however, so by taking the annoying 15,000 Hz frequency out I killed the piano sound)

Mix 3: http://www.box.net/shared/42u0xoemnb (Highs are reduced then amplified with a 2nd EQ later on... Result is that the annoying peak is still there. Sounds more open than the 2nd mix but peaks are a lot more obvious)

I found it very difficult to get a decent result. I tried using a lot of different EQ and compressor settings, but the fact that the recording was not perfect (had some peaks) was accentuated with every attempt. This was especially evident in the piano. In addition, the mixed version (not mastered) sounded solid to me and I could find little to actually fix. The drums were probably the weakest instrument, and the bass could have had a little more volume higher up (around 100-150 Hz), but apart from that I was very happy.

The mastering was mainly an effort to take the peaks out and compress it a little. I also wanted to give it just a little more top end and bass, but not much. However, whenever I took out the piano peaks (which sit around 15,000 Hz) by EQing them heavily, the overall sound suffered a LOT, especially in the piano and drums (snare mainly). To make up for it I tried adding a 2nd EQ to boost the overall high frequency range (over 10,000 Hz) after taking out the 15,000, but the result was still flat. Also, I found no way of boosting the drums or making them sound better. Basically, here are 3 attempts I went through, but I still prefer the original (especially since the addition of any EQs either flattens the sound by taking the peaks away or makes them even MORE obvious). Compression also adds to the peaks. Overall I tried a lot of different things but would like to learn more about this topic before I am happy with my skills. Luckily we're doing more next week!

Tuesday, May 22, 2007

Week 10 Forum Review

Last week's forum was the best because I got to do a presentation. Everybody thought I was super cool, and I totally was. It made everyone feel happy and eager to listen to more talks. My talk was about Construction and Deconstruction, a topic I suggested we study because I saw great learning potential... was I ever right! From the moment I started talking, I could see in people's eyes a glowing, growing passion for learning. From my talk they learned about construction in musical pieces (introduction), deconstruction in musical pieces (the end bit) and thousands more topics. It was great to finally be able to get out there and publicly express these burning feelings about construction and deconstruction. This could have been the perfect topic for me for a number of reasons. Firstly, I've been constructing all my life. I've been constructing more cells as I grew, I've been constructing musical works, and I've especially been constructing an eagerness to get these feelings out about construction and deconstruction. To my own credit, I've also been deconstructing a lot through my short career: deconstructing plates when I broke them, deconstructing nuclear reactors, and deconstructing food through a powerful but effective digestive system. If I had more time, I would've deconstructed a lot more, but for my young age this is a start... In the future, I look forward to constructing a lot more opportunities for me to deconstruct this topic for young audiences. This, hopefully, was the start of many such opportunities.

There were also other presenters present, who also presented. They also chose my topic.

Matt Mazzone presented about an ad he wrote music for and "deconstructed" his process of "constructing" it. I was surprised at the level of professionalism Matt displayed in his choice of sound and music. It fit the images very well, went along with the story but most importantly: was NOT INTRUSIVE. This shows a mature understanding of the ad as a whole, and you can tell Matt didn't try to write a masterpiece, but rather, complement a video. The music and sounds were well chosen and worked well, and his other ads were also good. Good stuff!

Frederick May also presented about popular music and how its systematic construction can effectively make it a hit song or not. He went on to analyse (DEC on Struction) some hit songs, made some generalizations about number 1 singles, and explained why they work. He claimed that every number 1 single shares a number of factors that make it stand out, and every B side that accompanies those lacks those exact elements. Although this is probably true to an extent, it is hard to believe there are no exceptions. For example, what if a B side makes it to number 1 in a country but nowhere else? Does that mean that country is wrong? Does that song HAVE the features of a number 1 or not? Frederick, however, made a lot of interesting points about how these songs are constructed and played some music to back up his claims. He was a good public speaker and fun to listen to, but the generalisations were a bit too generalisatory (told a general story, or told a general a story, or story about a general).

Generally speaking, the presentations were presented well, and everybody in the audience had a great time. I am looking forward to next week's speakers, who will deconstruct this topic for us a bit more.

Music Technology Forum Presentation, EMU Space. Lecture presented at the University of Adelaide, 17th May 2007

Tuesday, May 15, 2007

Week 9 Forum Review

This week the diploma students presented talks on famous producers and their techniques. Fortunately, I was part of a different group, one which formed the audience for Tristan Louth-Robins' presentation on his master's project. To say I was spellbound would be to overstate my feelings, seeing that a number of factors severely detracted from what could have been an enlightening talk.

While Tristan was presenting his concepts, my mind drifted back to my first experiments of a similar nature, and an overwhelming sense of nostalgia filled my sinuses. I was 7 when my first encounter with the avant-garde poked its head. Having noticed that there is a discernible difference in sound quality between having headphones on my head and someone else wearing them, I decided to experiment with this phenomenon. My attempts were, indeed, naive, but these same concepts seem to drive me today. I remember getting my hands on all the headphones I could, then placing them on a tree in a spiral pattern so that each ear speaker is enveloped in a length of the adjacent pair's cord proportional to the diameter of the speaker cone, combined with an algorhythm vaguely combining pi and the golden ratio (give or take a cent or 2... I was only 7). When music is played from a nearby fountain, the frequencies tend to align themselves harmonically, which could be displayed visually if you shined an ultraviolet beam through the water and onto a water-colored fish tank enveloped in a thin layer of silver-coated aluminum. There was some sort of movement in the water when there was wind, so I proclaimed the experiment a success.

Fast-forward 8 years, and I still hadn't gone much further. I was now more interested in the effects of deep house music on sleeping mice, but since there were no mice in my house, I tested it on some insects. Most seemed to move as per usual, and only changed direction noticeably when the speaker was right in front of them or very close. I recorded these movements meticulously, using only my pencil and some teeth marks on it. When converting these movements to frequencies which controlled a MIDI orchestra, I was surprised to see how good the ants were at recreating grand works by John Cage! Seems like there's a bit of Cage in all of us.

Finally in year 12 I realized my first large work, and with the help of the London Symphony Orchestra (thanks Sir Colin Davis!) created something truly inspirational. A pure sine tone at 12,475.66680085 Hz was played loudly in the room while the orchestra, filled with Vitamin E, prepared to improvise randomly, in an effort to witness the effect of a powerful frequency on a supposedly atonal improvisatory setting aided by nutritional supplements. In addition, several live lions were kept nearby to keep the artists from fleeing, and a large picture of John Cage was constantly projected on a star-filled background in the space telescope we were rehearsing in at the time. As time went on, having microphones inside the performers' mouths proved to be a disappointment since the sound quality was poor, and the room mic inside the first violin's boot was not picking up the necessary frequencies. However, the result was exactly as I intended it to the millisecond, and I couldn't be prouder.

Tristan's teapot, however, was out of tune, and it made the whole experience a little less appealing. The idea of focused listening fell on unfocused ears and was therefore nullified. However, it will be interesting to see how the project turns out, because Tristan has some nice ideas; he just needs to think outside the square more (have you tried recording from a helicopter, for example?). Either way, a nice way to spend an afternoon, and I'm looking forward to more innovative concepts.




Tristan Louth-Robins, student talk presented at EMU space, University of Adelaide, 10th May, 2007.

Sunday, May 13, 2007

Week 9 Audio Arts

I still don't know how to host stuff normally but here's the link to download the mix:

http://www.filefactory.com/file/1db2a3/


For this week's task we had to mix the recording we did in class 3 weeks ago. Having gone through the drums in last week's task, this week we were left to focus on the individual instruments (bass, piano, electric guitar and saxophone). To mix it, I first made a copy of every instrument, then muted all the copies. Then I went through the instruments one by one and mixed them with their copy as appropriate.

In the bass, I EQ'd and compressed one copy only, then mixed it with the more natural-sounding original. The bass is mixed to emphasize frequencies around the 100 Hz mark, which means you might not get a good effect on poor headphones, but you can definitely hear it on good ones, because it is the only instrument that hovers around that area. In the piano, I first copied and inverted the figure 8 (having miked it with an omni and a figure 8), then duplicated all tracks. After panning the original figure 8 hard left and right, I inverted the copies of the figure 8 and panned them a softer left and right. One omni stayed in the middle and the other was a bit to the left to complement the guitar, which was panned to the right a little.
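The polarity flips and pans described for the figure-8 pair reduce to two one-line operations on the raw samples: negation for the inversion, and per-channel gain scaling for the pan. A minimal sketch (pure Python, using a simple linear pan law for clarity; real consoles usually use a constant-power law instead):

```python
def invert(samples):
    """Polarity flip: every sample is negated."""
    return [-x for x in samples]

def pan(samples, position):
    """Linear pan: position -1.0 = hard left, 0.0 = centre, 1.0 = hard right.
    Returns (left, right) channel sample lists."""
    left_gain = (1.0 - position) / 2.0
    right_gain = (1.0 + position) / 2.0
    return ([x * left_gain for x in samples],
            [x * right_gain for x in samples])

fig8 = [0.5, -0.25, 0.1]
flipped = invert(fig8)              # the duplicated, inverted copy
hard_l, hard_r = pan(fig8, -1.0)    # original: hard left
soft_l, soft_r = pan(flipped, -0.5) # inverted copy: a softer left
```

Panning the inverted copy less extremely than the original means the two only partially cancel in each channel, which widens the image instead of simply thinning the sound.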

For the guitar, we only had one relatively average track, because there was so much spill. So after copying it, I EQ'd both tracks hard to place the guitar in its own frequency range, somewhere between 1,000 Hz and 1,500 Hz. It had a relatively narrow Q, which made a good contrast with the other instruments, which were often EQ'd very widely. The guitar was also compressed quite a lot.

For the saxophone, we only had one working mic, so I again EQ'd it hard. This time, I emphasized the mid-highs and highs, with a little focus on very low frequencies (200 Hz and under) to keep it from sounding wussy. I panned it to the middle and gave it enough volume to cut through everything else when it played, because it was the main melodic line.

For the drums, I used one of my previous mixes, but copying the 2 overheads and messing around with their volume gave it a lot more breathing space.

Overall the mix sounds very good to me; my only problem was some slight peaking, and I will find out how to fix this next lesson (I didn't know how to turn down the master track, or split sections so that I can turn down the volume of a whole section without ruining the mix). Sounds good to me though; every instrument comes through clearly and they all mix nicely.

Tuesday, May 8, 2007

Week 8 Forum Review

Well, it's back. This blog is ready to rock and will not stop.

Last week we had the privilege of being part of the audience who witnessed the second wave of presentations on the topic of "Gender in Music Technology". This topic continues to fascinate all who are subjected to it, but why? Well, as Stephen Whittington bluntly put it, because technology is a penis extension. Men want a penis extension, and women don't (as much). End of topic... Or is it?

Instead of stating the obvious, the presenters tried to explain this simple concept in other, more distorted ways. Brad Leffler, for example, tried to explain that although there is a difference between men and women anatomically (according to his research), music can also be gender free, and more robot-like (robonatomic). As an example of this, he played some Kraftwerk videos. They were interesting, but would have been more enjoyable had we been offered more mushrooms before watching.

Next, we had Laura Gad talk to us not only from a female perspective, but from a female body as well. She tried to show that men and women portray themselves differently in music, and that this can be seen in the lyrics of songs. She used, as examples of her theory, lyrics from Eminem and Pink. Now Laura, even if we ignore the fact that the bulk of Eminem's and Pink's music is mostly tonal, and therefore irrelevant to this course and musicians in general, we have to point out that Eminem is probably not a good representation of the typical "male musician", and the same goes for Pink. Eminem's whole "thing" is to be really eccentric, which is what made him stand out from the crowd, and Pink is pretty eccentric too. Take, for example, the music of John Cage. Is it ever violent, or offensive? Does he talk about his Lamborghini and hoes? Well, maybe a little. But most of his music is good-natured. My point is that although there is a slight difference between male and female artists' "overall" image, it is subtle, and given the diversity of musical styles today, you can find anything by anyone. In fact, John Cage could probably rap better than Eminem if he put his mind to it:

"Yo, yo, yo, they call me Johnny C, living off mushrooms and tea, rolling dice, killing mice, 123 you rolled 4 twice YEAH!"

Ben Cakebread was next, and he talked about "Queen". Basically he gave an overview of the homosexual element in music, and used the band to show this. Although I would've chosen a different artist to examine (cough, John Cage), it was good to hear some real music in a music technology class. Seriously though, if everyone brought along 1-2 songs and presented why they liked them, played them, then examined or just discussed them and the artists, we would be a lot better off than trying to sneak our interests into unrelated presentations on set topics (like I will be doing in week 10).

The last presentation was made by Peter Kelly, who talked about a lot of random topics too numerous and unrelated to follow. He didn't really know where he was going with it either, so I don't feel bad about saying this. It was like a John Cage piece, where you mix up random bits and randomize where they go, and if this is the feel he was going for, hats off.

Overall, another fantastically stimulating forum, filled with excitement and glory.

References: Stephen Whittington “Music Technology Forum Week 8: Gender in Music Technology” Workshop presented at EMU Space, Level 5, Schultz building, University of Adelaide, 3rd May 2007.

Tuesday, March 20, 2007

Forum Review of Week 3 Forum

Today's forum was a journey. It was an opportunity to wake up, smell the roses, and break the chains of oppression. The revolution has begun.

If a picture says a thousand words, David Harris' work that was performed today said about 1,160 or so; each word more powerful than the last (and the sum of the whole greater than the individual power of more than 1,700 words or so). But he did not choose to use words to say it... instead, David let his music do the talking. And boy, did it tell a story.

What David basically said, for the first time ever, is that music does not NEED to be tonal or metronomical to make sense. For me, this was a revelation. We are diving into unfamiliar territory here, I know, but bear with me. David showed that a work does not need to fit with "traditional" musical ideals, ones that state that music should be something you enjoy participating in, or at least listening to. This "pleasant" idea originated at the start of music, but the human mind has evolved past these naive ideals.

Instead, David sees music as an opportunity to expand your mind. Who would have thought you could grab a group of random notes and just write them all over the page however you feel, for however many instruments the class has, and achieve such a powerful effect? David makes a statement that has not been made before (it hasn't been made about 1,000 times): that music can be music for its own sake. And it's very effective because it's such a new idea! David feels music does not need to feel pleasant, and why should it? Happiness is just a state of mind, one that sometimes makes its presence felt across our daily lives, but how about incremental irritation? How about confused impatience? Or utter, even humorous disbelief? All these feelings, and even some negative ones, were felt during the performance of the work. A true inspiration to the upcoming musician...

We have all heard the Beatles and Led Zeppelin. But have we heard a random splatter of sound, with highlights including verbal expressions such as "the stomach lining"? No. Is there a reason for this?... Of course. We are IGNORANT. Totally ignorant, in a society that promotes art which creates positive feelings. Art can be so much more, or so much less, as this piece expresses. There is only one form of pure music, and that is the one which takes the least brain capacity to write, therefore leaving the rest of the brain to focus on other things, like dinner and teaching composition. If everyone wrote what sounds good, we would be living in fairy land. But what sounds good? That depends on the person, doesn't it? To the common peasant, the commercial sounds of Elvis and Britney Spears are enough to satisfy. But to the gifted savant, the destruction of harmonic language and common sense is only the beginning of the story... We understand the world on a different level. Tonality is not enough for me any more. I hope some day you'll join me. And the world will live as one... JohnnyC, signing off.


...

Supercollider Week 2 Exercise

What doesn't collide, can't SUPERcollide; and what can't supercollide isn't worth colliding with....

SUPERCOLLIDER 3:

The return of the beast...



(
// build a table of note names

var table = ();

value
{
	var semitones = [0, 2, 4, 5, 7, 9, 11];
	var naturalNoteNames = ["c", "d", "e", "f", "g", "a", "b"];

	(0..9).do
	{
		arg o;

		naturalNoteNames.do
		{
			arg c, i;

			var n = (o + 1) * 12 + semitones[i];

			table[(c ++ o).asSymbol] = n;             // natural
			table[(c ++ "s" ++ o).asSymbol] = n + 1;  // sharp
			table[(c ++ "ss" ++ o).asSymbol] = n + 2; // double sharp
			table[(c ++ "b" ++ o).asSymbol] = n - 1;  // flat
			table[(c ++ "bb" ++ o).asSymbol] = n - 2; // double flat
		};
	};
};

"Pitch class and Octave Number, MIDI Note Number, Frequency Value".postln;
a = table.atAll(#[a4].postln).postln; // looks up the MIDI note number -- enter the pitch class and octave number here

a = 2**((a - 69)/12) * 440; // converts the MIDI note number to a frequency value
)
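For anyone who wants to check the maths outside SuperCollider, here is a hypothetical Python sketch (purely illustrative, not part of the patch) of the same note-name table and MIDI-to-frequency conversion; the variable and function names are my own:

```python
# Python version of the SuperCollider note-name table, for illustration only.

semitones = [0, 2, 4, 5, 7, 9, 11]
natural_note_names = ["c", "d", "e", "f", "g", "a", "b"]

table = {}
for o in range(10):                      # octaves 0..9, as in (0..9).do
    for i, c in enumerate(natural_note_names):
        n = (o + 1) * 12 + semitones[i]  # MIDI number of the natural note
        table[f"{c}{o}"] = n             # natural
        table[f"{c}s{o}"] = n + 1        # sharp
        table[f"{c}ss{o}"] = n + 2       # double sharp
        table[f"{c}b{o}"] = n - 1        # flat
        table[f"{c}bb{o}"] = n - 2       # double flat

def midi_to_freq(m):
    """Same formula as the patch: 2 ** ((m - 69) / 12) * 440."""
    return 2 ** ((m - 69) / 12) * 440

print(table["a4"], midi_to_freq(table["a4"]))  # 69 440.0
```

Handy for convincing yourself that a4 really does land on MIDI note 69 and 440 Hz.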

Tuesday, March 13, 2007

Supercollider Week 1 exercise

After much trial and error and learning stuff, a couple of us have managed to get together a working prototype of the MIDI-to-frequency conversion in the infamous SUPERCOLLIDER!!!

Supercollider is the best. If you are in 2nd year I feel sorry for you, cause you won't get to experience the pure joy just yet, but hold on. John Cage be with you.


(
// build a table of note names

var table = ();

value
{
	var semitones = [0, 2, 4, 5, 7, 9, 11];
	var naturalNoteNames = ["c", "d", "e", "f", "g", "a", "b"];

	(0..9).do
	{
		arg o;

		naturalNoteNames.do
		{
			arg c, i;

			var n = (o + 1) * 12 + semitones[i];

			table[(c ++ o).asSymbol] = n;             // natural
			table[(c ++ "s" ++ o).asSymbol] = n + 1;  // sharp
			table[(c ++ "ss" ++ o).asSymbol] = n + 2; // double sharp
			table[(c ++ "b" ++ o).asSymbol] = n - 1;  // flat
			table[(c ++ "bb" ++ o).asSymbol] = n - 2; // double flat
		};
	};
};

a = table.atAll(#[c4]); // change c4 to the pitch class and octave you want

a = 2**((a - 69)/12) * 440; // converts the MIDI note number to a frequency value
)
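As a quick sanity check of that last line (done here in Python purely for illustration, with my own function name), c4 is MIDI note 60, which the formula maps to just under 262 Hz:

```python
# Illustrative Python version of the patch's conversion:
# frequency = 2 ** ((midi - 69) / 12) * 440

def midi_to_freq(m):
    """Convert a MIDI note number to a frequency in Hz (a4 = 69 = 440 Hz)."""
    return 2 ** ((m - 69) / 12) * 440

print(round(midi_to_freq(60), 2))  # c4 (middle C) -> 261.63
print(round(midi_to_freq(69), 2))  # a4 (concert A) -> 440.0
```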

Forum Review: Thursday March 8

This is the first of thousands of forums I will be attending this year, but it will remain the first I wrote my opinion of.

Stephen Whittington raised the question of originality in this forum, and spoke a little about some aspects of what could be considered a fairly debatable topic. He asked us to consider what originality really is, and gave examples of other cultures and how they view the topic. In some traditions, like in the Indian classical music field, one is allowed to explore further away from the basics of music as one gains more experience and truly understands what one is doing. This view was reinforced by a story about a teacher and a boy who chose to improvise extensively, only to be tied to a tree (I wonder what the Indian teacher would've thought about John Cage, who breaks all the rules; like a rockstar, only he breaks even more rules).

This story demonstrates that in some cultures, patience, hard work and gradual improvement are valued more than creativity. When the student is experienced enough to improvise, however, I can't help but wonder if his musical results might be a lot less creative than if he had been allowed and encouraged to create early in his life (as it seems we are encouraged to do in our Western society). I think this might be a sign that classical Indian music is still developing and has a long way to go before reaching its full potential, because I can't help but wonder if Western composers in the Baroque era and earlier might not have been similarly brought up. Slowly, composers started breaking the mould a little, and then came Beethoven, who shattered traditional views and gave way to the rise of real romanticism in music; real freedom. Maybe the Indian Beethoven is still to come? By this I don't in any way mean to disrespect Indian classical music, but to raise an interesting question. For some reason, Indian classical music has a similar effect on me to Baroque or earlier Western classical music, and I am just wondering whether it will stay the same over the next hundred years or so.

Stephen played some music that also made the class wonder about the effect of musical education on the creativity of a person. He specifically played 3 works by Erik Satie, who had had limited theoretical training before creating most of his famous works. However, after attending tertiary musical education later in his life, his works lost some of the innocent flavor that made him original and stand out. I can't help feeling this way sometimes; for some reason, I feel my most original works were done around high school, with the least sophisticated equipment and limited musical knowledge. Of course, you need to know a little, but knowing a lot seems to be able to tame the creative nature. Yes, the music might sound better, or more logical, or even more complex, but it lacks an element that makes it unique, or emotional. This could be due to the fact that a lot of commercial music is based around the same tonal structure, and the more you know about it, the more you want to explore inside it, rather than venturing out and just playing what "sounds" good. And what sounds good to a musician, I am learning, is different from what sounds good to a random person (except John Cage, who sounds good to everyone).

Another factor that I find affects creativity is the amount of choice offered to work with. If you have a limited selection/knowledge of chords, you might make more interesting music because you are trying to express something complex through something simple, which leads to interest! It is interesting because it is different, yet it is not offensive because both the harmonic language and the ideas are valid. It is the same with a choice of instruments; I find that if I have to create a piece for just MIDI instruments, my ideas will be more interesting than when creating for a choice of thousands of synthesizers.

This could be because if you have an idea and say it exactly like you want, it is just boring. There is no tension, therefore no interest. You know what is coming up.

"I am this guy who is really awesome and super rich and I make the best music and I'm so cool and I probably have my own car, but I'm not sure if cars were around when I was living"

is more interesting than:

"I am John Cage."

Capisce?

Then we heard Road Runner, by John Zorn, which sounded OK for an accordion piece, but I don't usually like works that are made up of chopped-up bits from other works; I just don't think they offer anything truly interesting. It also comes down to what originality is, but in my view, taking 50 riffs from 50 places is just not composing. It is researching.


Anyways, a pretty good forum, looking forward to more.




Stephen Whittington, "Music Technology Forum"

Lecture presented at the University of Adelaide, South Australia, 8/03/2007.

Monday, March 12, 2007

Music Technology Year 3 Intro

Just started my awesome new blog at this new address. Anyways, there will be MUCH fun to be had here. Let's hope we have fun. Anyways, I will start posting before Thursday's class in week 3, cause I only just got a new blog because the other one was too old and I forgot my password.

Until next time,
Regards.