This week we have to master an electronic piece, and luckily we get to try it on one of mine. The piece I chose sounds fairly weak on purpose and lacks a lot of punch, but it might be pumped up a bit with mastering. I hadn't started mastering it before posting this, so this is where I'm starting from too:
Enjoy! :D
http://www.box.net/shared/mytyldcb9q
Wednesday, May 30, 2007
Tuesday, May 29, 2007
Week 11 Forum Review
Well, deconstruct me a constructor, we had another week of deconstruction talks. I found the deconstruction element to feature more widely in these talks as opposed to construction. What do people have against construction? Is it because my house is next to one getting built and you don't want to offend me? Or is it because we attend the CON servatorium, and we feel we need to make up for the missing DE in our lives?
Either way, Simon Whitelock was first. I don't understand the "DJ" movement, and I have to say I never heard of my mentor using such commercial equipment. All Cage needed was a bowl and some dice, as well as 1-2 well-prepared mushroom meal ideas. Instead, we are hearing talks about people stealing ideas from other people (JACKONSTRUCTING) and CONSTRUCTING a new work from them. Is this a good idea? All I know is that if it's not random, it's not worth doing. The devil's work, such as DJing, tries to allure you into a world of magical logical harmony and constant beats. The chance of the bass drum coming up on exactly the first crotchet of every bar, if you are randomizing it to semiquaver levels, is 1 in 2^16, and the fact that this occurs throughout the piece PROVES it is not random. Also, tonality seems to live and breathe through this music, and this is something we need not encourage. Free yourselves. Take a break from this gibberish, I say, take a chance and roll the dice.
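For the probability-minded, here's a quick back-of-the-envelope sketch in Python (my own arithmetic, not anything from the talk): with 16 semiquaver slots per bar and one hit thrown uniformly at random, the chance of landing on beat 1 of every single bar drops to 1 in 2^16 after just four bars.

```python
# Sketch: probability that a uniformly random semiquaver-level hit lands
# on the first crotchet (slot 0) of every bar. Per bar it's 1/16, so
# across n bars it's (1/16)**n -- four bars already gives 1 in 2^16.
from fractions import Fraction

def chance_of_downbeat_every_bar(bars: int, slots_per_bar: int = 16) -> Fraction:
    """Probability a random hit falls on slot 0 in every one of `bars` bars."""
    return Fraction(1, slots_per_bar) ** bars

print(chance_of_downbeat_every_bar(4))  # 1/65536, i.e. 1 in 2^16
```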
Nathan Shea was next, and the presence of strong noise content in poor-quality recordings was a welcome change from the bestiality we had been subjected to. I could relate this to my own interests, and if the guitars had been replaced with a sine wave modulated by the drums (which could be teapots), we would be on to something.
Last and best, John Delay. A man among men. A trooper against adversity. The real deal. If we had a war, John should be the commander in chief. A veteran of rhythm and a patriot of harmony. John Delay is all this, and was even more in his eye-opening presentation. Any music with little percussive content and slow, constant change is to be appreciated, but the examples he played gave me a tingling in my ningling I hadn't experienced since "4'33: Live aus Berlin" came out. A wonderful expansion of the senses, and one that made sense. What a way to end the day, and what a day it was.
Sunday, May 27, 2007
Week 11 Audio Arts
This week we had to master a stereo file we previously mixed. I chose to use this mix:
http://www.box.net/shared/rsj69b7gji
(last week's submission)
And here are the 3 mixes I tried doing. The 2nd might be the best, but I'm not happy with any of them.
Mix 1: http://www.box.net/shared/qdk3812ban (Too much high frequency adds to the peaks... however, it also drowns them out in a weird way, and the effect is less obvious than I thought it would be)
Mix 2: http://www.box.net/shared/8viqk1ea44 (Better than mix 1: fewer highs, more compression. However, the piano sounds less natural than I would like and a lot worse than the original, so by taking the annoying 15,000 Hz frequency out I killed the piano sound)
Mix 3: http://www.box.net/shared/42u0xoemnb (Highs are reduced, then amplified with a 2nd EQ later on... the result is that the annoying peak is still there. Sounds more open than the 2nd mix, but the peaks are a lot more obvious)
I found it very difficult to get a decent result. I tried a lot of different EQ and compressor settings, but the fact that the recording was not perfect (it had some peaks) was accentuated with every attempt. This was especially evident in the piano. In addition, the mixed version (not mastered) sounded solid to me and I could find little to actually fix. The drums were probably the weakest instrument, and the bass could have had a little more volume higher up (around 100-150 Hz), but apart from that I was very happy. The mastering was mainly an effort to take the peaks out and compress the track a little. I also wanted to give it just a little more top end and bass, but not much. However, whenever I took out the piano peaks (which sit around 15,000 Hz) by EQing them heavily, the overall sound suffered a LOT, especially in the piano and drums (mainly the snare). To make up for it, I tried adding a 2nd EQ to boost the overall high frequency range (over 10,000 Hz) after taking out the 15,000 Hz, but the result was still flat. Also, I found no way of boosting the drums or making them sound better. Basically, here are the 3 attempts I went through, but I still prefer the original (especially since the addition of any EQs either flattens the sound by taking the peaks away or makes them even MORE obvious). Compression also adds to the peaks. Overall I tried a lot of different things, but I would like to learn more about this topic before I am happy with my skills. Luckily we're doing more next week!
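The notch-then-boost chain described above can be sketched in Python with scipy. This is NOT the actual plugin chain or settings from the session, just the same idea: a narrow cut at the 15,000 Hz piano peak, a gentle mix-in of the band over 10,000 Hz to bring some air back, and soft limiting for the remaining peaks. The sample rate and all filter values are my assumptions.

```python
# Sketch of the mastering moves described in the post (assumed values).
import numpy as np
from scipy import signal

FS = 44100  # assumed sample rate in Hz

def master_sketch(audio: np.ndarray) -> np.ndarray:
    # 1. Surgical cut: narrow notch centred on the offending 15 kHz
    b, a = signal.iirnotch(w0=15000.0, Q=8.0, fs=FS)
    out = signal.lfilter(b, a, audio)
    # 2. Second EQ: mix a little of the >10 kHz band back in for air
    b_hp, a_hp = signal.butter(2, 10000.0, btype="high", fs=FS)
    out = out + 0.15 * signal.lfilter(b_hp, a_hp, out)
    # 3. Soft limiting so the remaining peaks don't clip
    return np.tanh(out)

# A 15 kHz test tone should come out heavily attenuated by the notch.
t = np.arange(FS) / FS
tone = 0.5 * np.sin(2 * np.pi * 15000.0 * t)
processed = master_sketch(tone)
```

The trade-off the post runs into is visible here too: the narrower the notch (higher Q), the less the surrounding highs suffer, but a wide notch flattens the whole top end, snare included.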
Tuesday, May 22, 2007
Week 10 Forum Review
Last week's forum was the best because I got to do a presentation. Everybody thought I was super cool, and I totally was. It made everyone feel happy and eager to listen to more talks. My talk was about Construction and Deconstruction, a topic I suggested we study because I saw great learning potential... was I ever right! From the moment I started talking, I could see in people's eyes a glowing, growing passion for learning. From my talk they learned about construction in musical pieces (the introduction), deconstruction in musical pieces (the end bit) and thousands more topics. It was great to finally get out there and publicly express these burning feelings about construction and deconstruction. This could have been the perfect topic for me for a number of reasons. Firstly, I've been constructing all my life. I've been constructing more cells as I grew, I've been constructing musical works, and I've especially been constructing an eagerness to get these feelings out about construction and deconstruction. To my own credit, I've also been deconstructing a lot through my short career: deconstructing plates when I broke them, deconstructing nuclear reactors, and deconstructing food through a powerful but effective digestive system. If I had more time, I would've deconstructed a lot more, but for my young age this is a start... In the future, I look forward to constructing a lot more opportunities for me to deconstruct this topic for young audiences. This, hopefully, was the start of many such opportunities.
There were also other presenters present, who also presented. They also chose my topic.
Matt Mazzone presented about an ad he wrote music for and "deconstructed" his process of "constructing" it. I was surprised at the level of professionalism Matt displayed in his choice of sound and music. It fit the images very well and went along with the story, but most importantly: it was NOT INTRUSIVE. This shows a mature understanding of the ad as a whole, and you can tell Matt didn't try to write a masterpiece, but rather to complement a video. The music and sounds were well chosen and worked well, and his other ads were also good. Good stuff!
Frederick May also presented, about popular music and how its systematic construction can effectively make it a hit song or not. He went on to analyse (DEC on Struction) some hit songs and made some generalisations about number 1 singles and why they work. He claimed that every number 1 single shares a number of factors that make it stand out, and that every B side accompanying those singles lacks those exact elements. Although this is probably true to an extent, it is impossible to think there are no exceptions. For example, what if a B side makes it to number 1 in one country but nowhere else? Does that mean that country is wrong? Does that song HAVE the features of a number 1 or not? Frederick, however, made a lot of interesting points about how these songs are constructed and played some music to back up his claims. He was a good public speaker and fun to listen to, but the generalisations were a bit too generalisatory (told a general story, or told a general a story, or a story about a general).
Generally speaking, the presentations were presented well, and everybody in the audience had a great time. I am looking forward to next week's speakers, who will deconstruct this topic for us a bit more.
Music Technology Forum Presentation, EMU Space. Lecture presented at the University of Adelaide, 17th May 2007
Tuesday, May 15, 2007
Week 9 Forum Review
This week the diploma students presented talks on famous producers and learned about their techniques. Fortunately, I was part of a different group, one which formed the audience to Tristan Louth-Robins' presentation on his masters project. To say I was spellbound would be to overstate my feelings, seeing that a number of factors severely detracted from what could have been an enlightening talk.
While Tristan was presenting his concepts, my mind drifted back to my first experiments of a similar nature, and an overwhelming sense of nostalgia filled my sinuses. I was 7 when my first encounter with the avant-garde poked its head. Having noticed there was a discernible difference in sound quality between having headphones on my head and someone else wearing them, I decided to experiment with this phenomenon. My attempts were, indeed, naive, but these same concepts seem to drive me today. I remember getting my hands on all the headphones I could, then placing them on a tree in a spiral pattern so that each ear speaker was enveloped in a length of the adjacent pair's cord proportional to the diameter of the speaker cone, combined with an algorithm vaguely combining pi and the golden ratio (give or take a cent or 2... I was only 7). When music was played from a nearby fountain, the frequencies tended to align themselves harmonically, which could be displayed visually if you shone an ultraviolet beam through the water and onto a water-colored fish tank enveloped in a thin layer of silver-coated aluminum. There was some sort of movement in the water when there was wind, so I proclaimed the experiment a success.
Fast-forward 8 years, and I still hadn't gone much further. I was now more interested in the effects of deep house music on sleeping mice, but since there were no mice in my house, I tested it on some insects. Most seemed to move as per usual, and only changed direction noticeably when the speaker was right in front of them or very close. I recorded these movements meticulously, using only my pencil and some teeth marks on it. When I converted these movements to frequencies controlling a MIDI orchestra, I was surprised to see how good the ants were at recreating grand works by John Cage! Seems like there's a bit of Cage in all of us.
Finally, in year 12, I realized my first large work, and with the help of the London Symphony Orchestra (thanks Sir Colin Davis!) created something truly inspirational. A pure sine tone at 12,475.66680085 Hz was played loudly in the room while the orchestra prepared to improvise randomly, filled with Vitamin E, in an effort to witness the effect of a powerful frequency on a supposedly atonal improvisatory setting aided by nutritional supplements. In addition, several live lions were kept nearby to keep the artists from fleeing, and a large picture of John Cage was constantly projected on a star-filled background in the space telescope we were rehearsing in at the time. As time went on, having microphones inside the performers' mouths proved to be a disappointment, since the sound quality was poor and the room mic inside the first violin's boot was not picking up the necessary frequencies. However, the result was exactly as I intended it, to the millisecond, and I couldn't be prouder.
Tristan's teapot, however, was out of tune, and it made the whole experience a little less appealing. The idea of focused listening fell on unfocused ears and was therefore nullified. However, it will be interesting to see how the project turns out, because Tristan has some nice ideas; he just needs to think outside the square more (have you tried recording from a helicopter, for example?). Either way, a nice way to spend an afternoon, and I'm looking forward to more innovative concepts.
Tristan Louth-Robins, student talk presented at EMU space, University of Adelaide, 10th May, 2007.
Sunday, May 13, 2007
Week 9 Audio Arts
I still don't know how to host stuff normally but here's the link to download the mix:
http://www.filefactory.com/file/1db2a3/
For this week's task we had to mix the recording we did in class 3 weeks ago. Having gone through the drums in last week's task, this week we were left to focus on the individual instruments (bass, piano, electric guitar and saxophone). To mix it, I first made a copy of every instrument, then muted all the copies. Then I went through the instruments one by one and mixed them with their copies as appropriate.
For the bass, I EQ'd and compressed one copy only, then mixed it with the more natural-sounding original. The bass is mixed to emphasize frequencies around the 100 Hz mark, which means you might not get a good effect on poor headphones, but you can definitely hear it on good ones, because it is the only instrument that hovers around that area. For the piano (having miked it with an omni and a figure-8), I duplicated all the tracks. After panning the original figure-8 hard left and right, I inverted the copies of the figure-8 and panned them a softer left and right. One omni stayed in the middle and the other sat a bit to the left to complement the guitar, which was panned to the right a little.
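The duplicate-invert-and-pan trick can be illustrated with a small numpy sketch. The arrays below are placeholder signals standing in for the real piano tracks, and the simple linear pan law is my own choice, not whatever the DAW actually uses:

```python
# Sketch of the piano widening trick: duplicate the figure-8 track, flip
# its polarity, and pan original and copy to opposite sides of the field.
import numpy as np

def pan(mono: np.ndarray, pos: float) -> np.ndarray:
    """Place a mono track in the stereo field. pos: -1 hard left ... +1 hard right."""
    left = mono * (1.0 - pos) / 2.0
    right = mono * (1.0 + pos) / 2.0
    return np.stack([left, right], axis=1)

# Stand-in mono signals for the piano mics (not the actual recordings)
fig8 = np.sin(np.linspace(0.0, 2.0 * np.pi, 1000))
omni = np.cos(np.linspace(0.0, 2.0 * np.pi, 1000))

piano_bus = (
    pan(fig8, -1.0)    # original figure-8 track, hard left
    + pan(-fig8, 0.4)  # polarity-inverted copy, a softer right
    + pan(omni, 0.0)   # one omni stays in the middle
)
```

Because the inverted copy partially cancels the original where the two pans overlap, the left-right difference grows and the image widens; push the copy's level too far, though, and the piano starts to hollow out in mono.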
For the guitar, we only had 1 relatively average track, because there was so much spill. So after copying it, I EQ'd both tracks hard to place the guitar in its own frequency range, somewhere between 1,000 and 1,500 Hz. I used a relatively narrow Q, because it made a good contrast to the other instruments, which were often EQ'd very widely. The guitar was also compressed quite a lot.
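The "own frequency range, narrow Q" idea can be sketched with scipy's resonant peak filter, which keeps a narrow band and rolls off everything around it. The centre frequency and Q below are my guesses at plausible values, not the settings actually used in the mix:

```python
# Sketch of carving the guitar into its own narrow band (assumed values).
import numpy as np
from scipy import signal

FS = 44100  # assumed sample rate in Hz

def carve_guitar(audio: np.ndarray, centre: float = 1250.0, q: float = 2.5) -> np.ndarray:
    """Keep a narrow band around `centre` Hz and attenuate everything else."""
    b, a = signal.iirpeak(w0=centre, Q=q, fs=FS)
    return signal.lfilter(b, a, audio)

t = np.arange(FS) / FS
in_band = carve_guitar(np.sin(2 * np.pi * 1250.0 * t))   # passes nearly unchanged
out_band = carve_guitar(np.sin(2 * np.pi * 8000.0 * t))  # strongly attenuated
```

This is also one way to tame spill: anything the guitar mic picked up outside that band gets pulled down along with it.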
For the saxophone, we only had 1 working mic, so I again EQ'd it hard. This time, I emphasized the mid-highs and highs, with a little focus on the very low frequencies (200 Hz and under) to keep it from sounding wussy. I panned it to the middle and gave it enough volume to cut through everything else when it played, because it was the main melodic line.
For the drums, I used one of my previous mixes, but copying the 2 overheads and messing around with their volume gave it a lot more breathing space.
Overall the mix sounds very good to me; my only problem was some slight peaking, and I will find out how to fix this next lesson (I didn't know how to turn down the master track, or split sections so that I could turn the volume of a whole section down without ruining the mix). Sounds good to me though: every instrument comes through clearly and they all mix nicely.
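The master-track fix amounts to a single gain on the summed mix. Here is a numpy sketch of the idea (my own illustration of the concept, not how any particular DAW exposes it): measure the true peak and scale the whole mix so the loudest sample sits just under full scale, leaving the balance between instruments untouched.

```python
# Sketch: pull an over-hot mix under full scale with one master gain.
import numpy as np

def trim_master(mix: np.ndarray, ceiling_db: float = -0.3) -> np.ndarray:
    """Scale the whole mix so its loudest sample sits at `ceiling_db` dBFS."""
    peak = float(np.max(np.abs(mix)))
    ceiling = 10.0 ** (ceiling_db / 20.0)
    if peak == 0.0 or peak <= ceiling:
        return mix  # already under the ceiling, nothing to fix
    return mix * (ceiling / peak)

hot_mix = 1.4 * np.sin(np.linspace(0.0, 20.0, 4410))  # peaks well over 0 dBFS
safe_mix = trim_master(hot_mix)
```

Because it is one linear gain, it cannot "ruin the mix" the way per-track changes can; it only trades a little overall level for headroom.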
Tuesday, May 8, 2007
Week 8 Forum Review
Well, it's back. This blog is ready to rock and will not stop.
Last week we had the privilege of being part of the audience who witnessed the second wave of presentations on the topic of "Gender in Music Technology". This topic continues to fascinate all who are subjected to it, but why? Well, as Stephen Whittington bluntly put it, because technology is a penis extension. Men want a penis extension, and women don't (as much). End of topic... Or is it?
Instead of stating the obvious, the presenters tried to explain this simple concept in other, more distorted ways. Brad Leffler, for example, tried to explain that although there is a difference between men and women anatomically (according to his research), music can also be gender free, and more robot-like (robonatomic). As an example of this, he played some Kraftwerk videos. They were interesting, but would have been more enjoyable had we been offered more mushrooms before watching.
Next, we had Laura Gad talk to us not only from a female perspective, but from a female body as well. She tried to show that men and women portray themselves differently in music, and that this can be seen in song lyrics. As examples of her theory, she used lyrics from Eminem and Pink. Now Laura, even if we ignore the fact that the bulk of Eminem's and Pink's music is mostly tonal, and therefore irrelevant to this course and musicians in general, we have to point out that Eminem is probably not a good representation of the typical "male musician", and same with Pink. Eminem's whole "thing" is to be really eccentric, which is what made him stand out from the crowd, and Pink is pretty eccentric too. Take, for example, the music of John Cage. Is it ever violent, or offensive? Does he talk about his Lamborghini and hoes? Well, maybe a little. But most of his music is good-natured. My point is that although there is a slight difference between male and female artists' "overall" image, it is subtle, and given the diversity of musical styles today, you can find anything by anyone. In fact, John Cage could probably rap better than Eminem if he put his mind to it:
"Yo, Yo, Yo they call me, Johnny C, living of mushrooms and tea, rolling dice, killing mice, 123 you rolled 4 twice YEAH!"
Ben Cakebread was next, and he talked about "Queen". Basically he gave an overview of the homosexual element in music, and used the band to show this. Although I would've chosen a different artist to examine (cough John Cage), it was good to hear some real music in a music technology class. Seriously though, if everyone brought along 1-2 songs, presented why they liked them, played them, then examined or just discussed them and the artists, we would be a lot better off than trying to sneak our interests into unrelated presentations on set topics (like I will be doing in week 10).
The last presentation was made by Peter Kelly, who talked about a lot of random topics too numerous and unrelated to follow. He didn't really know where he was going with it either, so I don't feel bad about saying this. It was like a John Cage piece, where you mix up random bits and randomize where they go, and if this is the feel he was going for, hats off.
Overall, another fantastically stimulating forum, filled with excitement and glory.
References: Stephen Whittington “Music Technology Forum Week 8: Gender in Music Technology” Workshop presented at EMU Space, Level 5, Schultz building, University of Adelaide, 3rd May 2007.