Elvin Ou

Week 4-5: First live performance


Continuing to experiment and play around with Max, I selected some of my favorite objects/effects and put them together to see what they would turn into, as inspiration for my midterm project.

From there I had a general idea of the visuals I wanted. My initial idea was to create a performance/experience that is disorienting and "pulling". I was really fascinated by jit.xfade and jit.matrix, which pull and push the pixels of an existing video and recompose them into something really abstract.
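Max is a patching environment rather than text code, but the core of that effect is a weighted crossfade of two frames plus a pixel remap. Here is a minimal Python/OpenCV sketch of the same idea, not my actual patch; the file names clip_a.mov and clip_b.mov and the sine-wave displacement are just stand-ins:

```python
import cv2
import numpy as np

# Hypothetical local clips standing in for the jit.movie sources in Max.
cap_a = cv2.VideoCapture("clip_a.mov")
cap_b = cv2.VideoCapture("clip_b.mov")

xfade = 0.5  # 0.0 = all A, 1.0 = all B (like jit.xfade's xfade attribute)

while True:
    ok_a, frame_a = cap_a.read()
    ok_b, frame_b = cap_b.read()
    if not (ok_a and ok_b):
        break
    frame_b = cv2.resize(frame_b, (frame_a.shape[1], frame_a.shape[0]))

    # Weighted crossfade, the core of what jit.xfade does.
    mixed = cv2.addWeighted(frame_a, 1.0 - xfade, frame_b, xfade, 0)

    # A crude "pull": remap pixels with a sine offset so the image smears.
    h, w = mixed.shape[:2]
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    xs += 30 * np.sin(ys / 40.0)  # horizontal displacement per row
    warped = cv2.remap(mixed, xs, ys, cv2.INTER_LINEAR)

    cv2.imshow("out", warped)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap_a.release()
cap_b.release()
cv2.destroyAllWindows()
```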

After that I thought it would be fun to use camera input instead of videos, but that eventually turned out to be kind of boring to me. I went back to a couple of clips I had shot and re-edited them together. After setting everything up, I decided to stick with the minimal set of objects I had and to use a different medium, audio, to bring some "sense" to this playback system. After researching and checking a couple of videos on YouTube, I found a cool audio setup where, when the audio reaches a certain pitch/peak, it triggers a bang that switches to the next video in a list.

From there I started trying different audio files with it and playing the playback system along with the audio. Then a new problem appeared: I kind of lost my own touch in the experience, since the videos were performing according to the sound. So I decided to add another layer of video and mix everything together to put my own "performance" back into it. The visuals turned out to be even more disorienting, which I actually really enjoyed.
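In the patch this is Max objects doing the work, but the triggering logic is simple enough to sketch outside of Max. A rough Python sketch of the idea, assuming blocks of mono audio samples and a hypothetical clip list; the threshold and cooldown values are made up:

```python
import numpy as np

videos = ["train.mov", "street.mov", "crowd.mov"]  # hypothetical clip list
current = 0

THRESHOLD = 0.6        # normalized peak level that counts as a "hit"
REFRACTORY_BLOCKS = 8  # ignore hits briefly so one beat fires one bang
cooldown = 0

def on_bang():
    """Switch to the next video in the list, wrapping around."""
    global current
    current = (current + 1) % len(videos)
    print("now playing:", videos[current])

def process_block(samples: np.ndarray):
    """Feed successive audio blocks here; fires on_bang() on loud peaks."""
    global cooldown
    peak = float(np.max(np.abs(samples)))
    if cooldown > 0:
        cooldown -= 1
    elif peak > THRESHOLD:
        cooldown = REFRACTORY_BLOCKS
        on_bang()

# Demo: quiet noise with occasional spikes standing in for real audio.
rng = np.random.default_rng(0)
for _ in range(100):
    block = rng.normal(0, 0.1, 512)
    if rng.random() < 0.05:
        block[rng.integers(512)] = 0.9  # fake beat
    process_block(block)
```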

After deciding on all the objects and variables I needed, I started playtesting on a computer screen and then on a 70" TV screen. The playtest did not go that well: I was very surprised by how much the performance lagged when I connected to the screen over HDMI, which would also be the setup for the midterm presentation. The huge delay made the performance almost impossible, since I really needed to "sync" the variable video playback with the audio beats manually. Sadly, that problem remained unsolved by the time the midterm happened.

Going back to playtesting on the computer using AirPlay, the delay problem was solved, so I could actually focus on how the experience felt and what the improvements might be. From that, the biggest thing I decided to focus on was categorizing and organizing the variables in a way that makes "sense". To do that I had to listen to the audio many, many times and sort out the different sections and beats that the videos should correspond to. For example, matching the softer transitional parts with the train video, since it is more one-directional and repetitive, or overlaying two different videos for smaller beat transitions. It was actually pretty fun, and the most amazing part was that every time I playtested I was playing something different.
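One way to keep that organization out of my head would be a simple cue sheet: time regions of the track mapped to the clip (or clip pair) they should trigger. A hypothetical sketch of that mapping in Python; the section boundaries and clip names here are invented for illustration:

```python
# Hypothetical cue sheet: (start_sec, end_sec, clips to mix in that section).
CUES = [
    (0.0, 45.0, ["train.mov"]),                # soft part: one-directional, repetitive
    (45.0, 90.0, ["street.mov", "crowd.mov"]), # smaller beats: overlay two clips
    (90.0, 150.0, ["train.mov", "crowd.mov"]),
]

def clips_for(t: float) -> list[str]:
    """Return which clips should be active at time t in the track."""
    for start, end, clips in CUES:
        if start <= t < end:
            return clips
    return []

print(clips_for(12.0))  # ['train.mov']
print(clips_for(60.0))  # ['street.mov', 'crowd.mov']
```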