SOUNDVISION

As a researcher at the Academy for Theater and Digitality in Dortmund, I developed SOUNDVISION, the open-source successor to CYLvision.

SOUNDVISION was created as an open-source, Unity- and PureData-based artistic toolset for reactive real-time visualizations of sound, music, and movement. The goal was to make performances more visually perceivable, including for deaf audiences.

Visualization of sound is always an interpretation, a translation, a transformation from one sense to another. I explored whether this translation can be generalized and its necessary steps shortened, or whether an absolutely subjective, artistic interpretation is the key factor in making visuals tangible.

It was crucial for me not to use visuals as a mere translation system. My main interest, which has increasingly become my expertise, was in working on music and its visuals simultaneously, with each art form reciprocally informing and inspiring the other. Decisions and actions echoed back and forth throughout the creative process.

Project Presentation @ Akademie für Theater und Digitalität Dortmund

Reactively visualizing all dimensions of a sound is a complex task. Not only simple bipolar parameters such as dynamics, pitch, or articulation were visualized, but also the more complex relations and connections between sounds. Additionally, a sound-reactive virtual embodiment of the performer, provided for example by 3D camera input, further amplified the connection between performer and sound.
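
To illustrate what a reactive parameter mapping of this kind can look like, here is a minimal sketch that maps two assumed analysis values, RMS loudness and pitch, onto visual parameters. The feature names, value ranges, and mapping curves are my own illustrative assumptions, not SOUNDVISION's actual mappings, which live in its Unity scenes and Pd patches.

    // Hedged sketch (not SOUNDVISION's actual code): mapping two assumed
    // sound-analysis values, RMS loudness and pitch, to visual parameters.
    // Feature names, ranges, and mapping curves are illustrative assumptions.
    #include <algorithm>
    #include <cmath>
    #include <cstdio>

    struct VisualParams {
        float scale;  // object size, driven by loudness
        float hue;    // colour, driven by pitch, folded into 0..1
    };

    // Map RMS loudness (0..1) and pitch (Hz) to visual parameters.
    VisualParams mapSoundToVisuals(float rms, float pitchHz) {
        VisualParams v;
        // Loudness -> scale, square-root curve so quiet sounds stay visible.
        v.scale = 0.2f + 1.8f * std::sqrt(std::clamp(rms, 0.0f, 1.0f));
        // Pitch -> hue, spread logarithmically over roughly ten octaves.
        float octaves = std::log2(std::clamp(pitchHz, 20.0f, 20000.0f) / 20.0f);
        v.hue = std::fmod(octaves / 10.0f, 1.0f);
        return v;
    }

    int main() {
        VisualParams v = mapSoundToVisuals(0.25f, 440.0f);  // a quiet A4
        std::printf("scale %.2f hue %.2f\n", v.scale, v.hue);
        return 0;
    }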

Many core elements of the code, such as the SharedMemory Object for Pd, which is necessary for sending analysis data from Pd to Unity, were developed by my collaborator and mentor, Dr. Chikashi Miyama.
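
For readers unfamiliar with this kind of handoff, the sketch below shows how analysis data could, in principle, be passed from a Pd external to Unity through a POSIX shared-memory segment. The segment name, struct layout, and field names are assumptions for illustration only and do not reflect Dr. Miyama's actual SharedMemory Object; on Linux the example may need to be linked with -lrt.

    // Hedged sketch (not the actual SharedMemory Object for Pd): writing
    // analysis data into a POSIX shared-memory segment that a Unity-side
    // reader could map. Segment name, layout, and fields are assumptions.
    #include <fcntl.h>
    #include <sys/mman.h>
    #include <unistd.h>
    #include <cstring>

    struct AnalysisFrame {
        float rms;        // current loudness
        float pitchHz;    // estimated fundamental frequency
        unsigned counter; // incremented per write so the reader can detect updates
    };

    int main() {
        // Create (or open) the segment that the Unity side would map read-only.
        int fd = shm_open("/soundvision_demo", O_CREAT | O_RDWR, 0666);
        if (fd < 0) return 1;
        if (ftruncate(fd, sizeof(AnalysisFrame)) != 0) return 1;

        void* mem = mmap(nullptr, sizeof(AnalysisFrame),
                         PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (mem == MAP_FAILED) return 1;

        // In a real Pd external this write would happen once per analysis
        // block; here a single frame is written for illustration.
        AnalysisFrame frame{0.5f, 440.0f, 1};
        std::memcpy(mem, &frame, sizeof frame);

        munmap(mem, sizeof(AnalysisFrame));
        close(fd);
        return 0;
    }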

Links:

GitHub Repository

https://github.com/strangerattractor/Soundvision_PUBLIC