Interactive Visualization
Here you can see a 10-minute excerpt of the original output data of the neural network developed for Deep.Dance. During the rehearsal process, the dancers used this tool, with a few additions, to learn and transcribe the exact movements of their figure. Each dancer developed their own approach to translating almost sixty minutes of fairly random movement into an audio transcript: all movements of the AI figure were turned into a personal audio code of countdowns, spoken images, instructions and rhythmic sounds. This code was then synchronised with the original timing of the AI choreography, and each dancer listened to their audio track via an earpiece during the show, allowing the most accurate possible interpretation of the AI choreography.
Press and hold the left mouse or touchpad button and drag to look around; scroll to zoom in or out.
AI choreography, figure A (10 min) - transcribed and performed by Girish Kumar
AI choreography, figure B (10 min) - transcribed and performed by Raymond Liew Jin
AI choreography, figure C (10 min) - transcribed and performed by Lisa Rykena
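The controls described above correspond to a standard orbit-camera setup: dragging rotates the view around the figures, scrolling zooms in and out. Below is a minimal sketch of such a viewer, assuming three.js and its OrbitControls; the actual Deep.Dance tool, its data format and the playback of the network output are not shown here, and the handful of points is only a stand-in for one figure.

```ts
// Minimal orbit-and-zoom viewer sketch (assumed: three.js + OrbitControls).
import * as THREE from 'three';
import { OrbitControls } from 'three/examples/jsm/controls/OrbitControls.js';

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(
  60, window.innerWidth / window.innerHeight, 0.1, 100
);
camera.position.set(0, 1.6, 4);

const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

// Left-drag rotates the view, scrolling zooms (dollies) in and out.
const controls = new OrbitControls(camera, renderer.domElement);
controls.enablePan = false;

// Stand-in for a figure: a few skeleton-like points (not the real data).
const geometry = new THREE.BufferGeometry().setFromPoints([
  new THREE.Vector3(0, 0, 0),
  new THREE.Vector3(0, 1, 0),
  new THREE.Vector3(0, 1.7, 0),
  new THREE.Vector3(-0.4, 1.4, 0),
  new THREE.Vector3(0.4, 1.4, 0),
]);
const figure = new THREE.Points(
  geometry,
  new THREE.PointsMaterial({ size: 0.08, color: 0xffffff })
);
scene.add(figure);

renderer.setAnimationLoop(() => {
  controls.update();
  renderer.render(scene, camera);
});
```

In a viewer like this, the per-frame joint positions of each figure would replace the static points and be advanced in the animation loop, but that playback logic is specific to the original tool and its data.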