Unite All Originals
Adidas
A voice-activated interactive music video for Run DMC and Adidas.
The Unite All Originals experience sees hip-hop legends Run DMC collide with turntablist extraordinaire A-Trak in New York, wreaking creative havoc across the city. Users are invited to create their own visual remix of the video by shouting out things they see in the film, triggering animations and special effects in real time. Each interaction unlocks rewards related to the new FW13 adidas Originals range, which then connects to the Adidas e-commerce platform.
We experimented with the Chrome Web Speech API, which until now has been conceived predominantly for interpreting continuous speech. This presented an interesting challenge for the 'Unite All Originals' experience, where the main input commands were single, unrelated keywords shouted at the video. Working with the newly released API gave us access to interim transcripts, on top of which we built bespoke language-processing tools that evaluate the distance between the expected keywords and what the user actually said, so that every utterance produces a meaningful result in the video.
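The distance evaluation described above could be sketched as a classic edit-distance match against the expected keyword list. This is an illustrative reconstruction, not the production code: the function names, the tolerance value and the keyword set are all assumptions. In the browser, interim transcripts from a `SpeechRecognition` instance (with `interimResults = true`) would be fed into `matchKeyword` as they arrive.

```javascript
// Levenshtein edit distance between two strings (dynamic programming).
function editDistance(a, b) {
  const d = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0))
  );
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      d[i][j] = Math.min(
        d[i - 1][j] + 1,                                   // deletion
        d[i][j - 1] + 1,                                   // insertion
        d[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1)  // substitution
      );
    }
  }
  return d[a.length][b.length];
}

// Return the expected keyword closest to what was heard, or null if
// nothing falls within the tolerance. maxDistance is a guessed value.
function matchKeyword(heard, keywords, maxDistance = 2) {
  let best = null;
  let bestDist = Infinity;
  for (const kw of keywords) {
    const dist = editDistance(heard.toLowerCase(), kw.toLowerCase());
    if (dist < bestDist) {
      best = kw;
      bestDist = dist;
    }
  }
  return bestDist <= maxDistance ? best : null;
}
```

A tolerant match like this lets slightly mis-transcribed interim results still trigger the right animation, while the `null` case avoids firing effects on unrelated speech.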
A simple keyboard input is used as a fallback for users without modern browsers. We chose the keyboard (rather than mouse point-and-click) because it connects a command directly to a result, without the superfluous layer of following a pointer, preserving an undisturbed connection between fingers and eyes. To remove any possible friction while typing, we ran several rounds of user testing to maximize command throughput, which led to a neat auto-completion and suggestion mechanism for user commands.
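The suggestion mechanism for the keyboard fallback could look something like a prefix match over the known command list. This is a minimal sketch under assumed names; the shipped mechanism may rank or fuzz differently.

```javascript
// Suggest up to `limit` commands matching what the user has typed so far.
// Command names here are illustrative, not the real keyword list.
function suggestCommands(partial, commands, limit = 3) {
  const p = partial.trim().toLowerCase();
  if (p === "") return [];
  return commands
    .filter((cmd) => cmd.toLowerCase().startsWith(p))
    .sort()
    .slice(0, limit);
}
```

For example, `suggestCommands("ta", ["taxi", "tag", "train", "boombox"])` returns `["tag", "taxi"]`, so the user can accept a suggestion instead of typing the full keyword, raising command throughput.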
The most demanding and time-consuming tasks in this development were asset implementation and the obsessive, frame-precise craftsmanship of the front-end team. During the process we partnered with Machine Molle, who provided the plates and the cell-animation elements layered on top, but translating these layers into interactivity was incredibly complex. The final result is a melting pot of techniques: frame tracking, rotoscoping, frame masking, obstructions, dynamic vector animations, colour correction and so forth.
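Frame-precise layering of this kind ultimately comes down to knowing which pre-authored overlay is active on which video frame. As a hedged sketch (the data shape, frame rate and names are assumptions, not the project's actual pipeline):

```javascript
// Convert the video element's currentTime (seconds) into a frame index.
// 25 fps is an assumed frame rate for illustration.
function frameAt(currentTime, fps = 25) {
  return Math.floor(currentTime * fps);
}

// Given overlays authored as frame ranges, e.g.
// [{ name: "boombox-mask", inFrame: 100, outFrame: 150 }],
// return those active on the given frame.
function activeOverlays(frame, overlays) {
  return overlays.filter((o) => frame >= o.inFrame && frame <= o.outFrame);
}
```

On each render tick, the player would call `activeOverlays(frameAt(video.currentTime), overlays)` and draw only the masks and cell animations whose frame ranges cover the current frame, keeping the interactive layers locked to the footage.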
The end result provides hundreds of animations, triggerable through more than 15,000 keywords localized in over 50 languages. Hopefully it is also a step forward in understanding how speech can be used as a tool for interacting with video content.
- Post Production
- Sound Design
FWA Site of the Day