For video editors and motion designers, aligning video and animations to audio can be a tedious, manual process. Syncmaster leverages Adobe Research's audio analysis technology to make audio data actionable, ultimately enabling editors and designers to be more expressive and efficient.
One of the first things the Syncmaster team worked on was a refreshed visualization of the waveform. The waveform is split into three bands: low, mid, and high frequencies. With the new waveform, it's possible to pick out individual instruments or notes in the music that weren't visible before.
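To illustrate the idea of a three-band waveform, here is a minimal sketch of splitting a signal into low, mid, and high bands with Butterworth filters. This is not Syncmaster's implementation; the crossover frequencies (250 Hz and 4 kHz) and filter order are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def split_bands(audio, sr, low_cut=250.0, high_cut=4000.0):
    """Split a mono signal into low/mid/high bands.

    Cutoff frequencies are illustrative, not Syncmaster's actual values.
    """
    nyq = sr / 2.0
    sos_low = butter(4, low_cut / nyq, btype="lowpass", output="sos")
    sos_mid = butter(4, [low_cut / nyq, high_cut / nyq], btype="bandpass", output="sos")
    sos_high = butter(4, high_cut / nyq, btype="highpass", output="sos")
    return (sosfilt(sos_low, audio),
            sosfilt(sos_mid, audio),
            sosfilt(sos_high, audio))

# Demo: a 440 Hz tone (roughly where many melodic instruments sit)
# should land mostly in the mid band.
sr = 22050
t = np.linspace(0, 1, sr, endpoint=False)
tone = np.sin(2 * np.pi * 440 * t)
low, mid, high = split_bands(tone, sr)
```

Rendering each band's envelope in a different color is what lets an instrument that lives in one frequency range stand out visually from the rest of the mix.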
The expanded audio timeline automatically detects beats, events, and amplitude. From there, editors can select which of these elements they want to sync their edits to.
We found that audio data is a key driving force in the editing and syncing of videos and motion graphics, and that people want to use this data to choreograph their visuals with an audio track's tempo, composition, and dynamics.