Abstract:
Computer-generated visualisations can accompany recorded or live music to create novel audiovisual experiences for audiences. We present a system that streamlines the creation of audio-driven visualisations based on audio feature extraction and mapping interfaces. Its architecture comprises three modular software components: a backend (audio plugin), a frontend (3D game-like environment), and middleware (visual mapping interface). We conducted a two-stage user evaluation. Results from the first stage (34 participants) indicate that music visualisations generated with the system complemented the music significantly better than a baseline visualisation. Nine participants took part in the second stage, which involved interactive tasks. Overall, the system yielded an above-average Creativity Support Index (68.1) and a System Usability Scale score (58.6) suggesting that ease of use can be improved. Thematic analysis revealed that participants enjoyed the system’s synchronicity and expressive capabilities, but encountered technical problems and had difficulty understanding the audio feature terminology.
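
To illustrate the kind of feature-extraction-and-mapping pipeline the abstract describes, here is a minimal, hypothetical sketch in Python. It is not the system's actual implementation: the file name `track.wav`, the choice of features (RMS energy and spectral centroid via librosa), and the scale/hue mappings are assumptions made purely for illustration of how extracted audio features could drive visual parameters.

```python
# Hypothetical sketch of audio-feature-to-visual-parameter mapping.
# Not the authors' implementation; file name, features, and mappings are assumed.
import numpy as np
import librosa

# Load an example audio file at its native sample rate (assumed to exist).
y, sr = librosa.load("track.wav", sr=None)

# Extract per-frame features: RMS energy (loudness-like) and spectral centroid ("brightness").
rms = librosa.feature.rms(y=y)[0]
centroid = librosa.feature.spectral_centroid(y=y, sr=sr)[0]

def normalise(x):
    """Rescale a feature vector to [0, 1] so it can drive a visual parameter."""
    return (x - x.min()) / (x.max() - x.min() + 1e-9)

# Example mappings (assumed): loudness -> object scale, brightness -> colour hue.
scale = 0.5 + normalise(rms)        # per-frame scale factor in roughly 0.5..1.5
hue = normalise(centroid) * 360.0   # per-frame hue in degrees, 0..360

# In a real system these values would be streamed to a 3D frontend each frame;
# here we just print the first few frames to show the mapping output.
for t, (s, h) in enumerate(zip(scale, hue)):
    if t >= 5:
        break
    print(f"frame {t}: scale={s:.2f}, hue={h:.1f}")
```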

A video explaining and demonstrating the system is available via YouTube.
For a demo, skip to 5:05.

Paper (Open Access coming soon)
Code