Conversational Sonification

Overview

Data sonification is the practice of communicating data through sound: graphs for your ears. But unlike visualizations, which many of us have been reading since we were young, sonifications require most of us to develop analytic listening skills before we can interpret them. What if a voice assistant could help you craft personalized sounds that meaningfully represent data, help you train your ear to listen to them, and respond as you ask questions about the data you are hearing? These kinds of interactions could support people as they develop new capacities to learn and think using technology, as well as encourage general audiences to engage more deeply with data and sound.

This project is led by PhD candidate Jordan Wirfs-Brock as part of her dissertation research, advised by Brian Keegan, Laura Devendorf, Arturo Cortez, Jofish Kaye, and Sarah Mennicken.

Key Research Questions

  • What interaction and narrative patterns can support people in learning how to listen closely to music, sonifications, and other sounds?
  • How might we design interactive, conversational voice recipes that scaffold people as they learn how to listen to data narratives?
  • How do people’s relationships with interactive, voice-based data narratives support sensemaking over time?

From Radio Narratives to Interactive Voice Assistants

The goal of this project is to develop new interaction technologies that let people engage with sonifications in a conversational setting. Ultimately, we will use voice assistants as a platform for exploration, but we are building from how people explain sonifications conversationally on the radio. Here are some example sonifications embedded in an interview narrative for the radio show Marketplace:

  • [LINK 1]
  • [LINK 2]

After these radio pieces aired, we analyzed them retrospectively to identify design principles that could be used to adapt conversational sonification to voice assistants. This work will be published in a forthcoming issue of TOCHI.

Video: Data and Sound Recipes

In this video, from the Loud Numbers Sonification Festival, Jordan explains conversational sonification and presents interactive data and sound recipes in front of a live audience.

[VIDEO EMBED - Need to figure out how to do in markdown]
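One possible way to handle the embed (an assumption about the site setup, not something specified here): most markdown renderers, including the one GitHub Pages uses, pass raw HTML through unchanged, so the talk could be embedded with an iframe. The VIDEO_ID below is a hypothetical placeholder, not the actual video ID:

```html
<!-- Hypothetical embed: replace VIDEO_ID with the ID of the Loud Numbers talk -->
<iframe width="560" height="315"
        src="https://www.youtube.com/embed/VIDEO_ID"
        title="Data and Sound Recipes (Loud Numbers Sonification Festival)"
        frameborder="0" allowfullscreen>
</iframe>
```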

Publications

To add: ICAD, DIS doctoral consortium…

Published: