Published on Thu Feb 28 2019

From Visual to Acoustic Question Answering

Jerome Abdelnour, Giampiero Salvi, Jean Rouat

Abstract

We introduce the new task of Acoustic Question Answering (AQA) to promote research in acoustic reasoning. The AQA task consists of analyzing an acoustic scene composed of a combination of elementary sounds and answering questions about the position and properties of these sounds. The relational questions asked require the models to perform non-trivial reasoning in order to answer correctly. Although similar problems have been extensively studied in the domain of visual reasoning, we are not aware of any previous studies addressing the problem in the acoustic domain. We propose a method for generating the acoustic scenes from elementary sounds, along with a number of relevant questions for each scene, using templates. We also present preliminary results obtained with two models (FiLM and MAC) that have been shown to work for visual reasoning.
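To make the scene and question generation concrete, below is a minimal Python sketch of the idea: a scene is an ordered list of elementary sounds described by categorical attributes, and a question is produced by filling the slots of a text template and computing its answer from the scene's symbolic description. All attribute names, templates, and helper functions here are illustrative assumptions, not the authors' actual generator, which would additionally synthesize the audio and discard ambiguously worded questions.

```python
# Illustrative sketch of template-based AQA data generation.
# Attribute vocabularies and templates are hypothetical, chosen
# only to show the mechanism described in the abstract.
import random

INSTRUMENTS = ["violin", "flute", "trumpet"]
LOUDNESS = ["quiet", "loud"]
NOTES = ["A", "C", "E"]

def make_scene(n_sounds):
    """Sample a scene as an ordered list of elementary sounds,
    each described symbolically by categorical attributes."""
    return [
        {
            "position": i,
            "instrument": random.choice(INSTRUMENTS),
            "loudness": random.choice(LOUDNESS),
            "note": random.choice(NOTES),
        }
        for i in range(n_sounds)
    ]

# Hypothetical relational templates: a question string with slots,
# paired with a function that computes the answer from the scene.
TEMPLATES = [
    ("What instrument plays right after the {loudness} {instrument}?",
     lambda scene, ref: scene[ref["position"] + 1]["instrument"]),
    ("How many sounds are played by a {instrument}?",
     lambda scene, ref: sum(s["instrument"] == ref["instrument"] for s in scene)),
]

def make_question(scene):
    """Instantiate one template against a randomly chosen reference sound.
    A real generator would reject instantiations whose reference
    description matches more than one sound in the scene."""
    text, answer_fn = random.choice(TEMPLATES)
    ref = random.choice(scene[:-1])  # skip the last sound so "after" is defined
    return text.format(**ref), answer_fn(scene, ref)

scene = make_scene(5)
question, answer = make_question(scene)
print(question, "->", answer)
```

Because the answer is computed from the symbolic scene description rather than from the rendered audio, ground-truth labels come for free, which is what makes template-based generation attractive for building datasets of this kind at scale.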

Related Papers

Fri Jun 11 2021
NLP
NAAQA: A Neural Architecture for Acoustic Question Answering
The goal of the Acoustic Question Answering (AQA) task is to answer a free-form text question about the content of an acoustic scene. We introduce NAAQA, a neural architecture that leverages specific properties of acoustic inputs.

Mon Nov 26 2018
Machine Learning
CLEAR: A Dataset for Compositional Language and Elementary Acoustic Reasoning
We introduce the task of acoustic question answering (AQA) in the area of acoustic reasoning. In this task, an agent learns to answer questions on the basis of acoustic context. We provide AQA datasets of various sizes as well as the data generation code.

Thu Nov 21 2019
Machine Learning
Temporal Reasoning via Audio Question Answering
Multimodal question answering tasks can be used as proxy tasks to study systems that can perceive and reason about the world. Answering questions about different types of input modalities stresses different aspects of reasoning such as visual reasoning, reading comprehension, story understanding, or navigation.

Tue Dec 22 2020
Machine Learning
AudioViewer: Learning to Visualize Sound
Sensory substitution can help persons with perceptual deficits. In this work, we attempt to visualize audio with video. Our long-term goal is to create sound perceptions for hearing-impaired people.

Wed Sep 01 2021
NLP
WebQA: Multihop and Multimodal QA
Web search is fundamentally multimodal and multihop. We propose to bridge this gap between the natural language and computer vision communities with WebQA. Our challenge for the community is to create a unified reasoning model.

Thu Jun 24 2021
Computer Vision
AudioCLIP: Extending CLIP to Image, Text and Audio
In the past, the rapidly evolving field of sound classification greatly benefited from the application of methods from other domains. Today, we observe a trend toward fusing domain-specific tasks and approaches, which provides the community with new outstanding models.