Published on Thu Dec 01 2016

Video Captioning with Multi-Faceted Attention

Xiang Long, Chuang Gan, Gerard de Melo

Video captioning has been attracting an increasing amount of interest. While existing methods rely on different kinds of visual features, they do not fully exploit relevant semantic information. Our novel architecture builds on LSTMs with several attention layers and two multimodal layers.

Abstract

Recently, video captioning has been attracting an increasing amount of interest, due to its potential for improving accessibility and information retrieval. While existing methods rely on different kinds of visual features and model structures, they do not fully exploit relevant semantic information. We present an extensible approach to jointly leverage several sorts of visual features and semantic attributes. Our novel architecture builds on LSTMs for sentence generation, with several attention layers and two multimodal layers. The attention mechanism learns to automatically select the most salient visual features or semantic attributes, and the multimodal layer yields overall representations for the input and outputs of the sentence generation component. Experimental results on the challenging MSVD and MSR-VTT datasets show that our framework outperforms the state-of-the-art approaches, while ground-truth-based semantic attributes are able to further elevate the output quality to a near-human level.
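The paper does not include code, so the following is only a minimal PyTorch sketch of the general idea described in the abstract: a soft attention layer that weights a set of feature vectors (visual features or semantic attributes) by the decoder LSTM's hidden state, plus a multimodal layer that fuses the attended contexts into one representation for the word predictor. All layer sizes, the additive tanh scoring, and the concatenation-based fusion are illustrative assumptions, not the authors' exact design.

```python
# Minimal sketch (not the authors' code): attention over per-modality feature sets,
# followed by a multimodal fusion layer. Dimensions and scoring are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionLayer(nn.Module):
    """Soft attention over N feature vectors, conditioned on the LSTM hidden state."""
    def __init__(self, feat_dim, hidden_dim, attn_dim=256):
        super().__init__()
        self.feat_proj = nn.Linear(feat_dim, attn_dim)
        self.hidden_proj = nn.Linear(hidden_dim, attn_dim)
        self.score = nn.Linear(attn_dim, 1)

    def forward(self, feats, hidden):
        # feats: (batch, N, feat_dim); hidden: (batch, hidden_dim)
        energy = torch.tanh(self.feat_proj(feats) + self.hidden_proj(hidden).unsqueeze(1))
        weights = F.softmax(self.score(energy).squeeze(-1), dim=1)   # (batch, N)
        context = torch.bmm(weights.unsqueeze(1), feats).squeeze(1)  # (batch, feat_dim)
        return context, weights

class MultimodalLayer(nn.Module):
    """Fuses attended contexts from several modalities into a single representation."""
    def __init__(self, dims, out_dim):
        super().__init__()
        self.proj = nn.Linear(sum(dims), out_dim)

    def forward(self, contexts):
        return torch.tanh(self.proj(torch.cat(contexts, dim=-1)))

# Toy usage with two hypothetical modalities: frame-level CNN features and
# semantic-attribute embeddings (shapes chosen for illustration only).
batch, hidden_dim = 2, 512
visual = torch.randn(batch, 28, 2048)   # e.g. per-frame appearance features
attrs = torch.randn(batch, 300, 300)    # e.g. semantic attribute embeddings
h = torch.randn(batch, hidden_dim)      # current decoder LSTM hidden state

attend_visual = AttentionLayer(2048, hidden_dim)
attend_attrs = AttentionLayer(300, hidden_dim)
fuse = MultimodalLayer([2048, 300], hidden_dim)

ctx_v, _ = attend_visual(visual, h)
ctx_a, _ = attend_attrs(attrs, h)
fused = fuse([ctx_v, ctx_a])            # would feed the LSTM / word predictor
print(fused.shape)                      # torch.Size([2, 512])
```

In this sketch, each modality gets its own attention layer so the model can select the most salient frames or attributes per decoding step, and the fused vector is what the multimodal layer would pass on to the sentence-generation LSTM.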

Wed May 08 2019
Computer Vision
Multimodal Semantic Attention Network for Video Captioning
We propose a Multimodal Semantic Attention Network (MSAN), a new encoder-decoder framework that incorporates multimodal semantic attributes for video captioning. We employ an attention mechanism to attend to different attributes at each time step of the captioning process.
Sun Apr 15 2018
Artificial Intelligence
Watch, Listen, and Describe: Globally and Locally Aligned Cross-Modal Attentions for Video Captioning
A major challenge for video captioning is to combine audio and visual cues. We propose a novel hierarchically aligned cross-modal attention (HACA) framework to learn and selectively fuse both global and local temporal dynamics of different modalities.
Thu Nov 17 2016
Computer Vision
Multimodal Memory Modelling for Video Captioning
Video captioning, which automatically translates video clips into natural language sentences, is a very important task in computer vision. In this paper, we propose a Multimodal Memory Model (M3) to describe videos, which builds a visual and textual shared memory.
Wed Dec 26 2018
Computer Vision
Hierarchical LSTMs with Adaptive Attention for Visual Captioning
Most existing decoders apply the attention mechanism to every generated word, including both visual words (e.g., "gun" and "shooting") and non-visual words. We propose a hierarchical LSTM with adaptive attention (hLSTMat) approach for image and video captioning.
Fri Nov 01 2019
Machine Learning
Low-Rank HOCA: Efficient High-Order Cross-Modal Attention for Video Captioning
Attention-based encoder-decoder structures have been widely used in video captioning. In the literature, the attention weights are often built from the information of a single modality, neglecting the association relationships between multiple modalities.
Wed Nov 23 2016
Computer Vision
Video Captioning with Transferred Semantic Attributes
Long Short-Term Memory with Transferred Semantic Attributes (LSTM-TSA) is a novel deep architecture that incorporates the transferred semantic attributes learnt from images and videos into the CNN plus RNN framework. The design is inspired by the fact that semantic attributes make a significant contribution to captioning.