Published on Fri Nov 20 2020

ATSal: An Attention Based Architecture for Saliency Prediction in 360 Videos

Yasser Dahou, Marouane Tliba, Kevin McGuinness, Noel O'Connor

ATSal is a novel attention-based (head-eye) saliency model for 360° video. The attention mechanism explicitly encodes global static visual attention, allowing expert models to focus on learning saliency on local patches throughout consecutive frames.

Abstract

The spherical domain representation of 360° video/images presents many challenges related to the storage, processing, transmission, and rendering of omnidirectional videos (ODV). Models of human visual attention can be used so that only a single viewport is rendered at a time, which is important when developing systems that allow users to explore ODV with head-mounted displays (HMD). Accordingly, researchers have proposed various saliency models for 360° video/images. This paper proposes ATSal, a novel attention-based (head-eye) saliency model for 360° videos. The attention mechanism explicitly encodes global static visual attention, allowing expert models to focus on learning saliency on local patches throughout consecutive frames. We compare the proposed approach to other state-of-the-art saliency models on two datasets: Salient360! and VR-EyeTracking. Experimental results on over 80 ODV videos (75K+ frames) show that the proposed method outperforms the existing state-of-the-art.
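
The two-stream design the abstract describes can be sketched in a few lines of PyTorch. Everything below is an illustrative assumption rather than the authors' exact architecture: the layer sizes, the module names GlobalAttentionStream and ExpertStream, and the simple multiplicative fusion are placeholders that show how a global static attention map can gate saliency predicted on stacks of consecutive frames.

# Minimal PyTorch sketch of the two-stream idea described in the abstract.
# Layer sizes, module names, and the multiplicative fusion are assumptions
# for illustration, not the authors' exact architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GlobalAttentionStream(nn.Module):
    """Encodes global static visual attention over the full equirectangular frame."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(64, 1, 1)  # single-channel attention map

    def forward(self, frame):  # frame: (B, 3, H, W)
        att = torch.sigmoid(self.head(self.encoder(frame)))
        # Upsample the coarse attention map back to the input resolution.
        return F.interpolate(att, size=frame.shape[-2:], mode="bilinear", align_corners=False)

class ExpertStream(nn.Module):
    """Learns saliency on local patches throughout consecutive frames."""
    def __init__(self):
        super().__init__()
        self.conv3d = nn.Sequential(
            nn.Conv3d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 1, 3, padding=1),
        )

    def forward(self, clip):  # clip: (B, 3, T, H, W) stack of consecutive frames
        return torch.sigmoid(self.conv3d(clip)).mean(dim=2)  # average over time: (B, 1, H, W)

frame = torch.randn(1, 3, 320, 640)    # one equirectangular frame
clip = torch.randn(1, 3, 8, 320, 640)  # eight consecutive frames
saliency = GlobalAttentionStream()(frame) * ExpertStream()(clip)  # fused (1, 1, 320, 640) map

The key design point this sketch captures is the division of labor: the global stream only has to model where attention concentrates on the sphere as a whole, so the temporal expert can specialize in local motion-driven saliency.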

Wed Jan 22 2020
Computer Vision
A Fixation-based 360° Benchmark Dataset for Salient Object Detection
Mon May 24 2021
Computer Vision
SHD360: A Benchmark Dataset for Salient Human Detection in 360° Videos
Salient human detection (SHD) is of great importance for robotics, inter-human interaction, and human-object interaction in augmented reality, yet there is a lack of large-scale omnidirectional videos with rich annotations. We hope the proposed dataset can serve as a good starting point for advancing salient human detection in 360° videos.
Tue Mar 13 2018
Computer Vision
A Learning-Based Visual Saliency Prediction Model for Stereoscopic 3D Video (LBVS-3D)
Existing monocular saliency models are not able to accurately predict the attentive regions when applied to 3D content. This paper explores stereoscopic video saliency prediction by exploiting both low-level attributes and high-level cues.
Tue Dec 13 2016
Computer Vision
How do people explore virtual environments?
The study of how people explore immersive virtual environments is crucial for many applications. We capture and analyze gaze and head orientation data of 169 users exploring stereoscopic, static omni-directional panoramas. We provide a thorough analysis of our data, which leads to several important insights.
Tue Sep 19 2017
Computer Vision
SalNet360: Saliency Maps for omni-directional images with CNN
The prediction of visual attention data from any kind of media is of valuable use to content creators. With the current trend in the Virtual Reality field, adapting saliency prediction to this new kind of media is starting to gain momentum. We show that each step in the proposed pipeline works toward making the generated saliency map more accurate.
Wed Sep 11 2019
Computer Vision
Distortion-adaptive Salient Object Detection in 360 Omnidirectional Images
Image-based salient object detection (SOD) has been extensively explored in the past decades. SOD on omnidirectional images is less studied owing to the lack of datasets with pixel-level annotations. This paper proposes a 360° image-based SOD dataset that contains 500 high-resolution equirectangular images.