Published on Wed Oct 10 2018

Invariance Analysis of Saliency Models versus Human Gaze During Scene Free Viewing

Zhaohui Che, Ali Borji, Guangtao Zhai, Xiongkuo Min

Abstract

Most current studies on human gaze and saliency modeling have used high-quality stimuli. In the real world, however, captured images undergo various types of distortions along the entire acquisition, transmission, and display chain. Some distortion types include motion blur, lighting variations, and rotation. Despite a few efforts, the influence of ubiquitous distortions on visual attention and saliency models has not been systematically investigated. In this paper, we first create a large-scale database including eye movements of 10 observers over 1900 images degraded by 19 types of distortions. Second, by analyzing eye movements and saliency models, we find that: a) observers look at different locations over distorted versus original images, and b) the performance of saliency models drops drastically over distorted images, with the largest drops occurring for Rotation and Shearing distortions. Finally, we investigate the effectiveness of different distortions when serving as data augmentation transformations. Experimental results verify that useful data augmentation transformations, which preserve the human gaze of reference images, can make deep saliency models more robust against distortions, while invalid transformations, which severely change human gaze, degrade performance.
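To make the augmentation idea concrete, here is a minimal sketch (not the authors' released code) of distortion-based augmentation for saliency training, assuming OpenCV and NumPy. A gaze-preserving geometric transform such as rotation is applied identically to the image and its ground-truth fixation map, while a photometric distortion such as motion blur touches the image only; the function names are illustrative.

```python
# Minimal sketch of distortion-based data augmentation for saliency
# training (illustrative, not the authors' code). Geometric transforms
# are applied to both the image and its fixation map so the training
# pair stays consistent; photometric distortions alter the image only.
import cv2
import numpy as np

def rotate_pair(image, fixation_map, angle_deg):
    """Rotate an image and its fixation map by the same angle."""
    h, w = image.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle_deg, 1.0)
    return (cv2.warpAffine(image, M, (w, h)),
            cv2.warpAffine(fixation_map, M, (w, h)))

def motion_blur(image, kernel_size=9):
    """Apply horizontal motion blur; the fixation map is left unchanged."""
    kernel = np.zeros((kernel_size, kernel_size), dtype=np.float32)
    kernel[kernel_size // 2, :] = 1.0 / kernel_size
    return cv2.filter2D(image, -1, kernel)
```

Whether a given transform belongs in the augmentation set would then be decided empirically, in line with the paper's finding that only gaze-preserving distortions help.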

Thu May 16 2019
Computer Vision
How is Gaze Influenced by Image Transformations? Dataset and Model
Data size is the bottleneck for developing deep saliency models, because collecting eye-movement data is very time-consuming and expensive. Most studies on human attention and saliency modeling have used high-quality stimuli. In the real world, captured images undergo various types of transformations. Can we use these transformations to augment existing datasets?
Wed Oct 23 2019
Computer Vision
SalGaze: Personalizing Gaze Estimation Using Visual Saliency
Traditional gaze estimation methods typically require explicit user calibration to achieve high accuracy. SalGaze is able to greatly augment standard point calibration data with implicit video saliency calibration data. We show accuracy improvements of over 24% using our technique on existing methods.
Thu May 14 2015
Computer Vision
Vanishing Point Attracts Eye Movements in Scene Free-viewing
Eye movements are crucial in understanding complex scenes. By predicting where humans look in natural scenes, we can understand how they perceive scenes.
Sun May 30 2021
Computer Vision
Gaze Estimation using Transformer
The performance of transformers in gaze estimation is still unexplored. Hybrid transformers significantly outperform the pure transformer in all evaluation metrics.
Fri Mar 11 2016
Computer Vision
Learning Gaze Transitions from Depth to Improve Video Saliency Estimation
In the near future, 3D video content will be easily acquired yet remain hard to display. This can be explained, on the one hand, by the dramatic improvement of 3D-capable equipment.
Tue Oct 29 2019
Computer Vision
SID4VAM: A Benchmark Dataset with Synthetic Images for Visual Attention Modeling
A benchmark of saliency model performance on a synthetic image dataset is provided. Model performance is evaluated through saliency metrics as well as the influence of model inspiration and consistency with human psychophysics. Images were generated with 15 distinct types of low-level features.