Published on Wed Aug 28 2019

SPair-71k: A Large-scale Benchmark for Semantic Correspondence

Juhong Min, Jongmin Lee, Jean Ponce, Minsu Cho


Abstract

Establishing visual correspondences under large intra-class variations, often referred to as semantic correspondence or semantic matching, remains a challenging problem in computer vision. Despite its significance, most existing datasets for semantic correspondence are limited to a small number of image pairs with similar viewpoints and scales. In this paper, we present a new large-scale benchmark dataset of semantically paired images, SPair-71k, which contains 70,958 image pairs with diverse variations in viewpoint and scale. Compared to previous datasets, it is significantly larger and provides more accurate and richer annotations. We believe this dataset will serve as a reliable testbed for studying the problem of semantic correspondence and will help advance research in this area. We provide results of recent methods on our new dataset as baselines for further research. Our benchmark is available online at http://cvlab.postech.ac.kr/research/SPair-71k/.