This paper introduces a new video-and-language dataset with human actions for
multimodal logical inference, focusing on intentional and aspectual
expressions that describe dynamic human actions. The dataset consists of 200
videos, 5,554 action labels, and 1,942 action triplets of the form
⟨subject, predicate, object⟩ that can be translated into logical semantic
representations. The dataset is expected to be useful for evaluating systems
that perform multimodal inference between videos and semantically complex
sentences, including sentences involving negation and quantification.
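As an illustration of how such a triplet could be mapped to a logical semantic representation, the following is a minimal sketch assuming a standard Davidsonian event-semantics encoding; the example triplet ⟨person, throw, frisbee⟩ and the role predicates Subj and Obj are hypothetical, not drawn from the dataset itself or from the paper's exact formalism.

```latex
% Sketch of a triplet-to-formula translation (assumed Davidsonian encoding):
% the predicate applies to an existentially quantified event variable, and
% the subject and object are linked to the event by thematic-role predicates.
\[
\langle \mathrm{person},\, \mathrm{throw},\, \mathrm{frisbee} \rangle
  \;\Longrightarrow\;
  \exists e \,\bigl( \mathbf{throw}(e)
    \wedge \mathbf{Subj}(e, \mathbf{person})
    \wedge \mathbf{Obj}(e, \mathbf{frisbee}) \bigr)
\]
```

Under an encoding of this kind, negation and quantification over such event formulas give the semantically complex sentences against which the dataset is intended to evaluate inference systems.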