Fine-granularity semantic video annotation: An approach based on automatic shot level concept detection and object recognition

Authors: El-Khoury, Vanessa; Jergler, Martin; Bayou, Getnet Abebe; Coquil, David; Kosch, Harald

Publisher: Emerald Group Publishing Ltd

ISSN: 1742-7371

Source: International Journal of Pervasive Computing and Communications, Vol. 9, Iss. 3, 2013-08, pp. 243-269



Abstract

Purpose - Fine-grained video content indexing, retrieval, and adaptation require accurate metadata describing the video structure and semantics down to the lowest granularity, i.e. the object level. The authors address these requirements by proposing the semantic video content annotation tool (SVCAT) for structural and high-level semantic video annotation. SVCAT is a semi-automatic, MPEG-7-compliant annotation tool that produces metadata according to a new object-based video content model introduced in this work. Videos are temporally segmented into shots, and shot-level concepts are detected automatically using ImageNet as background knowledge. These concepts guide the annotator in locating and selecting objects of interest, which are then tracked automatically to generate object-level metadata. The integration of shot-based concept detection with object localization and tracking drastically alleviates the annotator's task. The paper aims to discuss these issues.

Design/methodology/approach - Systematic classification of keyframes into ImageNet categories serves as the basis for automatic concept detection in temporal units. This is followed by an object tracking algorithm to obtain exact spatial information about objects.

Findings - Experimental results showed that SVCAT is able to provide accurate object-level video metadata.

Originality/value - This paper introduces an approach that uses ImageNet to obtain shot-level annotations automatically. The approach assists video annotators significantly by minimizing the effort required to locate salient objects in the video.
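The abstract describes aggregating per-keyframe classifications into shot-level concepts. A minimal sketch of one plausible aggregation step is shown below: keyframes of a shot are assumed to have already been classified (e.g. into ImageNet categories), and a concept is attached to the shot only if enough keyframes agree. The function name, the majority-vote scheme, and the support threshold are illustrative assumptions, not the authors' actual implementation.

```python
from collections import Counter

def shot_level_concepts(keyframe_labels, min_support=0.5):
    """Aggregate per-keyframe classifier labels into shot-level concepts.

    keyframe_labels: predicted category names, one per keyframe of a shot
                     (assumed to come from an ImageNet-style classifier).
    min_support:     fraction of keyframes that must agree before a
                     concept is attached to the shot (illustrative choice).
    """
    if not keyframe_labels:
        return []
    counts = Counter(keyframe_labels)
    n = len(keyframe_labels)
    # Keep only concepts that reach the support threshold across keyframes.
    return sorted(label for label, c in counts.items() if c / n >= min_support)

# Example: four keyframes of one shot, classified independently.
labels = ["dog", "dog", "dog", "grass"]
print(shot_level_concepts(labels))        # only "dog" reaches 50% support
print(shot_level_concepts(labels, 0.25))  # lower threshold also keeps "grass"
```

The shot-level concepts produced this way would then serve, as the abstract describes, as a guide for locating objects of interest that are subsequently tracked to yield object-level metadata.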