Visual Tracking Evaluation
Introduction
Visual tracking is one of the most rapidly evolving fields of computer vision. Every year, dozens of new tracking algorithms are presented and evaluated in journals and at conferences. When considering the evaluation of these new trackers and their comparison to the state of the art, several questions arise. Is there a standard set of sequences that we can use for evaluation? Is there a standardized evaluation protocol? What kind of performance measures should we use? Unfortunately, there are currently no definite answers to these questions. Unlike some other fields of computer vision, such as object detection and classification, optical-flow computation, and automatic segmentation, where widely adopted evaluation protocols exist, visual tracking still largely lacks such standards.
Methodology
Visual tracking evaluation suffers from an abundance of performance measures used by various authors and from a lack of consensus about which measures should be preferred. This hampers cross-paper tracker comparison and slows the advancement of the field. In our research we show that several measures are equivalent in the information they provide for tracker comparison and, crucially, that some are more brittle than others. Based on this analysis we narrow the set of potential measures down to two complementary ones that can be intuitively interpreted and visualized, pushing towards a homogenization of tracker evaluation methodology.
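For illustration, here is a minimal sketch of two such complementary measures, assuming, as in the VOT methodology, that accuracy is the average region overlap on frames where the tracker succeeded and robustness is the number of tracking failures. The function names and the failure criterion are ours, and the sketch omits details such as the burn-in period after re-initialization.

```python
def overlap(a, b):
    """Intersection-over-union of two axis-aligned (x, y, w, h) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2 = min(a[0] + a[2], b[0] + b[2])
    y2 = min(a[1] + a[3], b[1] + b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def accuracy_robustness(predictions, ground_truth):
    """Accuracy: mean overlap over non-failed frames; robustness: failure count.
    A frame with zero overlap counts as a failure in this simplified sketch."""
    overlaps = [overlap(p, g) for p, g in zip(predictions, ground_truth)]
    failures = sum(1 for o in overlaps if o == 0.0)
    valid = [o for o in overlaps if o > 0.0]
    accuracy = sum(valid) / len(valid) if valid else 0.0
    return accuracy, failures
```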
The raw results of the experiments on which we base the analysis, presented in the paper Visual object tracking performance measures revisited (IEEE Transactions on Image Processing, 2016), will be available soon.
We have also developed a ranking methodology for large-scale visual tracker comparison that takes into account different aspects of tracking as well as the statistical significance of performance differences.
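The core idea can be sketched as follows: trackers are first ranked by average performance, and each tracker's rank is then corrected to the average rank of the group of trackers whose performance is not statistically distinguishable from its own. This is an illustrative simplification, assuming per-sequence scores where higher is better and using a Wilcoxon signed-rank test; the full methodology also considers practical differences and multiple performance criteria.

```python
import numpy as np
from scipy.stats import rankdata, wilcoxon

def corrected_ranks(scores, alpha=0.05):
    """scores: (num_trackers, num_sequences) array, higher is better.
    Returns ranks (1 = best) corrected for statistical significance."""
    initial = rankdata(-scores.mean(axis=1))
    corrected = initial.copy()
    for i in range(scores.shape[0]):
        # Group tracker i with every tracker whose per-sequence scores are
        # not significantly different from its own (assumes non-identical rows).
        group = [initial[j] for j in range(scores.shape[0])
                 if j == i or wilcoxon(scores[i], scores[j]).pvalue >= alpha]
        corrected[i] = np.mean(group)
    return corrected
```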
Visual Object Tracking Challenge (VOT)
The advances in evaluation methodology are promoted by the Visual Object Tracking (VOT) challenge that we organize. The results of the VOT2013 challenge were presented at a workshop at ICCV2013 in Sydney, Australia. The results of the VOT2014 challenge were presented at ECCV2014 in Zürich, Switzerland. The results of the VOT2015 challenge were presented at ICCV2015 in Santiago, Chile.
How to compare your tracker on VOT?
To compare your tracker on the VOT benchmarks, follow the tutorials on the VOT homepage; a sketch of the tracker-side integration loop is shown below.
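To give an idea of what the integration involves, here is a sketch of the tracker-side loop using the Python vot module shipped with the VOT toolkit; consult the tutorials for the authoritative, up-to-date version. The TRACKER placeholder stands for your own tracker code and is hypothetical.

```python
import vot

handle = vot.VOT("rectangle")      # communicate via axis-aligned rectangles
selection = handle.region()        # ground-truth region in the first frame
imagefile = handle.frame()         # path to the first image

# tracker = TRACKER(imagefile, selection)   # initialize your tracker (placeholder)

while True:
    imagefile = handle.frame()     # next frame, or None when the sequence ends
    if imagefile is None:
        break
    # selection = tracker.track(imagefile)  # update your tracker (placeholder)
    handle.report(selection)       # report the predicted region to the toolkit
```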
Apparent Motion Patterns (AMP)
This approach uses omnidirectional videos to generate various motion patterns in a controlled manner. More information available here.
CDTB: A Color and Depth Visual Object Tracking Dataset and Benchmark
A fully annotated RGB-D dataset for visual object tracking. More information available on the project page.
Long-term Visual Object Tracking Performance Evaluation
A new methodology, performance measures and a dataset for long-term visual object tracking performance evaluation. More information available here.
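To give the flavour of the long-term measures, here is a minimal sketch assuming a tracking precision/recall formulation: precision averages the overlap over frames where the tracker reports the target, recall over frames where the target is actually visible, and the two are combined into an F-score. The function name and input encoding are illustrative, and the sketch omits the sweep over prediction-confidence thresholds used in the full methodology.

```python
def long_term_measures(overlaps, reported, visible):
    """overlaps: per-frame overlap with ground truth (0 where the target is absent);
    reported: True where the tracker reports the target as present;
    visible: True where the target is actually visible in the frame."""
    pr = [o for o, r in zip(overlaps, reported) if r]
    re = [o for o, v in zip(overlaps, visible) if v]
    precision = sum(pr) / len(pr) if pr else 0.0
    recall = sum(re) / len(re) if re else 0.0
    f_score = (2 * precision * recall / (precision + recall)
               if precision + recall > 0 else 0.0)
    return precision, recall, f_score
```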