Research
ViCoS Lab is involved in the following research topics:
We explore deep learning models for industrial quality control. Our focus is on surface-defect detection, for which we have developed a novel two-stage architecture that uses a segmentation network in the first stage and a decision network in the second stage. We also present a novel dataset drawn from a real-world industrial case for surface-defect detection. This research is done in collaboration with the industrial partner Kolektor Group d.o.o.
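As an illustration of the two-stage idea, the sketch below shows a segmentation network feeding a decision network; it is a minimal PyTorch example with illustrative layer sizes, not the exact published architecture.

import torch
import torch.nn as nn

class SegmentationNet(nn.Module):
    # Stage 1: predicts a per-pixel defect mask from a grayscale surface image.
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU())
        self.mask_head = nn.Conv2d(64, 1, 1)

    def forward(self, x):
        f = self.features(x)
        return f, self.mask_head(f)

class DecisionNet(nn.Module):
    # Stage 2: classifies the whole image as defective or defect-free
    # from the stage-1 features and the predicted mask.
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv2d(64 + 1, 32, 3, padding=1), nn.ReLU())
        self.fc = nn.Linear(64, 1)  # global max- and average-pooled features

    def forward(self, features, mask):
        x = self.conv(torch.cat([features, mask], dim=1))
        pooled = torch.cat([x.amax(dim=(2, 3)), x.mean(dim=(2, 3))], dim=1)
        return self.fc(pooled)  # per-image defect logit

seg, dec = SegmentationNet(), DecisionNet()
img = torch.randn(4, 1, 512, 512)        # batch of grayscale surface images
features, mask = seg(img)                # stage 1: pixel-level defect mask
score = dec(features, mask.sigmoid())    # stage 2: image-level defect score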
We explore the automation of traffic-sign inventory management using deep-learning models. Models such as Faster R-CNN and Mask R-CNN are improved and applied to traffic-sign detection. Instead of specializing in the automated detection of only a few traffic-sign categories, we explore the possibility of detecting over 200 different traffic signs, which is required to automate traffic-sign inventory management.
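The sketch below shows how an off-the-shelf detector can be re-headed for roughly 200 traffic-sign classes; it uses the stock torchvision Faster R-CNN as an assumed baseline and does not include our improvements.

import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 201  # 200 traffic-sign categories + background

# Start from a pretrained detector and replace its classification head.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

# The model can now be fine-tuned on annotated traffic-sign images:
# images are CxHxW tensors, targets are dicts with "boxes" (Nx4) and "labels" (N).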
We propose a novel deep network architecture that combines the benefits of discriminative deep learning with the benefits of compositional hierarchies. Among these benefits we emphasize the ability to automatically adjust receptive fields, making them smaller or larger depending on the problem at hand, and the ability to visualize deep features through an explicit compositional structure.
One of the problems of visual-tracking evaluation is the lack of a consistent evaluation methodology, which hampers cross-paper tracker comparison and slows the advancement of the field. In our research we investigate different aspects of tracking evaluation. A continuous effort that is part of our work is also the Visual Object Tracking Challenge (VOT).
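For illustration, a common way to summarize a tracker is by its per-frame overlap on successfully tracked frames (accuracy) and its number of tracking failures (robustness); the toy sketch below follows that idea and is not the official VOT toolkit.

def iou(a, b):
    # Intersection-over-union of two axis-aligned boxes given as (x, y, w, h).
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    iw = max(0.0, min(ax2, bx2) - max(a[0], b[0]))
    ih = max(0.0, min(ay2, by2) - max(a[1], b[1]))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def evaluate(predictions, ground_truth, failure_threshold=0.0):
    # Accuracy: mean overlap on frames where the target was not lost.
    # Robustness: number of frames where the overlap dropped to the threshold.
    overlaps, failures = [], 0
    for pred, gt in zip(predictions, ground_truth):
        o = iou(pred, gt)
        if o <= failure_threshold:
            failures += 1
        else:
            overlaps.append(o)
    accuracy = sum(overlaps) / len(overlaps) if overlaps else 0.0
    return accuracy, failures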
Discriminative Correlation Filters are very popular in visual object tracking due to their efficient implementation and excellent tracking performance. Here we present several improvements to discriminative correlation filters.
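As background, the sketch below implements the basic correlation-filter idea in its simplest (MOSSE-style) form, learning the filter element-wise in the Fourier domain; it does not include the improvements developed in our work.

import numpy as np

def train_filter(patches, desired_response, lam=1e-2):
    # Learn a correlation filter from training patches and a desired
    # (typically Gaussian-shaped) response, solved in the Fourier domain.
    G = np.fft.fft2(desired_response)
    A = np.zeros_like(G)
    B = np.zeros_like(G)
    for p in patches:
        F = np.fft.fft2(p)
        A += G * np.conj(F)   # numerator: correlation with the desired output
        B += F * np.conj(F)   # denominator: patch energy spectrum
    return A / (B + lam)      # lam regularizes against division by zero

def localize(H, patch):
    # Correlate the filter with a new patch and return the response peak.
    response = np.real(np.fft.ifft2(H * np.fft.fft2(patch)))
    return np.unravel_index(np.argmax(response), response.shape)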
This page deals with the problem of tracking objects as they undergo non-rigid deformations and as parts of them become invisible. It contains some preliminary work on a combined local and global visual model that simultaneously changes its structure, adapting to the potentially non-rigid target while localizing it in the image.
ViCoS Eye is an experimental online service that demonstrates a state-of-the-art computer-vision object detection and categorization algorithm developed in our laboratory. The service is available as a web page and as an Android application.
Unmanned surface vehicles (USVs) are robotic boats that can be used for coastal patrolling in numerous applications, ranging from surveillance to monitoring water cleanliness. We are developing computer-vision algorithms that enable autonomous operation in the highly dynamic environments in which USVs are deployed.
In the EU FP7 project CogX we developed the curious robot George, a complex heterogeneous distributed system for interactive learning of visual concepts in a dialogue with a tutor. Our objective was to demonstrate that a cognitive system can efficiently acquire conceptual models in an interactive learning process that is not overly taxing with respect to tutor supervision and is performed in an intuitive, user-friendly way.
In our research we address the problem of interactive learning of categorical knowledge. We describe, implement, and analyse several teacher- and learner-driven approaches that require different levels of teacher competence and consider different types of knowledge for the selection of training samples. We also introduce a formal model for describing the learning strategies and evaluate them using the proposed evaluation measures.
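As a concrete example of a learner-driven strategy, the sketch below implements simple uncertainty sampling with a placeholder classifier and synthetic data; the actual strategies and the formal model are described in our publications.

import numpy as np
from sklearn.linear_model import LogisticRegression

def query_most_uncertain(model, unlabelled_X):
    # The learner asks the teacher to label the sample it is least certain about.
    confidence = model.predict_proba(unlabelled_X).max(axis=1)
    return int(np.argmin(confidence))

# Placeholder data: a small labelled seed set and a pool of unlabelled samples.
rng = np.random.default_rng(0)
X_seed = rng.normal(size=(10, 5))
y_seed = np.array([0, 1] * 5)
X_pool = rng.normal(size=(100, 5))

model = LogisticRegression().fit(X_seed, y_seed)
query_index = query_most_uncertain(model, X_pool)  # sample the learner asks about next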
We address the problem of multi-class object representation and present a framework for learning a hierarchical shape vocabulary capable of representing objects in a hierarchical manner using statistically significant compositional shapes. The approach takes simple oriented contour fragments and learns their frequent spatial configurations. These are recursively combined into increasingly more complex and class-specific shape compositions, each capturing a high degree of shape variability.
We have designed approaches for 2D-laser-range-data-based room categorization that are grounded in a compositional hierarchical representation of space. We have also developed a part-based image representation suitable for robust vision-based room categorization.
We have been addressing the problem of incrementally building generative as well as discriminative models from data, using positive as well as negative examples and making as few assumptions as possible about the input data. The developed algorithms are based on kernel density estimation and use different measures of discrimination loss to determine how much a classifier can be compressed without compromising its performance.
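The toy sketch below illustrates the incremental idea in one dimension: each sample adds a kernel component, and nearby components are merged to keep the model compact; the actual method uses a principled, discrimination-aware compression criterion rather than this naive distance rule.

import numpy as np

class IncrementalKDE:
    # Toy 1-D incremental kernel density estimate with naive compression.
    def __init__(self, bandwidth=0.5, merge_dist=0.25):
        self.means, self.weights = [], []
        self.bandwidth, self.merge_dist = bandwidth, merge_dist

    def update(self, x):
        # Merge the sample into the nearest component if it is close enough,
        # otherwise add a new component.
        if self.means:
            dists = [abs(x - m) for m in self.means]
            i = int(np.argmin(dists))
            if dists[i] < self.merge_dist:
                w = self.weights[i]
                self.means[i] = (w * self.means[i] + x) / (w + 1)
                self.weights[i] = w + 1
                return
        self.means.append(float(x))
        self.weights.append(1.0)

    def pdf(self, x):
        total = sum(self.weights)
        norm = total * self.bandwidth * np.sqrt(2 * np.pi)
        return sum(w * np.exp(-0.5 * ((x - m) / self.bandwidth) ** 2)
                   for w, m in zip(self.weights, self.means)) / norm

kde = IncrementalKDE()
for sample in np.random.default_rng(1).normal(size=200):
    kde.update(sample)
print(len(kde.means), kde.pdf(0.0))  # number of components, density estimate at 0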
A wide range of algorithms has been proposed to detect objects in still images. However, most current approaches are based purely on local appearance and ignore the context in which these objects are embedded. Within our research on visual context, we propose a general approach to extract, learn, and use contextual information from images to improve the performance of classical object-detection methods.
We developed algorithms and representations for autonomous mapping and visual localization of mobile robots from omnidirectional images. We also developed a purely emergent hierarchical mapping algorithm based on “recover and select” that creates maps of large and unstructured environments from local appearance and odometry.
The main task of the affordance-learning algorithm, as defined in our framework, is to identify significant clusters in the result space and associate these clusters with data in the object-property space. This allows the affordances of novel objects to be broadly classified in terms of result-space clusters by observing their respective object-property features and using them as input to a classifier trained by the affordance-learning algorithm.
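A minimal sketch of that pipeline, assuming scikit-learn and synthetic placeholder data: cluster the result (effect) space, then train a classifier that maps object-property features to the discovered clusters.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
object_props = rng.normal(size=(300, 4))    # e.g. shape/size descriptors (placeholder)
action_results = rng.normal(size=(300, 2))  # e.g. object displacement after a push (placeholder)

# 1) Identify significant clusters in the result space.
result_clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(action_results)

# 2) Associate the clusters with the object-property space via a classifier.
affordance_clf = KNeighborsClassifier(n_neighbors=5).fit(object_props, result_clusters)

# The affordance of a novel object is predicted from its properties alone.
novel_object = rng.normal(size=(1, 4))
predicted_cluster = affordance_clf.predict(novel_object)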
One of the main research topics of our group in the past was subspace methods. We have proposed several methods for robust estimation of subspace coefficients and for learning subspace representations. By combining the properties of reconstructive and discriminative subspace methods, we were able to extend the standard approaches into incremental and robust ones and to show the efficiency of the proposed methods in many computer-vision tasks.
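A minimal sketch of the incremental aspect, using a stock IncrementalPCA from scikit-learn (a purely reconstructive subspace, without our robust or discriminative extensions):

import numpy as np
from sklearn.decomposition import IncrementalPCA

ipca = IncrementalPCA(n_components=10)
rng = np.random.default_rng(0)

# Images arrive in batches; the subspace is updated without revisiting old data.
for _ in range(5):
    batch = rng.normal(size=(50, 32 * 32))  # 50 vectorized 32x32 image patches
    ipca.partial_fit(batch)

new_image = rng.normal(size=(1, 32 * 32))
coefficients = ipca.transform(new_image)               # subspace coefficients
reconstruction = ipca.inverse_transform(coefficients)  # reconstructive use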