Aleš Leonardis, PhD
Professor
Full professor at University of Birmingham
ales.leonardis@fri.uni-lj.si
From July 2012 with
University of Birmingham
School of Computer Science
Birmingham, UK
Adjunct Professor at the Faculty of Computer Science, Graz University of Technology
Research
This page deals with the problem of tracking objects as they undergo non-rigid deformations and as parts become occluded. It presents preliminary work on a combined local and global visual model that simultaneously adapts its structure to a potentially non-rigid target and localises the target in the image.
We deal with the problem of multi-class object representation and present a framework for learning a hierarchical shape vocabulary that represents objects through statistically significant compositional shapes. The approach takes simple oriented contour fragments and learns their frequent spatial configurations. These are recursively combined into increasingly complex and class-specific shape compositions, each accounting for a high degree of shape variability.
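The core of one learning layer is counting which spatial configurations of parts recur often enough to become compositions at the next layer. The following toy sketch (not the published algorithm; the part encoding and quantisation scheme are illustrative assumptions) counts pairs of oriented contour fragments by part label and quantised relative displacement:

```python
from collections import Counter
import numpy as np

def frequent_pairs(parts, radius=5.0, n_angle_bins=8, min_count=2):
    """Toy sketch of one layer of compositional learning.

    parts: list of (x, y, label) part detections, e.g. oriented
    contour fragments. A candidate 'composition' is a pair of part
    labels together with the quantised direction of the displacement
    between them; configurations occurring at least min_count times
    are kept as candidate parts for the next, more complex layer.
    """
    counts = Counter()
    for i, (xi, yi, li) in enumerate(parts):
        for j, (xj, yj, lj) in enumerate(parts):
            if i == j:
                continue
            dx, dy = xj - xi, yj - yi
            if np.hypot(dx, dy) > radius:   # only nearby parts compose
                continue
            # quantise the relative direction into n_angle_bins sectors
            angle_bin = int((np.arctan2(dy, dx) % (2 * np.pi))
                            / (2 * np.pi) * n_angle_bins)
            counts[(li, lj, angle_bin)] += 1
    return {c: n for c, n in counts.items() if n >= min_count}
```

Recursing this step, with the frequent pairs of one layer serving as the parts of the next, yields the increasingly complex and class-specific compositions described above.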
One of the main research topics of our group has been subspace methods. We have proposed several methods for the robust estimation of subspace coefficients and for learning subspace representations. By combining the properties of reconstructive and discriminative subspace methods, we extended the standard approaches into incremental and robust ones, and demonstrated the efficiency of the proposed methods in many computer vision tasks.
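To illustrate what "robust estimation of subspace coefficients" means in practice, here is a minimal hypothesize-and-select sketch (illustrative, with assumed parameters; not our exact published procedure): coefficients are solved on random pixel subsets, so that occluded or corrupted pixels can be outvoted instead of biasing a single global least-squares fit.

```python
import numpy as np

def robust_coefficients(U, x, n_hypotheses=100, subset_size=None, seed=0):
    """Robustly estimate coefficients a such that U @ a approximates x.

    U: (d, k) orthonormal subspace basis (e.g. eigenimages).
    x: (d,) image vector, possibly corrupted by occlusion/outliers.
    Each hypothesis solves least squares on a small random pixel
    subset; the hypothesis with the smallest median absolute residual
    over all pixels wins, so gross outliers are ignored.
    """
    rng = np.random.default_rng(seed)
    d, k = U.shape
    if subset_size is None:
        subset_size = 3 * k   # a few times the subspace dimension
    best_a, best_err = None, np.inf
    for _ in range(n_hypotheses):
        idx = rng.choice(d, size=subset_size, replace=False)
        a, *_ = np.linalg.lstsq(U[idx], x[idx], rcond=None)
        err = np.median(np.abs(U @ a - x))   # robust residual score
        if err < best_err:
            best_a, best_err = a, err
    return best_a
```

With, say, 20% of the pixels grossly corrupted, a plain least-squares fit over all pixels is pulled towards the outliers, while the subset-based estimate recovers coefficients close to those of the uncorrupted image.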
Main Projects
Current projects
Our challenge is to develop a methodology that bridges the gap between low-level, computer-centred image features and high-level, human-centred semantic meanings.
ARRS project (J2-3607)
Project duration: 2010 - 2013.
The project aims at a holistic approach towards learning, detection and recognition / categorisation of visual motion and the phenomena derived from it. The approach is based on a novel and powerful paradigm of learning multilayer compositional hierarchies. While the individual ingredients, such as hierarchical processing, compositionality and incremental learning, have already been subjects of research, they have, to the best of our knowledge, never been treated in a unified motion-related framework. Such a framework is crucial for robustness, versatility, ease of learning and inference, generalisation, real-time performance, knowledge transfer, and scalability across a variety of cognitive vision tasks. ARRS Project. Project duration: 2011 - 2014.
The research group is involved in basic research in computer vision, with emphasis on visually enabled cognitive systems involving visual learning and recognition. Topics include recognition and tracking of objects, scenes, and activities in visual cognitive tasks such as smart vision-based detection and positioning using wearable computing as well as for mobile robots and cognitive assistants.
Project duration: 2009-2014.
Past projects
The high level aim of this project was to develop a unified theory of self-understanding and self-extension with a convincing instantiation and implementation of this theory in a robot. By self-understanding we mean that the robot has representations of gaps in its knowledge or uncertainty in its beliefs. By self-extension we mean the ability of the robot to extend its own abilities or knowledge by planning learning activities and carrying them out. The project involved six universities and about 30 researchers.
POETICON is a research project in the Seventh Framework Programme that explores the "poetics of everyday life", i.e. the synthesis of sensorimotor representations and natural language in everyday human interaction.
The MOBVIS project identified the application of context to otherwise intractable vision tasks as the key issue in realising smart mobile vision services. To achieve this challenging goal, MOBVIS proposed combining three components, (1) multi-modal context awareness, (2) vision-based object recognition, and (3) intelligent map technology, for the first time into a completely innovative system: the attentive interface.
The Visiontrain project addressed the problem of understanding vision from both computational and cognitive points of view. The research approach was based on formal mathematical models and on the thorough experimental validation of these models. Eleven academic partners worked cooperatively on a number of targeted research objectives: (i) computational theories and methods for low-level vision, (ii) motion understanding from image sequences, (iii) learning and recognition of shapes, objects, and categories, (iv) cognitive modelling of the action of seeing, and (v) functional imaging for observing and modelling brain activity.
The main goal of the project was to advance the science of cognitive systems through a multi-disciplinary investigation of requirements, design options and trade-offs for human-like, autonomous, integrated, physical (e.g., robot) systems, including requirements for architectures, forms of representation, perceptual mechanisms, learning, planning, reasoning and motivation, and action and communication.
The main objective of the EU FP5 project CogVis was to provide the methods and techniques that enable the construction of vision systems capable of task-oriented categorisation and recognition of objects and events in the context of an embodied agent.