Adaptive deep perception methods for autonomous surface vehicles
Collaborating partners: University of Ljubljana, Faculty of Computer and Information Science, Faculty of Electrical Engineering; Sirio d.o.o.
Type of the research project: basic research project, financed by the Slovenian Research Agency (project code: J2-2506)
Project duration: 1st September 2020 - 31st August 2023
Acronym: DAViMaR (Deep adaptive vision for marine robotics)
Project team:
- izr. prof. dr. Matej Kristan (PI)
- doc. dr. Janez Perš (PI on FE side)
- izr. prof. dr. Danijel Skočaj
- mag. Borja Bovcon
- mag. Alan Lukežič
- mag. Jon Natanael Muhovič
- mag. Lojze Žušt
- mag. Dean Mozetič
- Aljoša Žerjal
A crucial element of autonomous operation is environment perception, which still lags far behind control and hardware research. Perception capability is further limited by the physical constraints of small USVs, which prohibit the use of heavy, power-consuming sensors. Cameras, as lightweight, low-power and information-rich sensors, have attracted considerable attention both on their own and in combination with other modalities such as LIDAR and RADAR.
In the closely related field of autonomous vehicles (AV), recent perception advances have been driven primarily by the deep learning paradigm. The paradigm allows unification of individual perception tasks, leading to substantial improvements in each of them. However, state-of-the-art deep models developed for AVs underperform in the maritime environment even when re-trained on a large maritime dataset. New maritime-specific deep architectures are thus required that allow adaptation to the highly dynamic maritime environment and low-effort transfer of USVs trained on one maritime scene to another.
The project's overarching goal is to develop next-generation maritime environment perception methods that harness the power of end-to-end trainable deep models. The models will address challenges essential for safe USV operation, such as general obstacle detection, long-term tracking with re-identification, implicit detection of hazardous areas, and sensor fusion for improved detection. Particular focus will be placed on the adaptivity of the models and on self-supervised tuning to new environments. New multisensor datasets will be recorded to facilitate this research.
Work packages: The work is divided into six work packages. The first four address the project's scientific goals:
- Deep models for robust obstacle detection with scene adaptation capabilities (WP1).
- Segmentation-based tracking algorithms compatible with the deep obstacle detection architectures (WP2).
- Deep trainable multimodal methods for environment perception (WP3).
- Annotated multimodal USV datasets for training and objective evaluation of deep networks in realistic scenarios (WP4).
Work packages WP5 and WP6 cover support activities such as results dissemination and project management. The activities are scheduled across the project years as follows:
- Year 1: Activities on work packages WP1, WP2, WP4, WP5, WP6
- Year 2: Activities on work packages WP1, WP2, WP3, WP4, WP5, WP6
- Year 3: Activities on work packages WP1, WP3, WP4, WP5, WP6
The scientific output of the project is described in the following publications: