multimodal-panoptic-segmentation-of-3d-point-clouds.pdf

The understanding and interpretation of complex 3D environments is a key challenge of autonomous driving. Lidar sensors and their recorded point clouds are particularly interesting for this challenge since they provide accurate 3D information about the environment. This work presents a multimodal approach based on deep learning for panoptic segmentation of 3D point clouds. It builds upon and combines three key aspects: a multi-view architecture, temporal feature fusion, and deep sensor fusion.

Full description

Bibliographic record details
Language: English
Published: KIT Scientific Publishing, 2023
Available online: https://doi.org/10.5445/KSP/1000161158
Record ID: oapen-20.500.12657-76838
Record format: DSpace
Title: Multimodal Panoptic Segmentation of 3D Point Clouds
Author: Dürr, Fabian
Keywords: Temporal Fusion; Sensor Fusion; Semantic Segmentation; Panoptic Segmentation; Deep Learning
BIC classification: Computing & information technology → Computer science → Mathematical theory of computation → Maths for computer scientists
Series: Karlsruher Schriften zur Anthropomatik
Publisher: KIT Scientific Publishing
Published: 2023
Deposited: 2023-10-16
Handle: https://library.oapen.org/handle/20.500.12657/76838
DOI: 10.5445/KSP/1000161158
License: Attribution-ShareAlike 4.0 International
Format: application/pdf
Access: open access
Institution: OAPEN
Collection: DSpace
Language: English