Semantics for 3D – 3D for Semantics
27 February - 1 March, 2017 - Porto, Portugal
In conjunction with the 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - VISIGRAPP 2017
Technical University of Berlin
Ronny Hänsch received the Diploma degree in computer science and the Ph.D. degree from the Technische Universität Berlin, Berlin, Germany, in 2007 and 2014, respectively. His research interests include computer vision, machine learning, object detection, neural networks, and random forests. He has worked in the field of object detection and classification from remote sensing images, with a focus on polarimetric synthetic aperture radar images. His recent research focuses on the development of probabilistic methods for 3D reconstruction by structure from motion as well as ensemble methods for image analysis.
Trinity College Dublin
Konstantinos Amplianitis is currently a Postdoctoral Research Fellow at the Graphics Vision and Visualization Group at Trinity College Dublin. Prior to this appointment, he was a Research Associate at the Humboldt University of Berlin, where he obtained his PhD in Computer Science in the area of 3D object recognition. He also holds an M.Sc. degree in Geodesy and Geoinformation Science from the Berlin Institute of Technology and a B.Sc. in Geomatics Engineering from the Technological Educational Institute of Athens. He is a reviewer for the IEEE Image Understanding Journal, the ISPRS Journal, and ISPRS Commission I, and has served as a session chair for the International Conference on Computer Vision Theory and Applications (VISAPP). His current research interests lie in deep learning and its application to creative visual technologies.
Images have been used in two major, mostly independent ways: either to provide a semantic interpretation or to estimate the 3D structure of the projected scene. Recent approaches attempt to join both directions by using either semantic scene knowledge to support the 3D reconstruction ("Semantics for 3D") or 3D information for the semantic analysis ("3D for Semantics"). S3-3S has been an ongoing research topic in academia as well as in industry. Recent research on deep learning and machine learning is contributing methods for automatic semantic analysis and object representation, while companies working on 3D applications collect images and 3D data that are transformed into semantic and structural scene knowledge. Applications range from creative technologies to mobile robotics.
This workshop is dedicated to methods that make joint use of semantic and structural information in order to improve the estimation of either the structural or the semantic content of a scene. Submissions must address relevant topics in either 3D reconstruction based on semantic processing of images and/or 3D data, or the usage of 3D information for image understanding. Technical topics of interest include (but are not limited to):

Prior knowledge:
- Semantic or structural prior knowledge for 3D reconstruction
- Usage of object knowledge to reconstruct surfaces with non-Lambertian reflectance
- Detection of geometric primitives in point clouds
- Local shape priors

Object detection:
- Semantic 3D reconstruction
- Semantic SLAM
- Object detection in 3D or RGB-D data
- Person detection, tracking, and behavioral understanding
- Detection, classification, and segmentation of dynamic or static obstacles

Representation:
- Data structures and mathematical models to represent, access, manipulate, or visualize structural information, i.e. prior object knowledge, point clouds, surfaces, environment maps
- Semantic segmentation of point clouds
- Visualization of semantic information in point clouds
- CAD models

Special processing:
- Real-time 3D modeling
- Sparsity-inducing optimization for 3D reconstruction
- High-accuracy 3D reconstruction
- Large-scale analytics

Sensor analytics:
- Single-image reconstruction / depth constraints in single images
- Stereo camera systems
- Time-of-flight (ToF) sensors
- Laser scanning
- Sensor fusion (e.g. camera images and laser scanner data)

Biology inspired:
- Human perception of shape and its potential implications for 3D reconstruction

Applications:
- Industrial applications including service & maintenance, driver assistance, video surveillance & monitoring, inspection
- Datasets
- Robot control based on real-time 3D perception
- 3D reconstruction from UAVs
- Visual odometry
WORKSHOP PROGRAM COMMITTEE
Prospective authors are invited to submit papers in any of the topics listed above.
Instructions for preparing the manuscript (in Word and LaTeX formats) are available at: Paper Templates
Please also check the Guidelines.
Papers must be submitted electronically via the web-based submission system using the appropriate button on this page.
After thorough reviewing by the workshop program committee, complemented by members of the main conference program committee, all accepted papers will be published in a special section of the conference proceedings book, with an ISBN reference and on CD-ROM.
All papers presented at the conference venue will be available at the SCITEPRESS Digital Library (http://www.scitepress.org/DigitalLibrary/).
SCITEPRESS is a member of CrossRef (http://www.crossref.org/) and every paper is given a DOI (Digital Object Identifier).