GRAPP 2020 Abstracts


Area 1 - Geometry and Modeling

Full Papers
Paper Nr: 6
Title:

Superpoints in RANSAC Planes: A New Approach for Ground Surface Extraction Exemplified on Point Classification and Context-aware Reconstruction

Authors:

Dimitri Bulatov, Dominik Stütz, Lukas Lucks and Martin Weinmann

Abstract: In point clouds obtained from airborne data, ground points have traditionally been identified as local minima of the altitude. Subsequently, 2.5D digital terrain models have been computed by approximating a smooth surface from the ground points. But how can we handle purely 3D surfaces of cultural heritage monuments covered by vegetation, or Alpine overhangs, where trees do not necessarily grow in a bottom-to-top direction? We suggest a new approach based on a combination of superpoints and RANSAC, implemented as a filtering procedure, which allows efficient handling of large, challenging point clouds without the necessity of training data. If training data is available, covariance-based features, point histogram features, and dataset-dependent features, as well as combinations thereof, are applied to classify points. Results achieved with a Random Forest classifier and non-local optimization using Markov Random Fields are analyzed for two challenging datasets: an airborne laser scan and a photogrammetrically reconstructed point cloud. As an application, surface reconstruction from the thus cleaned point sets is demonstrated.

Paper Nr: 7
Title:

A Hybrid Approach for Segmenting and Fitting Solid Primitives to 3D Point Clouds

Authors:

Markus Friedrich, Steffen Illium, Pierre-Alain Fayolle and Claudia Linnhoff-Popien

Abstract: The segmentation and fitting of solid primitives to 3D point clouds is a complex task. Existing systems are restricted either in the number of input points or the supported primitive types. This paper proposes a hybrid pipeline that is able to reconstruct spheres, bounded cylinders and rectangular cuboids on large point sets. It uses a combination of deep learning and classical RANSAC for primitive fitting, a DBSCAN-based clustering scheme for increased stability and a specialized Genetic Algorithm for robust cuboid extraction. In a detailed evaluation, its performance metrics are discussed and resulting solid primitive sets are visualized. The paper concludes with a discussion of the approach’s limitations.

Paper Nr: 10
Title:

MeshPipe: A Python-based Tool for Easy Automation and Demonstration of Geometry Processing Pipelines

Authors:

Joan Fons, Antoni Chica and Carlos Andujar

Abstract: The popularization of inexpensive 3D scanning, 3D printing, 3D publishing and AR/VR display technologies has renewed the interest in open-source tools providing the geometry processing algorithms required to clean, repair, enrich, optimize and modify point-based and polygon-based models. Nowadays, there is a large variety of such open-source tools, whose user community includes 3D experts but also 3D enthusiasts and professionals from other disciplines. In this paper we present a Python-based tool that addresses two major shortcomings of current solutions: the lack of easy-to-use methods for the creation of custom geometry processing pipelines (automation), and the lack of a suitable visual interface for quickly testing, comparing and sharing different pipelines, supporting rapid iterations and providing dynamic feedback to the user (demonstration). From the user's point of view, the tool is a 3D viewer with an integrated Python console from which internal or external Python code can be executed. We provide an easy-to-use but powerful API for element selection and geometry processing. Key algorithms are provided by a high-level C++ library exposed to the viewer via Python-C++ bindings. Unlike competing open-source alternatives, our tool has a minimal learning curve and typical pipelines can be written in a few lines of Python code.

Paper Nr: 11
Title:

Generation of Tree Surface Mesh Models from Point Clouds using Skin Surfaces

Authors:

Chi W. Lim, Like Gobeawan, Sum T. Wong, Daniel J. Wise, Peng Cheng, Hee J. Poh and Yi Su

Abstract: This work focuses on the extraction and reconstruction of tree branching models from large-scale 3D LiDAR point clouds, utilizing the concept of skin surfaces for modelling tree joints. Tree joints are among the most challenging components to model due to their potential for highly intricate morphology and complex branching topology. During the reconstruction process, point clouds first undergo a classification step to remove leaves and then skeletonization to derive the branching morphology and estimate individual branch thicknesses. The tree branching model can then be modelled as a collection of cylindrical volumes connected by fused tree joints. The novelty of this work lies in the use of skin surfaces as a proxy for modelling the tree joints. The generated triangular tree surface mesh is smooth and continuous, and we further propose a method to convert it into quadrilateral patches. The benefit of having piecewise components of branches and joints is that it facilitates the subsequent generation of 3D finite elements, as they can be handled and meshed independently.

Paper Nr: 13
Title:

Virtual Planning and Testing of AUV Paths for Underwater Photogrammetry

Authors:

Amy Lewis, Kolton Yager, Mitchell Keller, Bonita Galvan, Russell C. Bingham, Samantha Ting, Jane Wu, Timmy Gambin, Christopher Clark and Zoë J. Wood

Abstract: We introduce a system for automatically generating paths for autonomous underwater vehicles which optimize views of a site of interest. These paths can then be used to survey and map underwater sites of interest using photogrammetry. Paths are generated in a virtual world by a single-query probabilistic roadmap algorithm that quickly covers the configuration space and generates small maps with good coverage. The objective function used to compute the paths measures an approximate view coverage by casting rays from the virtual view to test for intersections with the region of interest, with added weight for views with high information gain. The motion planning algorithm was implemented in a virtual world that includes the ability to test paths and acquire views of the virtual scene for evaluation prior to real world deployment. To measure the effectiveness of our paths versus the commonly used pre-packaged lawnmower paths, photogrammetry reconstructions were compared using CloudCompare. The 3D reconstructions created from the views along the paths generated by our algorithm were more detailed and showed better coverage, creating point clouds with a mean distance between points ranging from 1.5 to 2.3 times better than that of the lawnmower pattern.
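The ray-casting objective described in this abstract lends itself to a compact illustration. The toy function below approximates the view coverage of a camera pose by the fraction of cast rays that intersect a spherical stand-in for the region of interest; it omits the information-gain weighting, and the function name, the sphere proxy and the crude field-of-view model are our own assumptions rather than the authors' implementation.

```python
import numpy as np

def view_coverage(cam_pos, cam_dir, roi_center, roi_radius,
                  n_rays=64, fov=0.5, rng=0):
    """Fraction of rays from the camera that hit a spherical region of
    interest. Rays are random perturbations of the view axis, standing
    in for rays cast through the image plane."""
    rng = np.random.default_rng(rng)
    axis = cam_dir / np.linalg.norm(cam_dir)
    dirs = axis + rng.normal(scale=fov / 2, size=(n_rays, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    # ray-sphere test: roots of t^2 + 2 t (u . oc) + |oc|^2 - r^2 = 0
    oc = cam_pos - roi_center
    b = dirs @ oc
    disc = b**2 - (oc @ oc - roi_radius**2)
    forward = -b + np.sqrt(np.maximum(disc, 0)) > 0   # far root in front
    return float(np.mean((disc >= 0) & forward))
```

A planner could then score candidate poses by this fraction, preferring poses that see more of the site.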

Paper Nr: 17
Title:

Parallel Reconstruction of Quad Only Meshes from Volume Data

Authors:

Roberto Grosso and Daniel Zint

Abstract: We present a method to reconstruct quad-only meshes from volume data which mainly consists of two steps: reconstruction of a quad-only mesh and topological simplification to reduce the number of irregular vertices. A novel algorithm is described that computes Dual Marching Cubes (DMC) meshes without using lookup tables. The meshes are topologically consistent across cell borders, i.e. they are watertight. The output of the algorithm is a quad-only mesh stored in a halfedge data structure. Due to the transitions between voxel layers in volume data, meshes have numerous quad elements with vertices of valence 3-X-3-Y, where X, Y ≥ 5, and 3-3-3-3. Hence, we simplify the mesh by eliminating these elements wherever possible. Finally, we briefly describe a CUDA implementation of the algorithms, which allows processing huge amounts of data on the GPU at almost interactive rates.

Paper Nr: 18
Title:

Automatic Generation of Affective 3D Virtual Environments from 2D Images

Authors:

Alberto Cannavò, Arianna D’Alessandro, Daniele Maglione, Giorgia Marullo, Congyi Zhang and Fabrizio Lamberti

Abstract: Today, a wide range of domains encompassing, e.g., movie and video game production, virtual reality simulations, and augmented reality applications make massive use of 3D computer-generated assets. Although many graphics suites already offer a large set of tools and functionalities to manage the creation of such contents, they are usually characterized by a steep learning curve. This aspect could make it difficult for non-expert users to create 3D scenes for, e.g., sharing their ideas or for prototyping purposes. This paper presents a computer-based system that is able to generate a possible reconstruction of the 3D scene depicted in a 2D image, by inferring the objects, materials, textures, lights, and camera required for rendering. The integration of the proposed system into a well-known graphics suite enables further refinements of the generated scene using traditional techniques. Moreover, the system allows users to explore the scene in an immersive virtual environment to better understand the current objects' layout, and provides the possibility to convey emotions through specific aspects of the generated scene. The paper also reports the results of a user study that was carried out to evaluate the usability of the proposed system from different perspectives.

Paper Nr: 22
Title:

Transparent Parallelization of Enrichment Operations in Geometric Modeling

Authors:

Pierre Bourquat, Hakim Belhaouari, Philippe Meseure, Valentin Gauthier and Agnès Arnould

Abstract: This paper presents an approach to automatically and transparently parallelize algorithms that build 2D or 3D virtual objects in geometric modeling. In particular, we show that subdivision and Iterated Function System constructions can be parallelized without any explicit parallelization study by their developer. These operations are described in the Jerboa framework, where each operation is expressed as a graph transformation and objects are topologically described using generalized maps. All transformations are handled by a generic engine that can apply structure changes in parallel. The obtained results allow any designer of virtual environments to rely on modern multi-core and multi-processor architectures to obtain faster constructions of complex objects without any expertise in parallelism.

Paper Nr: 32
Title:

Creating Curvature Adapted Subdivision Control Meshes from Scan Data

Authors:

Simon Kloiber and Ursula H. Augsdörfer

Abstract: Often, designers have real-life models which need to be converted to a mathematical representation for further processing. For the designer to be able to manipulate the data sensibly and in a controlled manner, the number of data points has to be reduced. However, if the new, reduced representation of the shape is sparse everywhere, high-frequency detail in the model will be lost. In this work we modify an existing quad meshing algorithm to convert a dense triangle mesh capturing the shape of the real-life model into a quad-dominant mesh of varying density. Our distribution of vertices allows us to represent high-frequency features in the surface without unnecessarily increasing the density of the mesh elsewhere. Our quad mesh approximates the scan data up to a predefined error margin. This quad mesh is then transformed into a subdivision control mesh whose limit subdivision surface closely resembles the scan data.

Paper Nr: 46
Title:

Single Sketch Image based 3D Car Shape Reconstruction with Deep Learning and Lazy Learning

Authors:

Naoki Nozawa, Hubert H. Shum, Edmond L. Ho and Shigeo Morishima

Abstract: Efficient car shape design is a challenging problem in both the automotive industry and the computer animation/games industry. In this paper, we present a system to reconstruct a 3D car shape from a single 2D sketch image. To learn the correlation between 2D sketches and 3D cars, we propose a Variational Autoencoder deep neural network that takes a 2D sketch and generates a set of multi-view depth and mask images, which form a more effective representation compared to 3D meshes and can be effectively fused to generate a 3D car shape. Since global models like deep learning have limited capacity to reconstruct fine-detail features, we propose a local lazy learning approach that constructs a small subspace based on a few relevant car samples in the database. Due to the small size of such a subspace, fine details can be represented effectively with a small number of parameters. With a low-cost optimization process, a high-quality car shape with detailed features is created. Experimental results show that the system consistently creates highly realistic cars of substantially different shape and topology.

Short Papers
Paper Nr: 1
Title:

RANSAC for Aligned Planes with Application to Roof Plane Detection in Point Clouds

Authors:

Steffen Goebbels and Regina Pohle-Fröhlich

Abstract: Random Sample Consensus (RANSAC) is a standard algorithm for recognizing planes in point clouds. It does not require additional context information. However, it might be applied in situations where results can be improved based on domain knowledge. Such a situation is 3D building reconstruction from airborne laser scanning data. The normals of many roof facets are orthogonal to footprint vectors. This specific property helps to estimate roof planes more precisely. The paper describes the adapted RANSAC algorithm. It can also be used in other applications in which planes are aligned to supporting vectors.
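The alignment constraint has a neat algorithmic consequence: if a candidate plane must contain the footprint direction (normal orthogonal to it), two sampled points suffice to determine it, instead of the three points of generic plane RANSAC. The sketch below is our own illustrative reading of this idea, not the authors' implementation; the function name, threshold and interface are assumptions.

```python
import numpy as np

def ransac_aligned_plane(points, footprint_dir, n_iter=500, thresh=0.05, seed=None):
    """Find the plane with most inliers among planes whose normal is
    orthogonal to footprint_dir, i.e. planes containing that direction.
    Two sampled points plus the direction fix each candidate plane."""
    rng = np.random.default_rng(seed)
    f = footprint_dir / np.linalg.norm(footprint_dir)
    best_count, best_plane = 0, None
    for _ in range(n_iter):
        i, j = rng.choice(len(points), size=2, replace=False)
        n = np.cross(points[j] - points[i], f)   # normal orthogonal to f
        norm = np.linalg.norm(n)
        if norm < 1e-12:          # degenerate: sampled chord parallel to f
            continue
        n /= norm
        dist = np.abs((points - points[i]) @ n)  # point-to-plane distances
        count = int(np.count_nonzero(dist < thresh))
        if count > best_count:
            best_count, best_plane = count, (n, points[i])
    return best_count, best_plane
```

For roof-plane detection one would run this once per footprint edge direction and keep the best-supported planes.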

Paper Nr: 23
Title:

Matching-aware Shape Simplification

Authors:

Enrico S. Miranda, Rogério C. Costa, Paulo Dias and José Moreira

Abstract: Current research has shown significant interest in spatio-temporal data. The acquisition of spatio-temporal data usually begins with the segmentation of the objects of interest from raw data, which are then simplified and represented as polygons (contours). However, the simplification is usually performed individually, i.e., one polygon at a time, without considering additional information that can be inferred by looking at the correspondences between polygons obtained from consecutive snapshots. This can reduce the quality of polygon matching, as the simplification algorithm may remove vertices that would be relevant for the matching and keep other, less relevant ones. This causes undesired situations like unmatched vertices and multiply matched vertices. This paper presents a new methodology for polygon simplification that operates on pairs of shapes. The aim is to reduce the occurrence of unmatched and multiply matched vertices, while maintaining vertices relevant for image representation. We evaluated our method on synthetic and real-world data and performed an extensive comparative study with two well-known simplification algorithms. The results show that our method outperforms current simplification algorithms, as it reduces the number of unmatched vertices and of vertices with multiple matches.

Paper Nr: 39
Title:

Open Problems in 3D Model and Data Management

Authors:

René Berndt, Carl Tuemmler, Christian Kehl, Mario Aehnelt, Tim Grasser, Andreas Franek and Torsten Ullrich

Abstract: In interdisciplinary, cooperative projects that involve different representations of 3D models (such as CAD data and simulation data), a version problem can occur: different representations and parts have to be merged to form a holistic view of all relevant aspects. The individual partial models may be exported by and modified in different software environments. These modifications are a recurring activity and may be carried out again and again during the progress of the project. This position paper investigates the version problem; furthermore, this contribution is intended to stimulate discussion on how the problem can be solved.

Paper Nr: 42
Title:

Image-based Material Editing for Making Reflective Objects Fluorescent

Authors:

Daichi Hidaka and Takahiro Okabe

Abstract: Fluorescent materials give us a unique sense of material quality, appearing self-luminous, because they absorb light at certain wavelengths and then emit light at longer wavelengths. Existing methods for image-based material editing make objects in an image specular, translucent, or transparent, but they do not address fluorescent materials. In this paper, we propose a method for making reflective objects in a single input image fluorescent by adding photorealistic fluorescent components to the objects of interest. Specifically, we show that photometrically consistent fluorescent components can be approximately represented by using the 3-band (RGB) spectral irradiance on the surface of a reflective object, and then compute the fluorescent components on the basis of intrinsic image decomposition, without explicitly estimating the object’s shape and the light sources illuminating it from the input image. We conducted a number of experiments using both synthetic and real images, and confirmed that our proposed method is effective for making reflective objects fluorescent.

Paper Nr: 47
Title:

Study on the Average Size of the Longest-Edge Propagation Path for Triangulations

Authors:

Oliver-Amadeo V. Huayta and María-Cecilia Rivara

Abstract: For a triangle t in a triangulation τ, the “longest-edge propagation path” Lepp(t) is a finite sequence of neighboring triangles with increasing longest edges. In this paper we study mathematical properties of the LEPP construct. We prove that the average LEPP size over triangulations of random point sets is between 2 and 4, with standard deviation less than or equal to √6. Then, by using analysis of variance and regression analysis, we study the statistical behavior of the average LEPP size for triangulations of random point sets obtained with uniform, normal, bivariate normal and exponential distributions. We provide experimental results verifying that the average LEPP size is in agreement with the analytically derived one.
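The Lepp(t) construct defined above is short enough to state as code. The sketch below assumes a hypothetical triangulation interface (longest_len giving a triangle's longest-edge length, neighbor_across_longest giving the neighbor sharing that edge, or None on the boundary); it illustrates the definition and is not the authors' implementation.

```python
def lepp(t, longest_len, neighbor_across_longest):
    """Longest-edge propagation path starting at triangle t: follow the
    neighbor sharing the current longest edge while the longest-edge
    length strictly increases; the path ends at the boundary or at a
    terminal pair of triangles sharing their common longest edge."""
    path = [t]
    while True:
        nb = neighbor_across_longest(t)
        if nb is None:                         # longest edge on the boundary
            return path
        if longest_len(nb) <= longest_len(t):  # terminal pair reached
            path.append(nb)
            return path
        path.append(nb)
        t = nb
```

The average of len(lepp(t)) over all triangles of a random triangulation is the quantity whose bounds (between 2 and 4) the paper establishes.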

Paper Nr: 50
Title:

Towards Automatic CAD Modeling from 3D Scan Sketch based Representation

Authors:

Abd R. Shabayek, Djamila Aouada, Kseniya Cherenkova and Gleb Gusev

Abstract: This paper proposes a novel approach to convert a 3D scan to its CAD counterpart. The objective is to extract intermediate sketch planes that represent the input scan well and are close enough to the original design intent. These sketches can then be converted into CAD models automatically, thanks to their faithful representation of the input geometry. One objective is to avoid incorporating user/company-dependent content in the CAD reconstruction process. The intermediate representation shall be directly supported in any CAD environment to boost the designer’s work without the need for supplementary steps (model conversion, automatic feature recognition). Nowadays, it is common to digitize an object and reconstruct its geometric primitives. However, this reconstruction contains only geometry. In the literature, the final goal might be met by recovering the modeling tree itself, by means of automatic feature recognition, and converting to the proper format of a specific CAD software package. However, the constructed tree and its conversion introduce issues in the reconstruction process. The definition of an exact modeling tree and the production of a meaningful final CAD model are rather hard to obtain. This imposes a rather inefficient working method, thereby heavily impacting the designer’s modeling skills.

Paper Nr: 3
Title:

Context-aware Patch-based Method for Façade Inpainting

Authors:

Benedikt Kottler, Dimitri Bulatov and Zhang Xingzi

Abstract: Realistic representations of 3D urban scenes are an important aspect of scene understanding and have many applications. Given untextured polyhedral Level-of-Detail 2 (LoD2) building models and images containing façade textures, occlusions caused by foreground objects are an essential disturbing factor of the façade textures. We developed a modification of a well-known patch-based inpainting method and used knowledge about façade details to improve the inpainting of façade occlusions. Our modification focuses on suppressing undesired, superfluous repetitions of textures. To achieve this, a coarse inpainting result produced by a structure-based method is used to influence the choice of the best patch so that homogeneous regions are preferred. The coarse inpainting is calculated using context knowledge and average color instead of the traditionally applied arbitrary structural inpainting. Our modification furthermore introduces a parameter that allows weighting the influence of the coarse inpainting. A parameter study shows that this parameter can be chosen intuitively and does not require any parameter-choice method. The cleaned façade textures could be successfully integrated into the accordingly adjusted building models, thus upgrading them to LoD3.

Paper Nr: 20
Title:

Graph based Method for Online Handwritten Character Recognition

Authors:

Rabiaa Zitouni, Hala Bezine and Najet Arous

Abstract: In this research, we propose a novel graph-based approach for online handwritten character recognition. Unlike the most well-known online handwriting recognition methods, which are based on statistical representations, we set forward a new approach based on structural representation to overcome the inherent deformations of handwritten characters. An Attributed Relational Graph (ARG) allows the direct labeling of the nodes (strokes) and edges (relationships) of a graph modeling the input character. Each node is characterized by a set of fuzzy membership degrees describing its properties (type, size). The fuzzy description is employed in order to guarantee more robustness against uncertainty, ambiguity and vagueness. ARG edges stand for spatial relationships between different strokes. At a subsequent stage, a tree-search-based optimal matching algorithm is explored, which searches for character structures, i.e. the minimum-cost matching of nodes. Experiments performed on the ADAB and IRONOFF datasets reveal promising results. In particular, the comparison with the state of the art demonstrates the significance of the proposed system.

Paper Nr: 28
Title:

Efficient Visualization and Set-theoretic Difference Operations for Accurate Geometric Modeling in Real-time Simulations

Authors:

Alexander Leutgeb, Michael F. Hava and Alexander H. Leitner

Abstract: We present a novel approach which supports efficient visualization and set-theoretic difference operations for accurate geometric modeling in real-time simulations. The geometric operands can be in any representation as long as they are watertight and convertible to an oriented point cloud. A novel region-growing-based fitting routine converts the oriented point clouds into a watertight, piecewise quadratic implicit representation. During set-theoretic modeling these implicit representations are mapped to the fixed-depth hierarchical grid of the resulting geometric model. Thereby, a surface elimination algorithm removes parts not contributing to the final surface. This guarantees that the number of already performed modeling operations has only a minor performance impact on the algorithms processing the model. For visualization, a novel ray-casting-based approach was developed, enabling interactive frame rates at Full-HD screen resolutions. The evaluation of the developed method demonstrates its modeling performance and high geometric accuracy through the simulation of subtractive manufacturing examples.

Area 2 - Rendering

Short Papers
Paper Nr: 12
Title:

Outdoor Illumination Estimation for Mobile Augmented Reality: Real-time Analysis of Shadow and Lit Surfaces to Measure the Daylight Illumination

Authors:

Fulvio Bertolini and Claus B. Madsen

Abstract: A realistic illumination model in Augmented Reality (AR) applications is crucial for perceiving virtual objects as real. In order to correctly blend digital content with the physical world it is necessary to measure, in real time, the illumination present in the scene surrounding the user. The paper proposes a novel solution for real-time estimation of outdoor illumination conditions, based on the video stream from the camera on handheld devices. The problem is formulated in a radiometric framework, showing how the reflected radiance from the surface maps to pixel values, and how the reflected radiance relates to surface reflectance and the illumination environment. From this we derive how to estimate the color and intensity of the sun and sky illumination, respectively, using areas in the video stream that are in direct sunlight and in shadow. The presented approach allows for rendering augmentations that adapt in real-time to dynamically changing outdoor illumination conditions.
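The lit-versus-shadow idea in this abstract can be illustrated with a worked simplification of our own (not the paper's full radiometric model): for a Lambertian surface with the same albedo ρ visible both in sun and in shadow, lit pixels ≈ ρ·(E_sun + E_sky) and shadowed pixels ≈ ρ·E_sky, so the per-channel ratio of mean pixel values recovers E_sun/E_sky with the albedo cancelling out.

```python
import numpy as np

def sun_sky_ratio(lit_pixels, shadow_pixels):
    """Per-channel ratio E_sun / E_sky, assuming Lambertian reflection,
    identical albedo in both pixel sets, and a linear camera response:
    lit = rho * (E_sun + E_sky), shadow = rho * E_sky."""
    lit = np.asarray(lit_pixels, dtype=float).mean(axis=0)
    shadow = np.asarray(shadow_pixels, dtype=float).mean(axis=0)
    return lit / shadow - 1.0
```

In practice the two pixel sets would come from segmenting sunlit and shadowed areas of the same surface in the video stream.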

Paper Nr: 15
Title:

A Real-time Ultrasound Rendering with Model-based Tissue Deformation for Needle Insertion

Authors:

Charles Barnouin, Florence Zara and Fabrice Jaillet

Abstract: In the course of developing a training simulator for puncture, a novel approach is proposed to render the ultrasound (US) image of any 3D model in real time. It is combined with the deformation of the soft tissues (due to their interactions with a needle and a probe) according to their physical properties. Our solution reproduces the usual US artifacts at a low cost. It combines the use of textures and ray tracing with a new way to efficiently render fibrous tissues. Deformations are handled in real time on the GPU using displacement functions. Our approach goes beyond the usual bottleneck of real-time deformation of 3D models in interactive US simulation. The display of tissue deformation and the possibility to tune the 3D morphotypes, tissue properties, needle shape, or even specific probe characteristics are clear advantages in such a training environment.

Paper Nr: 19
Title:

Atlas Shrugged: Device-agnostic Radiance Megatextures

Authors:

Mark Magro, Keith Bugeja, Sandro Spina, Kevin Napoli and Adrian De Barro

Abstract: This paper proposes a novel distributed rendering pipeline for highly responsive high-fidelity graphics based on the concept of device-agnostic radiance megatextures (DARM), a network-based out-of-core algorithm that circumvents VRAM limitations without sacrificing texture variety. After an automatic precomputation stage generates the sparse virtual texture layout for rigid bodies in the scene, the server end of the pipeline populates and updates surface radiance in the texture. On demand, connected clients receive geometry and texture information selectively, completing the pipeline by asynchronously reconstituting these data into a frame using GPUs with minimal functionality. A client-side caching system makes DARM robust to network fluctuations. Furthermore, users can immediately start consuming the service without the need for lengthy downloads or installation processes. DARM was evaluated on its effectiveness as a vehicle for bringing hardware-accelerated ray tracing to various device classes, including smartphones and single board computers. Results show that DARM is effective at allowing these devices to visualise high quality ray traced output at high frame rates and low response times.

Paper Nr: 45
Title:

Machine Learning is the Solution Also for Foveated Path Tracing Reconstruction

Authors:

Atro Lotvonen, Matias Koskela and Pekka Jääskeläinen

Abstract: Real-time photorealistic rendering requires a lot of computational power. Foveated rendering reduces the work by focusing the effort where the user is looking, but the very sparse sampling in the periphery requires fast reconstruction algorithms with good quality. The problem is even more complicated in the field of foveated path tracing, where the sparse samples are also noisy. In this position paper we argue that machine learning and data-driven methods play an important role in the future of real-time foveated rendering. In order to provide initial evidence for this position, we propose a preliminary machine learning based method which is able to improve the reconstruction quality of a foveated path-traced image by using spatio-temporal input data. Moreover, the method is able to run in the same reduced foveated resolution as the path tracing setup. The reconstruction using the preliminary network takes about 2.9 ms per 658×960 frame on a GeForce RTX 2080 Ti GPU.

Paper Nr: 48
Title:

Challenges of Visually Realistic Augmented Reality

Authors:

Claus B. Madsen

Abstract: Why is achieving real-time handheld visually realistic Augmented Reality so hard? What are the main challenges? We present an overview of these challenges, and discuss the most important issues involved in developing AR that automatically adapts to changes in the environment, specifically the illumination conditions. We then move on to present how we see a path of research going forward for the immediate future; a path based partly on recent advances in real-time 3D modelling and partly on lessons learned from a decade of Augmented Reality illumination estimation research.

Paper Nr: 2
Title:

Adoption of Sparse 3D Textures for Voxel Cone Tracing in Real Time Global Illumination

Authors:

Igor Aherne, Richard Davison, Gary Ushaw and Graham Morgan

Abstract: The enhancement of 3D scenes using indirect illumination brings increased realism. As indirect illumination is computationally expensive, significant research effort has been made in lowering resource requirements while maintaining fidelity. State-of-the-art approaches, such as voxel cone tracing, exploit the parallel nature of the GPU to achieve real-time solutions. However, such approaches require bespoke GPU code which is not tightly aligned to the graphics pipeline in hardware. This results in a reduced ability to leverage the latest dedicated GPU hardware graphics techniques. In this paper we present a solution that utilises GPU-supported sparse 3D texture maps. In doing so we provide an engineered solution that is more integrated with the latest GPU hardware than existing approaches to indirect illumination. We demonstrate that our approach not only provides a more efficient solution, but will also benefit from the planned future enhancements of sparse 3D texture support expected in GPU development.

Area 3 - Animation and Simulation

Full Papers
Paper Nr: 9
Title:

Expanded Virtual Puppeteering

Authors:

Luiz Velho and Bernard Lupiac

Abstract: This work proposes a framework for digital puppeteering performances using solely the performer’s bare hands. The framework relies on Unity as a base, and hand information is captured from a Leap Motion device. The performer employs a gesture interaction system and precise hand movements to manipulate the puppet in a number of different manners. The audience can then view the puppet directly on a big screen, or using virtual or augmented reality headsets, which allows rich interactions.

Area 4 - Interactive Environments

Full Papers
Paper Nr: 4
Title:

Impact of First Person Avatar Representation in Assembly Simulations on Perceived Presence and Acceptance

Authors:

Jennifer Brade, Alexander Kögel, Christian Fuchs and Philipp Klimant

Abstract: This article reports the impact of three different avatar representations on perceived presence and acceptance during an assembly task. The conducted experiment focuses not on perceived virtual body ownership, but on the limited visibility of the virtual body during a task at a workbench, i.e. the view of hands and forearms. The initial question is whether a detailed avatar, which is time-consuming to develop, is needed during a virtual assembly task, or whether the impact on presence and acceptance caused by the kind of avatar visualisation is negligible. Therefore, three different kinds of avatar representations were used to examine the influence of the avatar on perceived presence and acceptance. The results of the experiment show that there are no significant differences between the three kinds of avatar representations. All three avatars reach high values for presence and acceptance. Therefore, a partial-body representation is sufficient to obtain a high level of presence and acceptance in scenarios which focus on manual tasks on or above a workbench.

Paper Nr: 8
Title:

A Unified Design & Development Framework for Mixed Interactive Systems

Authors:

Guillaume Bataille, Valérie Gouranton, Jérémy Lacoche, Danielle Pelé and Bruno Arnaldi

Abstract: Mixed reality, natural user interfaces and the internet of things are complementary computing paradigms. They converge towards a new form of interactive systems named mixed interactive systems. Because of their exploding complexity, mixed interactive systems induce new challenges for designers and developers. We need new abstractions of these systems in order to describe their real-virtual interplay. We also need to break mixed interactive systems down into pieces in order to segment their complexity into comprehensible subsystems. This paper presents a framework to enhance the design and development of these systems. We propose a model unifying the paradigms of mixed reality, natural user interfaces and the internet of things. Our model decomposes a mixed interactive system into a graph of mixed entities. Our framework implements this model, which facilitates interactions between users, mixed reality devices and connected objects. In order to demonstrate our approach, we present how designers and developers can use this framework to develop a mixed interactive system dedicated to smart building occupants.

Paper Nr: 31
Title:

Design of a Motion-based Evaluation Process in Any Unity 3D Simulation for Human Learning

Authors:

Djadja D. Djadja, Ludovic Hamon and Sébastien George

Abstract: This paper discusses the usability of a generic method for the evaluation of the user activity in Virtual Learning Environments (VLE) and its implementation with Unity. In the context of motion-based tasks, the learning process relies on the observation and imitation of the task demonstrated by the teacher. The learner’s task is compared to the teacher’s in terms of: (a) the motion shapes of the user and the manipulated artefacts and (b) the sequential order of 3D checkpoints that the user must collide with. The integration of the evaluation system into any existing VLE raises challenges regarding the system architecture and the Human Computer Interface used to set up the evaluation process. A usability test related to the design of this process is conducted for a pool shooting, a dart throwing and a letter writing simulation. The preliminary results show that: (i) the integration of an existing VLE into the evaluation system is feasible despite issues related to the interaction assets and (ii) while all participants are satisfied by their designed evaluation process for pool shooting and dart throwing, they were unable to set up a satisfying evaluation for letter writing due to scale issues.

Paper Nr: 40
Title:

MIST: A Multi-sensory Immersive Stimulation Therapy Sandbox Room

Authors:

Bruno Ferreira, Gustavo Assunção and Paulo Menezes

Abstract: Multi-Sensory Stimulation Environments are a form of therapy in which a patient is exposed to a set of controlled stimuli on various sensory modalities, including visual and auditory, in order to induce some desired physical or mental state. Despite their success, these environments are still not widely known or available to everyone today. However, through the use of virtual reality, that availability can be achieved and its effects even boosted, thanks to the observed benefits of VR usage in conventional therapy. Thus, in this work we propose a virtual reality implementation of an immersive controlled stimulation environment, customizable and adaptive to a user’s response to stimulus, which can be used as a simple mobile app added to a phone VR headset. Initial experimentation with the platform has been very positive, making it highly promising for a future validation of its therapeutic use.

Paper Nr: 44
Title:

Interactive Axis-based 3D Rotation Specification using Image Skeletons

Authors:

Xiaorui Zhai, Xingyu Chen, Lingyun Yu and Alexandru Telea

Abstract: Specifying 3D rotations of shapes around arbitrary axes is not easy to do. We present a new method for this task, based on the concept of natural local rotation axes. We define such axes using the 3D curve skeleton of the shape of interest. We compute effective and efficient approximations of such skeletons using the 2D projection of the shape. Our method allows users to specify 3D rotations around parts of arbitrary 3D shapes with a single click or touch, is simple to implement, works in real time for large scenes, can be easily added to any OpenGL-based scene viewer, and can be used on both mouse-based and touch interfaces.

Short Papers
Paper Nr: 16
Title:

An Interactive Application Framework for Natural Parks using Serious Location-based Games with Augmented Reality

Authors:

Liliana Santos, Nuno Silva, Rui Nóbrega, Rubim Almeida and António Coelho

Abstract: Park visitors and tourists, in general, seek new experiences, leading to a growing search for ways to create more memorable experiences. Some technological solutions, such as Augmented Reality, have proved that they can be useful to create more immersive and interactive experiences, both in entertainment and education. An application with Augmented Reality, using location-based services in a gamified way, can create pleasant and entertaining outdoor experiences without losing its pedagogical ability, making it a promising fit for a nature park. We propose a conceptual framework for creating these mobile applications for nature parks, from which a mobile application was prototyped with location-based services and augmented reality interactive experiences, with the purpose of disseminating scientific knowledge about the fauna and flora of a nature park. Gaming elements are also introduced in the application’s design to try to improve the engagement and involvement in the various activities of the application and its contents. User tests were performed during the development of the prototype and with the final version. The results allow us to conclude that this type of application can improve the visitors’ experience while also improving the dissemination of scientific knowledge.

Paper Nr: 21
Title:

Integrating Assembly Process Design and VR-based Evaluation using the Unreal Engine

Authors:

Simon Kloiber, Christoph Schinko, Volker Settgast, Martin Weinzerl, Tobias Schreck and Reinhold Preiner

Abstract: To compete in industrial production and assembly design, companies must implement fast and efficient workflows for the design of assembly processes. To date, these workflows comprise multiple stages that typically cover a heterogeneous set of designer competences, used tools and data. We present a concept for an integrated assembly process design workflow with VR-based evaluation and training methods leveraging the flexibility and functionality of a modern game engine. Our approach maps the required tools onto off-the-shelf features of these engines. This ensures an easy integration of our workflow into existing industry processes and allows quick results, which support fast prototyping. Furthermore, Virtual Reality based previews and evaluations significantly reduce the need for physical workstation prototypes, allowing for quicker feedback and evaluation and early customer integration. We apply and evaluate our concept on an industrial assembly use case for automotive traction batteries and give detailed insights into its adoption in practice and the advantages over proprietary implementations.

Paper Nr: 24
Title:

An Indoor Navigation System for Reduced Mobility Users

Authors:

Pedro Cardoso, Ana P. Cláudio and Dulce Domingos

Abstract: Indoor navigation systems help pedestrians to find the best paths inside buildings. Existing systems only occasionally respond to the needs of reduced mobility users, by avoiding stairs. However, this is an obvious requirement, unlike others that are almost invisible to people without restrictions. This paper presents the results of the development steps of an indoor navigation system for reduced mobility users. In addition, we systematize the relevant information about the indoor environment that must be gathered to instantiate the requirements for a specific case. Finally, the paper overviews the developed prototype and describes its evaluation.

Paper Nr: 33
Title:

Localization Limitations of ARCore, ARKit, and Hololens in Dynamic Large-scale Industry Environments

Authors:

Tobias Feigl, Andreas Porada, Steve Steiner, Christoffer Löffler, Christopher Mutschler and Michael Philippsen

Abstract: Augmented Reality (AR) systems are envisioned to soon be used as smart tools across many Industry 4.0 scenarios. The main promise is that such systems will make workers more productive when they can obtain additional situationally coordinated information both seamlessly and hands-free. This paper studies the applicability of today’s popular AR systems (Apple ARKit, Google ARCore, and Microsoft Hololens) in such an industrial context (large area of 1,600 m², long walking distances of 60 m between cubicles, and dynamic environments with volatile natural features). With an elaborate measurement campaign that employs a sub-millimeter accurate optical localization system, we show that for such a context, i.e., when a reliable and accurate tracking of a user matters, the Simultaneous Localization and Mapping (SLAM) techniques of these AR systems are a showstopper. Out of the box, these AR systems are far from useful even for normal motion behavior. They accumulate an average error of about 17 m per 120 m, with a scaling error of up to 14.4 cm/m that is quasi-directly proportional to the path length. By adding natural features, the tracking reliability can be improved, but not enough.

Paper Nr: 34
Title:

Generative Choreographies: The Performance Dramaturgy of the Machine

Authors:

Esbern T. Kaspersen, Dawid Górny, Cumhur Erkut and George Palamas

Abstract: This paper presents an approach for a full body interactive environment in which performers manipulate virtual actors in order to augment a live performance. The aim of this research is to explore the role of generative animation in serving an interactive performance, as a dramaturgical approach in new media. The proposed system consists of three machine learning modules encoding a human’s movement into generative dance, performed by an avatar in a virtual world. First, we provide a detailed description of the technical aspects of the system. Afterwards, we discuss the critical aspects summarized on the basis of dance practice and new media technologies. In the process of this discussion, we emphasize the ability of the system to conform with a movement style and communicate choreographic semiotics, affording artists new ways of engaging with their audiences.

Paper Nr: 37
Title:

Preliminary Study on the Use of Off-the-Shelf VR Controllers for Vibrotactile Differentiation of Levels of Roughness on Meshes

Authors:

Ivan Nikolov, Jens S. Høngaard, Martin Kraus and Claus B. Madsen

Abstract: With the introduction of new specialized hardware, Virtual Reality (VR) has gained more and more popularity in recent years. VR is particularly immersive if suitable auditory and haptic feedback is provided to users. Many proposed forms of haptic feedback require custom hardware components that are often bulky, costly, and/or require lengthy setup times. We explored the possibility of using the built-in vibrotactile feedback of HTC Vive controllers to simulate the sensation of interacting with surfaces with varying degrees of roughness. We conducted initial testing on the proposed system, which shows promising results as users could accurately and within a short time discern the amount of roughness of 3D models based on the vibrotactile feedback alone.

Paper Nr: 38
Title:

On the Preference for Travel by Steering in a Virtual Reality Game

Authors:

Martin Kraus

Abstract: Travel is one of the most important tasks in virtual reality (VR) experiences. Paradoxically, the most popular travel techniques in virtual reality games are known to be more likely to cause cybersickness than some of the less popular travel techniques. Recently, at least one VR gaming company shared quantitative data on this issue. In an attempt to explain this data, this work argues that steering techniques might result in stronger immersion, better physical ergonomics, and more pleasure than offered by teleportation techniques. Furthermore, trends are identified that might reduce the preference for steering techniques in the future. The presented discussion of current and future preferences for steering techniques in VR games might help to better understand and design for the needs of VR players.

Paper Nr: 51
Title:

Virtual Reality Environment for the Validation of Bone Fracture Reduction Processes

Authors:

J. J. Jiménez-Delgado, A. Calzado-Martínez, F. D. Pérez-Cano and A. Luque-Luque

Abstract: This work presents a virtual environment for the validation by experts of computer-assisted bone fracture reduction. This environment is composed of VR glasses and 3D controllers (HTC Vive) that allow interaction and immersion in the scene in a realistic way. The virtual environment developed allows loading fractured bone models (fragments) so that the specialist performs a virtual fracture reduction and its results can be used for the validation of algorithms and assisted reduction techniques. Once the fragments are loaded, the user can perform an interactive reduction of the fracture, visualizing the fragments in 3D from different views, moving them, and placing them in space to observe the reduction in detail. Once completed, the system allows the reduction to be exported so that it can be compared with other fracture reduction systems. The system has been tested by specialists in traumatology and a usability study has been carried out. Finally, the system has been empirically validated and used to compare the performance of other computer-assisted reduction systems.

Paper Nr: 5
Title:

Scenario-based VR Framework for Product Design

Authors:

Romain Terrier, Valérie Gouranton, Cédric Bach, Nico Pallamin and Bruno Arnaldi

Abstract: Virtual Reality (VR) applications are promising solutions in supporting design processes across multiple domains. In complex systems (e.g., machines, cities, interior layouts), VR applications are used alongside Computer Assisted Design (CAD) systems which are (1) rigid (i.e., they lack customization), and (2) limit the design iterations. VR systems need to address these shortcomings so that they can become widespread and adaptable across design domains. We thus propose a new VR Framework based on scenarios and a new generic theoretical design model to assist developers in creating versatile and personalized applications for designers. The new generic theoretical model describes the common design activities shared by many design domains, and the scenario depicts the design model to allow design iterations in VR. Through scenarios, the VR Framework enables creating customized copies of the generic design process to fulfill the needs of each design domain. The customization capability of our solution is illustrated on a use case.

Paper Nr: 36
Title:

Fog of Story: Design, Implementation and Evaluation of a Post-processing Technique to Guide Users’ Point of View in Cinematic Virtual Reality (cVR) Experiences

Authors:

Jose L. Soler-Dominguez and Carlos Gonzalez

Abstract: The impact of Virtual Reality (VR) as a narrative medium is growing quickly. Unlike traditional films, in cinematic VR (cVR) experiences, even when interactions are usually reduced to navigation, users are free to move the camera at will and could miss relevant scenes while looking at unexpected places inside the virtual environment. Different visual cues have been developed to attract users’ attention and to make them focus on the main narrative stream. Those visual cues usually interfere with the actual storytelling, introducing alien elements and graphically overloading the scene. In this paper, we propose a visual post-processing technique that, applied to a VR camera, will guide the user to look at where the relevant narrative events are expected to happen using dynamic visual layers. This technique, narratively aseptic, could be applied to different storytelling scenarios and is based on the Gaussian blur effect: the greater the angle between the user’s vision and the area of interest, the more blurred the content will appear. Moreover, a visual guide is displayed to help the user know at every moment the way to the area of interest. GPU shaders are used so as not to affect performance. Additionally, metrics will be proposed in order to measure the effects of this technique on presence and agency, the most significant subjective parameters of User Experience in VR.

Paper Nr: 49
Title:

3D Augmented Reality Tangible User Interface using Commodity Hardware

Authors:

Dimitris Chamzas and Konstantinos Moustakas

Abstract: During the last years, the emerging field of Augmented & Virtual Reality (AR-VR) has seen tremendous growth. An interface that has also become very popular for AR systems is the tangible interface or passive-haptic interface: an interface where users can manipulate digital information with input devices that are physical objects. This work presents a low-cost Augmented Reality system with a tangible interface that offers interaction between the real and the virtual world. The system estimates in real-time the 3D position of a small colored ball (the input device), maps it to the 3D virtual world and then uses it to control the AR application that runs on a mobile device. Using the 3D position of our “input” device allows us to implement more complicated interactivity compared to a 2D input device. Finally, we present a simple, fast and robust algorithm that can estimate the corners of a convex quadrangle. The proposed algorithm is suitable for the fast registration of markers and significantly improves performance compared to the state of the art.

Paper Nr: 57
Title:

Improving Mood for People with Depressive Disorders: Designing and Developing a VR Game

Authors:

Alice J. Lin, Charles B. Chen and Fuhua Cheng

Abstract: Mood disorders can have a significant psychological impact on many groups of people. They cause a severe disease burden and can lead to many effects that decrease the quality of life of an individual. Video games are popular forms of entertainment which can help improve a person’s mood and decrease their depressive symptoms. In this paper, we design and develop a prototype VR game to help people with depressive disorders improve their mood. We have performed preliminary testing showing encouraging results in improving people’s moods.

Paper Nr: 58
Title:

A Taxonomy of Augmented Reality Annotations

Authors:

Inma García-Pereira, Jesús Gimeno, Pedro Morillo and Pablo Casanova-Salas

Abstract: Annotations have become a major trend in Augmented Reality (AR), as they are a powerful way of offering users more information about the real world surrounding them. There are many contributions showing ad hoc tools for annotation purposes, which make use of this type of virtual information. However, there are very few works that have tried to theorize on this subject to propose a generalized work system that solves the problem of incompatibility between applications. In this work, we propose and develop not only a taxonomy but also a data model, which together seek to define the general characteristics that any AR annotation must incorporate. With this, we intend to provide a framework that can be used in the development of any system that makes use of this type of virtual element.