HUCAPP 2020 Abstracts


Area 1 - Agents and Human Interaction

Full Papers
Paper Nr: 9
Title:

The Effects of Ingroup Bias on Public Speaking Anxiety in Virtual Reality

Authors:

Lotte J. Biesmans, Pleun M. van Hees, Lisa E. Rombout, Maryam Alimardani and Eriko Fukuda

Abstract: Virtual agents can be powerful elements in virtual reality (VR) applications, as their influence on user experience is governed by complex social mechanisms. Public speaking offers a relatively high-stakes situation that involves interaction with virtual agents. We examined the effects of an ingroup versus outgroup virtual audience on public speaking anxiety (PSA). Additionally, we looked at how emotional intelligence and VR ecological validity modified these effects. Results indicated that the VR application succeeded in evoking ingroup bias and that, in the ingroup condition, self-reported PSA was related to general PSA. Emotional intelligence was also a significant moderator. Additionally, audience type influenced the level of presence experienced by the user: ingroup audiences resulted in a higher level of presence. This study identifies potential areas of interest for future research: approaches that could influence users in specific and measurable ways in applications involving virtual social interaction, as well as the personalization of these virtual experiences.

Paper Nr: 39
Title:

Intention Indication for Human Aware Robot Navigation

Authors:

Oskar Palinko, Eduardo R. Ramirez, William K. Juel, Norbert Krüger and Leon Bodenhagen

Abstract: Robots are gradually making their way from factory floors to our everyday living environments. Mobile robots are becoming more ubiquitous in many domains: logistics, entertainment, security, healthcare, etc. For robots to enter everyday human environments, they need to understand us and make themselves understood. In other words, they need to make their intentions clear to people. This is especially important for intentions of movement: when robots are starting, stopping, turning left or right, etc. In this study we explore three different ways for a wheeled mobile robot to communicate which way it will go at a hallway intersection: one analogous to automotive signaling, another based on a movement gesture, and, as a third option, a novel light signal. We recorded videos of the robot approaching an intersection with the given methods and asked subjects via a survey to predict the robot’s actions. The car analogy and the turn gesture performed adequately, while the novel light signal performed less well. In the following we describe the setup and outcomes of this study, and give suggestions, based on our findings, on how mobile robots should signal in indoor spaces.

Paper Nr: 42
Title:

Quote Surfing in Music and Movies with an Emotional Flavor

Authors:

Vasco Serra and Teresa Chambel

Abstract: We have all experienced hearing a quote from a movie or song lyrics and, without giving it much thought, immediately knowing where it comes from, like an instant and often emotional memory. The opposite scenario is also very common: we try hard to remember where we know the words from, want to find out, and find it interesting to see them quoted in different contexts. These situations remind us of the importance of quotes in movies and music, which sometimes become more popular than the movie or song they belong to. In fact, quotes, music and movies are among the contents we most treasure, for their emotional impact and their power to entertain and inspire us. In this paper, we present the motivation and the interactive support for quotes in As Music Goes By, giving users the chance to search and surf quotes in music and movies. Users can find, explore and compare related contents, and access quotes in a contextualized way in the movies or song lyrics where they appear. The preliminary user evaluation results, focusing on perceived usefulness, usability and user experience, were very encouraging, proving the concept and informing refinements and new developments that are already being addressed. Users most valued the search, navigation, contextualization and emotional flavor: being able to access and compare quotes in movies and in lyrics, to navigate across movies and songs, and the emotional dimension and its representation for quotes. Future work will take us further, focusing on rich, flexible and contextualized interactive access to quotes, music and movies, aiming for an increased understanding of their meaning and relations, chances for serendipitous discoveries, and opportunities to be inspired and moved by these media that we treasure.

Paper Nr: 44
Title:

Memorable and Emotional Media Moments: Reminding Yourself of the Good Things!

Authors:

Teresa Chambel and Pedro Carvalho

Abstract: Experiencing digital media content is among the most accessible and beloved recreational activities people indulge in. It can promote learning and creative thinking, as well as being an important source of entertainment, with a great impact on our emotions. In particular, it has the power to foster positive emotions and attitudes and to regulate or enhance our mood, contributing to our general sense of wellbeing and quality of life. This paper discusses and explores the potential of media and how it can be harnessed to create a tool that helps individuals become more aware of their emotions and promotes their psychological wellbeing. It discusses the main motivation and background and presents EmoJar, an interactive web application designed and developed to allow users to collect and relive media that have a significant impact and remind them of the good things they experience over time in their lives. EmoJar is based on the Happiness Jar concept, enriched here with media and its emotional impact, as an extension to Media4WellBeing, aligning with the goals and approaches of Positive Psychology and Positive Computing. User evaluation results were very encouraging in terms of perceived usefulness, usability and user experience. Future work will take us further in the aim to provide a useful and interesting digital experience that further supports users in their journey of personal awareness and development.

Short Papers
Paper Nr: 8
Title:

User-centered Approach to Developing Solutions for Electronic Medical Records: Extending EMR Data Entry

Authors:

Viktor Mikhael M. Dela Cruz, Christian E. Pulmano and Ma. Regina Justina E. Estuar

Abstract: The rapid advancement of technology presents the opportunity to digitize practice management. With a doctor-to-patient ratio of 1:33,000, digitizing health records in the Philippines is seen as one solution for providing more efficient health care services. With the deployment of EMRs in the Philippines in its infancy, there is a need to initiate studies on feasibility, usability and user perception. This paper reports findings on the usability of EMRs in a developing economy. Specifically, the System Usability Scale (SUS) was used to assess the usability of an EMR, and interviews were conducted to acquire user feedback. Results of the survey indicated an overall mean SUS score of 70.76, with age and confidence in technology being key deciding factors. Further observations and future research to streamline the heavy task of encoding on an EMR during patient-physician consultation are discussed.
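The mean SUS score of 70.76 reported above follows Brooke's standard scoring rule: ten Likert items on a 1-5 scale, odd items positively worded, even items negatively worded, rescaled to 0-100. A minimal sketch (the function name and the sample responses are ours, not from the paper):

```python
def sus_score(responses):
    """Compute a System Usability Scale (SUS) score from ten Likert
    responses (1-5). Odd-numbered items are positively worded and
    contribute (score - 1); even-numbered items are negatively
    worded and contribute (5 - score). The 0-40 raw sum is then
    scaled by 2.5 to the familiar 0-100 range."""
    if len(responses) != 10:
        raise ValueError("SUS requires exactly ten item responses")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# A respondent answering 4 to every positive item and 2 to every
# negative item scores 75.0.
print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # → 75.0
```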

Paper Nr: 29
Title:

Stuck in Limbo with Magical Solutions: The Testers’ Lived Experiences of Tools and Automation

Authors:

Isabel Evans, Chris Porter, Mark Micallef and Julian Harty

Abstract: The automation of people’s roles at work brings changes to their lives and work, bringing advantages of increased effectiveness and efficiency, yet also potentially life-changing effects, including redundancy. The software industry’s purpose is to automate people’s tasks and activities, and this also applies to jobs within the software industry, including teams who specialise in testing software. Test automation projects are not always successful, and our research initially set out to discover whether the challenges were usability-related, and whether HCI methods could help improve tools. We discovered a much richer story, which told of emotional stresses and life experiences within the software testing community. We discuss how automation, with all its benefits, affects motivation, causing disassociation of testers from their roles, and affecting their job-task mix. We show reasons why software test automation affects testers. Finally, we set out our position for our research about the lived experience of software testers using automation, which we are calling TX: The Testers’ Lived Experiences of Tools and Automation, and argue that the effect of automation and tooling on testers’ lived experience, and its effect on their motivation, is an area worthy of further study.

Paper Nr: 34
Title:

Using the Toulmin Model of Argumentation to Explore the Differences in Human and Automated Hiring Decisions

Authors:

Hebah Bubakr and Chris Baber

Abstract: Amazon developed an experimental hiring tool, using AI to review job applicants’ résumés, with the goal of automating the search for the best talent. However, the team found that their software was biased against women because the models were trained on résumés submitted to the company over the previous 10 years, most of which had been submitted by men, reflecting male dominance in the tech business. As a result, the models learned that male candidates were preferable and excluded résumés that could be inferred to come from female applicants. Gender bias was not the only issue. As well as rejecting plausible candidates, problems with the data led the models to recommend unqualified candidates for jobs. To understand the conflict in this and similar examples, we apply the Toulmin model of argumentation. By considering how arguments are constructed by a human and how a contrasting argument might be constructed by an AI, we can conduct pre-mortems of potential conflict in system operation.

Paper Nr: 36
Title:

Hints of Uncanny Utterances in a Disrupted Interaction Continuum

Authors:

Daniele Occhiuto and Franca Garzotto

Abstract: Our work explores the relation between users and conversational agents from the HCI perspective, in an interaction continuum linking humans and agents together. We highlight the need for a common representation space that we name the “shared playground”. In the shared playground, users and agents coordinate through the linguistic notions of competence and performance to reach an “agreement” in order to communicate successfully. Human-agent coordination is possible only if both parties share some preliminary knowledge. We argue that natural language understanding alone is not sufficient to achieve a satisfactory conversation. We elicit the need for level(s) of representation in order to engage the user by ascribing human traits to the agent. We clarify the rise of an Uncanny Valley in conversations and propose possible solutions to mitigate its effects. Finally, we present a set of features to quantitatively describe the eeriness in conversations, with the hope of tempering distant conversational agents and consolidating closer conversational companions.

Paper Nr: 2
Title:

Automatic Detection of Epileptic Spikes in Intracerebral EEG with Convolutional Kernel Density Estimation

Authors:

Ludovic Gardy, Emmanuel J. Barbeau and Christophe Hurter

Abstract: Analyzing the electroencephalographic (EEG) signal of epileptic patients as part of their diagnosis is a very long and tedious operation. The most common technique used by medical teams is to visualize the raw signal in order to find pathological events such as interictal epileptic spikes (IESs) or abnormal oscillations. More and more efforts are being made to facilitate the work of doctors by automating this process. Our goal was to analyze signal density fields to improve the visualization and automatic detection of pathological events. We transformed the EEG signal into images, to which we applied a convolution filter based on a Kernel Density Estimation (KDE). This method, which we propose to call Convolutional Kernel Density Estimation (CKDE), allowed the emergence of local density fields, leading to better visualization as well as automatic detection of IESs. Future work will be necessary to make this technique more efficient, but preliminary results are very encouraging and show high performance compared to a visual inspection of the data and some other automatic detection techniques.
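The pipeline the abstract describes, rasterising the signal into an image and then convolving it with a density kernel, can be sketched as follows. The kernel size, bandwidth, and toy signal are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def gaussian_kernel(size=7, sigma=2.0):
    """2D Gaussian kernel, normalised to sum to 1."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def density_field(image, size=7, sigma=2.0):
    """Convolve a binary trace image with a Gaussian kernel,
    yielding a smooth local density field (a KDE evaluated on the
    pixel grid). Zero padding keeps the output the same shape."""
    k = gaussian_kernel(size, sigma)
    h, w = image.shape
    pad = size // 2
    padded = np.pad(image, pad)
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            out[y, x] = np.sum(padded[y:y + size, x:x + size] * k)
    return out

# Rasterise a toy 1D signal into a 32x64 image and smooth it:
# each column lights the pixel at the signal's amplitude.
t = np.linspace(0, 4 * np.pi, 64)
rows = np.clip((16 + 12 * np.sin(t)).astype(int), 0, 31)
img = np.zeros((32, 64))
img[rows, np.arange(64)] = 1.0
field = density_field(img)
```

High-density regions of `field` then mark where the trace dwells, which is the kind of local structure the abstract reports as useful for both visualization and detection.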

Area 2 - Haptic and Multimodal Interaction

Full Papers
Paper Nr: 20
Title:

Investigating the Semantic Perceptual Space of Synthetic Textures on an Ultrasonic based Haptic Tablet

Authors:

Maxime Dariosecq, Patricia Plénacoste, Florent Berthaut, Anis Kaci and Frédéric Giraud

Abstract: This paper investigates the semantic perceptual space of synthetic tactile textures rendered via an ultrasonic-based haptic tablet, and the parameters influencing this space. Through a closed card sorting task, 30 participants explored 32 tactile-only textures and described each texture using adjectives. A factorial analysis of mixed data was conducted. Results suggest a two-dimensional space, with tactile textures belonging to a continuum from rough to smooth adjectives. Waveform and amplitude are shown to play an important role in whether a texture is perceived as smooth or rough, and spatial period is a possible modulator of different degrees of roughness or smoothness. Finally, we discuss how these findings can be used by designers of tactile feedback devices.

Short Papers
Paper Nr: 10
Title:

Dynamic Visualization System for Gaze and Dialogue Data

Authors:

Jonathan Kvist, Philip Ekholm, Preethi Vaidyanathan, Reynold Bailey and Cecilia O. Alm

Abstract: We report and review a visualization system capable of displaying gaze and speech data elicited from pairs of subjects interacting in a discussion. We elicit such conversation data in our first experiment, where two participants are given the task of reaching a consensus about questions involving images. We validate the system in a second experiment where the purpose is to see if a person could determine which question had elicited a certain visualization. The visualization system allows users to explore reasoning behavior and participation during multimodal dialogue interactions.

Paper Nr: 33
Title:

Virtual Reality Controller with Directed Haptic Feedback to Increase Immersion

Authors:

Tobias Hermann, Andreas Burkard and Stefan Radicke

Abstract: In this paper we propose the Directed Feedback Controller (DFC), a prototype controller that is able to generate haptic feedback from all directions. Its purpose is to increase immersion in Virtual Reality (VR) games. A user study has shown that the current prototype is perceived as very innovative. The participants enjoyed the experience and would tell their friends about it. In addition, most of the respondents see great potential in the idea behind the DFC. However, the DFC still has some minor issues. For example, due to the high weight of the DFC, the participants could not always determine the exact direction of the impact. Therefore, several ideas for weight reduction are proposed at the end of this paper.

Area 3 - Interaction Techniques and Devices

Full Papers
Paper Nr: 15
Title:

Controlling Image-Stylization Techniques using Eye Tracking

Authors:

Maximilian Söchting and Matthias Trapp

Abstract: With the spread of smartphones capable of taking high-resolution photos and the development of high-speed mobile data infrastructure, digital visual media is becoming one of the most important forms of modern communication. With this development, however, also comes a devaluation of images as a media form, with the focus becoming the frequency at which visual content is generated rather than the quality of the content. In this work, an interactive system using image-abstraction techniques and an eye tracking sensor is presented, which allows users to experience diverting and dynamic artworks that react to their eye movement. The underlying modular architecture enables a variety of different interaction techniques that share common design principles, making the interface as intuitive as possible. The result is a game-like interaction in which users aim for a reward, the artwork, while being held under constraints, e.g., not blinking. The conscious eye movements required by some interaction techniques hint at an interesting possible future extension of this work into the field of relaxation exercises and concentration training.

Paper Nr: 28
Title:

Modelling Movement Time for Haptic-enabled Virtual Assembly

Authors:

Samir Garbaya and Vincent Hugel

Abstract: Mechanical assembly consists of joining two or more components together. Manual assembly tasks include different activities to obtain functional products. In order to estimate the assembly cost and elaborate the assembly plan for a product, it is important to measure the duration of the assembly operations. The research reported in this paper investigates whether Fitts’ law, which has been widely adopted in numerous research areas including kinematics, human factors and human-computer interaction, can serve as a model to estimate movement time when assembling parts in a virtual assembly environment with haptic feedback. The results reported in this paper showed that Fitts’ law can be applied to model the movement time when assembling cylindrical parts. However, the analysis of the experimental data showed that changing the diameter of the moved part can affect the movement time. This is promising for the formulation of an inverted Fitts’ law for the assembly of cylindrical parts.
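Fitts' law models movement time as MT = a + b * ID, with index of difficulty ID = log2(2D/W), where D is the movement distance and W the target tolerance (for a peg-in-hole task, the clearance). A minimal fitting sketch; the distances, tolerances, and times below are made up purely to illustrate the regression, not the paper's measurements:

```python
import numpy as np

# Hypothetical trials: movement distance D, tolerance W, and the
# measured movement time MT for each condition.
D  = np.array([10.0, 20.0, 20.0, 40.0, 40.0, 80.0])   # cm
W  = np.array([ 2.0,  2.0,  1.0,  1.0,  0.5,  0.5])   # cm
MT = np.array([0.62, 0.78, 0.95, 1.10, 1.41, 1.58])   # s

# Fitts' index of difficulty (original formulation).
ID = np.log2(2 * D / W)

# Least-squares fit of MT = a + b * ID.
b, a = np.polyfit(ID, MT, 1)
print(f"MT = {a:.3f} + {b:.3f} * ID")
```

A positive slope `b` (seconds per bit) and a good linear fit are what would support the abstract's claim that Fitts' law applies to haptic-enabled virtual assembly.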

Short Papers
Paper Nr: 4
Title:

Hand Gesture Recognition based on Near-infrared Sensing Wristband

Authors:

Andualem T. Maereg, Yang Lou, Emanuele L. Secco and Raymond King

Abstract: Wrist-worn gesture sensing systems can be used as a seamless interface for AR/VR interactions and the control of various devices. In this paper, we present a low-cost gesture sensing system that utilizes near-infrared emitters (600-1100 nm) and photo-receivers encompassing the wrist to infer hand gestures. The proposed system consists of a wristband comprising infrared emitters and receivers, data acquisition hardware, data post-processing software, and gesture classification algorithms. During the data acquisition process, 24 near-infrared emitters are sequentially switched on around the wrist, and twelve photodiodes measure the light reflected, refracted, and scattered by the tissues inside the wrist. The acquired data corresponding to different gestures are labeled and input into a machine learning algorithm for gesture classification. To demonstrate the accuracy and speed of the proposed system, real-time gesture sensing user studies were conducted. From these studies, we obtained an average accuracy of 98.06% with a standard deviation of 1.82%. In addition, we found that the system can perform six to eight gestures per second in real time using a desktop computer with a Core i7-7800X CPU at 3.5 GHz and 32 GB RAM.
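The classification stage can be illustrated with a minimal nearest-centroid sketch over synthetic reflectance vectors (24 emitters x 12 photodiodes per sample). The paper's actual classifier, gesture set, and data format are not given in the abstract, so the labels and the synthetic data below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
N_CHANNELS = 24 * 12              # one reading per emitter/photodiode pair
GESTURES = ["fist", "open", "pinch"]   # hypothetical gesture labels

# Synthetic stand-in for labelled recordings: each gesture is a
# fixed reflectance pattern plus small sensor noise.
prototypes = {g: rng.normal(size=N_CHANNELS) for g in GESTURES}

def record(gesture, n=50):
    """Simulate n noisy samples of one gesture."""
    return prototypes[gesture] + 0.1 * rng.normal(size=(n, N_CHANNELS))

# "Train": compute the per-gesture mean of the labelled samples.
train = {g: record(g) for g in GESTURES}
centroids = {g: x.mean(axis=0) for g, x in train.items()}

def classify(sample):
    """Nearest-centroid gesture classifier."""
    return min(centroids, key=lambda g: np.linalg.norm(sample - centroids[g]))

print(classify(record("pinch", n=1)[0]))  # → pinch
```

Real reflectance data are far noisier and the gestures less separable, which is why the paper reports accuracy from user studies rather than a toy setting like this.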

Paper Nr: 6
Title:

Effect of User Roles on the Process of Collaborative 2D Level Design on Large, High-resolution Displays

Authors:

Anton Sigitov, André Hinkenjann and Oliver Staadt

Abstract: This paper presents groupware to study group behavior while conducting a creative task on large, high-resolution displays. Moreover, we present the results of a between-subjects study in which 12 groups of two participants each prototyped a 2D level on a 7 m x 2.5 m large, high-resolution display, using tablet PCs for interaction. Six groups underwent a condition in which group members had equal roles and interaction possibilities. The other six groups worked in a condition in which group members had different roles: level designer and 2D artist. The results revealed that in the different-roles condition, the participants worked significantly more tightly and created more assets. We also detected some shortcomings of that configuration. We discuss the gained insights regarding system configuration, groupware interfaces, and group behavior.

Paper Nr: 14
Title:

A Study on the Role of Feedback and Interface Modalities for Natural Interaction in Virtual Reality Environments

Authors:

Chiara Bassano, Manuela Chessa and Fabio Solari

Abstract: This paper investigates how people interact in immersive virtual reality environments during selection and manipulation tasks in different conditions. We take into consideration two task complexities, two interaction modalities (i.e. HTC Vive controller and Leap Motion) and three types of feedback provided to the user (i.e. none, audio and visual), with the aim of understanding their influence on performance and preferences. Although adding feedback to the touchless interface may help users overcome instability problems by providing information about the objects’ state, i.e. grabbed or released, it does not substantially improve performance. Moreover, both the controller-based and the touchless modalities have been shown to be effective for interaction. The analysis presented in this paper may play a role in the design of natural and ecological interfaces, especially in cases where non-invasive devices are needed.

Paper Nr: 35
Title:

Towards a Virtual Coach for Boccia: Developing a Virtual Augmented Interaction based on a Boccia Simulator

Authors:

Alexandre Calado, Simone Marcutti, Vinícius Silva, Gianni Vercelli, Paulo Novais and Filomena Soares

Abstract: Disability can be a factor that leads to social exclusion. Considering that involvement in society is paramount for a person with a disability, participation in sports can be a powerful tool for inclusion. Based on this premise, the authors propose an intelligent virtual coach for Boccia to encourage the practice of this sport among persons with disabilities, while promoting social inclusion and shortening the learning curve for individuals new to the sport by teaching them about game strategy. The envisioned virtual coach will rely on Artificial Intelligence models, thus requiring the creation of large datasets, namely for ball placement and throwing movement recommendations. To address these needs, this work focuses on the development of a Boccia simulator. With this simulator, it is possible to generate artificial gameplay images and allow the user to control an avatar with body tracking. Gesture recognition was implemented with a state machine, enabling the player to throw the ball, with customizable physics, by performing one of two different throwing movements. This functionality allows the recording of data describing the body movement associated with the placement of the ball in a certain position within the virtual court, which is essential for the proposed recommendation system.

Paper Nr: 47
Title:

Eye Gaze Tracking for Detecting Non-verbal Communication in Meeting Environments

Authors:

Naina Dhingra, Christian Hirt, Manuel Angst and Andreas Kunz

Abstract: Non-verbal communication in a team meeting is important for understanding the essence of the conversation. Among other gestures, eye gaze shows the focus of interest on a common workspace and can also be used for interpersonal synchronisation. If this non-verbal information is missing or cannot be perceived by blind and visually impaired people (BVIP), they lack important information needed to get fully immersed in the meeting and may feel alienated in the course of the discussion. Thus, this paper proposes an automatic system to track where a sighted person is gazing. We use the open-source software OpenFace and extend it into an eye tracker by using a support vector regressor, making it work similarly to expensive commercial eye trackers. We calibrate OpenFace using a desktop screen with a 2×3 box matrix and conduct a user study with 28 users on a large screen (161.7 cm x 99.8 cm x 11.5 cm) with a 1×5 box matrix. In this user study, we compare the results of our algorithm for OpenFace to an SMI RED 250 eye tracker. The results showed that our system achieved an overall relative accuracy of 58.54%.
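The calibration step described above maps raw gaze estimates (e.g. OpenFace gaze angles) to screen positions using a grid of known targets. The paper fits a support vector regressor; for brevity the sketch below substitutes an ordinary least-squares affine fit, and all data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical ground-truth affine map from gaze angles (rad) to
# screen pixels, used only to generate calibration data.
true_A = np.array([[80.0, 5.0], [3.0, 60.0]])
true_b = np.array([960.0, 540.0])

# 30 calibration samples: gaze angles and the (noisy) pixel
# positions of the targets the user fixated.
angles = rng.uniform(-0.5, 0.5, size=(30, 2))
screen = angles @ true_A.T + true_b + rng.normal(0, 2.0, size=(30, 2))

# Fit screen = [angles, 1] @ coef by least squares.
X = np.hstack([angles, np.ones((30, 1))])
coef, *_ = np.linalg.lstsq(X, screen, rcond=None)

def predict(angle_pair):
    """Map a gaze-angle pair to an estimated screen position."""
    return np.append(angle_pair, 1.0) @ coef

print(predict([0.0, 0.0]))  # ≈ the screen centre (960, 540)
```

An SVR (e.g. with an RBF kernel) can additionally capture the non-linear distortions that a plain affine map misses, which is presumably why the paper chose it.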

Paper Nr: 13
Title:

Assessing the Usability of Different Virtual Reality Systems for Firefighter Training

Authors:

Fabrizio Corelli, Edoardo Battegazzorre, Francesco Strada, Andrea Bottino and Gian Paolo Cimellaro

Abstract: The use of Virtual Reality (VR) based learning environments for training firefighters is becoming more and more common. The key advantage of these approaches is that they allow the development of experiential learning environments, where trainees can be immersed in and interact with complex emergency scenarios, including those whose training cannot rely on real-world systems and environments due to cost or security concerns. Despite that, current VR training systems are still affected by a number of weaknesses, mainly related to usability and to the (limited) sense of presence conveyed by the virtual environment (VE), which can negatively affect the expected learning outcomes. To gain further insight into this problem, this work assesses the usability of a firefighter training application deployed on three VR systems and exploiting serious games in its educational approach. The VR systems under analysis provide different levels of immersion and offer different approaches to managing interaction and locomotion inside the VE. Experimental results, obtained through a user study, show differences among the three systems. In particular, the devices and metaphors used to manage locomotion in VR seem to be the most critical parameters with respect to usability and learners' achievements.

Paper Nr: 17
Title:

A Japanese Bimanual Flick Keyboard for Tablets That Improves Display Space Efficiency

Authors:

Yuya Nakamura and Hiroshi Hosobe

Abstract: Tablets, as well as smartphones and personal computers, are popular as Internet clients. Tablet users often use QWERTY software keyboards to enter text. Such a software keyboard usually occupies large display space and requires its user to move their fingers over large distances. This paper proposes a Japanese bimanual flick keyboard for tablets that improves display space efficiency by using 10 character keys. The paper presents an implementation of the keyboard for an Android tablet, and describes an experiment on its performance compared with a QWERTY software keyboard. Since the results of a preliminary experiment indicated a problem with the key layout, the main experiment further introduced an L-shaped layout and a Γ-shaped layout for comparison. The main experiment examined the keyboard’s input speed, accuracy, and subjective evaluation, and the results showed trade-offs among these layouts.

Paper Nr: 22
Title:

Experiences in Designing HCI Studies for Real-time Interaction across Distributed Crowds and Co-located Participants

Authors:

Franco Curmi and Conrad Attard

Abstract: This paper is a post-hoc reflective case study from the point of view of the research investigators. The authors share the experience of designing and deploying four studies that involve real-time interaction between distributed crowds and co-located participants. We first recount the challenges that these uncommon, yet increasingly necessary, HCI research contexts afford. We then present the learning outcomes from 1) the ‘designing’, 2) the setting up, 3) the real-time dynamics and 4) the interaction between distributed and co-located participants. From this we deduce the impact for the four stakeholders in these contexts 1) the distributed crowd, 2) the co-located participants, 3) the system owners and 4) the researchers. This meta-research approach is motivated by our struggle to find more ‘Researcher-experience’ cases during the early stages of the studies. This contribution in experience sharing is intended to help HCI researchers who are planning studies in this field.

Paper Nr: 25
Title:

Tangible Interactions with Physicalizations of Personal Experience Data

Authors:

Zann B. Anderson and Michael D. Jones

Abstract: Individuals record large amounts of data about their daily lives, from locations to steps to heart rate. Services allow individuals to review and share this data. We explore physical representations—physicalizations—of data recorded by individuals during personally meaningful trail running activities. Physical interactions may change the way in which individuals recall and share their experiences. We present the results of two interview studies involving physicalizations of trail running data for advanced amateur runners. Our results appear to indicate that physicalization of personal experience data supports reflection and sharing, among other themes, and that physical interaction with the object plays a central role in driving these responses.

Area 4 - Theories, Models and User Evaluation

Full Papers
Paper Nr: 19
Title:

User Time Spent between Persuasiveness and Usability of Social Networking Mobile Applications: Patterns of Influence

Authors:

Mohammed Bedjaoui, Nadia Elouali and Sidi M. Benslimane

Abstract: Using social media is one of the most common activities for mobile users. Moreover, it is a time-consuming activity that can lead to addiction. Some gaps in HCI (Human-Computer Interaction) ergonomics theory have contributed to this addiction. These gaps lie in an overexploitation of the usability and/or persuasion criteria that designers and/or developers use according to their needs when applying influence strategies to affect users’ engagement. Although these strategies are widely applied in online social networks, they are not well identified and their application levels are still lacking. This paper seeks to establish and validate these influence strategies. We propose five patterns of influence in online social networks that have a significant impact on Users’ Time Spent (UTS), grouping the different usability criteria and persuasion strategies. We then conducted a classification study of those criteria and strategies, using the Hybrid Card Sort method, carried out by fifteen eligible experts. The experts were asked to group those criteria and strategies into a set of patterns based on our predetermined patterns (with the option to create their own). The analysis of the results validates our five proposed patterns, paving the way to outline the borderline of their application thereafter.

Short Papers
Paper Nr: 46
Title:

Scene Understanding and 3D Imagination: A Comparison between Machine Learning and Human Cognition

Authors:

Michael Schoosleitner and Torsten Ullrich

Abstract: Spatial perception and three-dimensional imagination are important characteristics for many construction tasks in civil engineering. In order to support people in these tasks, worldwide research is being carried out on assistance systems based on machine learning and augmented reality. In this paper, we examine the machine learning component and compare it to human performance. The test scenario is to recognize a partly-assembled model, identify its current status, i.e. the current instruction step, and return the next step. Thus, we created a database of 2D images containing the complete set of instruction steps of the corresponding 3D model. Afterwards, we trained the deep neural network RotationNet with these images. Usually, machine learning approaches are compared to each other; our contribution instead evaluates the machine learning results against human performance, tested in a survey: in a clean-room setting, the survey and RotationNet results are comparable and neither is significantly better. The real-world results show that the machine learning approach needs further improvement.

Paper Nr: 16
Title:

How Auditory Information Presentation Timings Affect Memory When Watching Omnidirectional Movie with Audio Guide

Authors:

Rinki Hirabayashi, Motoki Shino, Katsuko N. T. and Muneo Kitajima

Abstract: This study focuses on audio guide as a support for smooth information acquisition for visual stimuli. The interval between provision timing of visual guidance part, which explains explicit features of the object, and information addition part, which explains implicit features of the object, is set as a parameter and its effect on memory is measured as an indicator for estimating the degree of smoothness in information acquisition. Eye tracking experiments were conducted in a dome theater with the omnidirectional movie using three timing interval conditions: shorter than two seconds (Short Interval), longer than three seconds and shorter than five seconds (Medium Interval), and longer than six seconds (Long Interval). The results showed that the memory scores for the movie presented in the Medium Interval condition was the largest. This paper discusses how the presentation in the Medium Interval condition allowed effective integration of visual information and the auditory information provided by audio guide: the visual guidance part of audio guide helped the viewer to find the objects at the best timing before the presentation of information addition part. This would have enabled the participants to elaborate the visual scene with the relevant long-term memory for integration with the auditory information.