Last updated on November 16, 2024. This conference program is tentative and subject to change.
Technical Program for Sunday, November 17, 2024
SuAT1
151 Crellin
(Hybrid) Late Breaking Abstract Session
Lecture session
14:45-14:55, Paper SuAT1.1
EEG Signal Analysis of Attention Levels in VR and AR Learning Tasks
Gheorghică Istrate, David | Informatics Association for the Future
Balica, Darius-Cristian | Informatics Association for the Future
Bucur, Andrei | Informatics Association for the Future
Beu, Mihai-Robert | Informatics Association for the Future
Durduman-Burtescu, Tudor | Informatics Association for the Future
Ripiciuc, Amalia-Ioana | Informatics Association for the Future
Keywords: Augmented Reality (AR) Applications, Virtual Reality (VR) Integration, Cognitive Modeling
Abstract: The rapid advancement of VR and AR technologies, together with the growing usefulness of immersive technologies for productivity, has opened the door to a new type of learning in which students can study in a three-dimensional environment. While both technologies have demonstrated potential in education, their impact on cognitive processes, especially concentration, requires further investigation. This research investigates the difference in concentration during learning with Virtual Reality (VR) and Augmented Reality (AR). We quantified focus during learning using electroencephalographic (EEG) signals gathered from participants while they completed several cognitive tasks, to determine how these technologies influence attention levels. By measuring a concentration index for preadolescents and adolescents, this study aims to clarify which educational setup (virtual or augmented reality) better sustains cognitive attention.
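The abstract does not specify how the concentration index is computed from EEG. One common proxy in the attention-monitoring literature is the beta/(alpha+theta) band-power ratio; the sketch below illustrates that idea only, with a hypothetical sampling rate and synthetic data, and may differ from the authors' actual metric.

```python
# Hypothetical sketch of a band-power "concentration index" often used as
# an EEG attention proxy; the paper's actual metric may differ.
import numpy as np
from scipy.signal import welch

FS = 256  # assumed sampling rate in Hz

def band_power(signal, fs, lo, hi):
    """Average power of `signal` in the [lo, hi] Hz band via Welch's method."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    mask = (freqs >= lo) & (freqs <= hi)
    return np.trapz(psd[mask], freqs[mask])

def concentration_index(eeg_window, fs=FS):
    """beta / (alpha + theta) engagement index for one EEG channel window."""
    theta = band_power(eeg_window, fs, 4, 8)
    alpha = band_power(eeg_window, fs, 8, 13)
    beta = band_power(eeg_window, fs, 13, 30)
    return beta / (alpha + theta)

# Example: compare the index across conditions (e.g., a VR vs. an AR block);
# random noise stands in for real recordings here.
vr_window = np.random.randn(FS * 10)   # placeholder 10 s recording
ar_window = np.random.randn(FS * 10)
print(concentration_index(vr_window), concentration_index(ar_window))
```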
14:55-15:05, Paper SuAT1.2
NeuroReality™: A Data Distribution Service-Based Inter-Process Communication Middleware
Asjad, Syed Mohammad | Case Western Reserve University
Harber, Evan | University of California, Los Angeles
Santos, Veronica J. | University of California, Los Angeles
Tyler, Dustin | Case Western Reserve University
Keywords: Low Latency Networking, Real-Time Communication, Scalability Solutions for Telepresence Systems
Abstract: Distributed real-time systems used for telepresence and teleoperation require low latency, high reliability, and high throughput for communication between processes. In this paper we introduce NeuroReality™, a Data Distribution Service (DDS) publish-subscribe middleware that enables reliable, robust, and fast communication across such processes. The middleware is engineered to integrate natively with other DDS-based frameworks such as ROS 2, enabling easy scaling of research. We demonstrate the potential of our work by comparing its performance against server-client methods of communication, and showcase its capabilities for both local and long-distance communication.
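NeuroReality's API is not public from this abstract alone; as a point of reference, the DDS publish-subscribe pattern it builds on looks like the following minimal ROS 2 (rclpy) node, since ROS 2 also runs on DDS. The topic names, message type, and rate here are illustrative assumptions, not the paper's interface.

```python
# Minimal DDS-style publish-subscribe sketch using ROS 2's rclpy
# (ROS 2 itself runs on a DDS implementation). Names are illustrative.
import rclpy
from rclpy.node import Node
from std_msgs.msg import String

class TelemetryBridge(Node):
    def __init__(self):
        super().__init__('telemetry_bridge')
        # Publisher and subscriber share topics; DDS discovery matches them.
        self.pub = self.create_publisher(String, 'robot_state', 10)
        self.sub = self.create_subscription(String, 'operator_cmd',
                                            self.on_cmd, 10)
        self.timer = self.create_timer(0.01, self.tick)  # 100 Hz loop

    def tick(self):
        msg = String()
        msg.data = 'state update'
        self.pub.publish(msg)

    def on_cmd(self, msg):
        self.get_logger().info(f'received: {msg.data}')

def main():
    rclpy.init()
    rclpy.spin(TelemetryBridge())

if __name__ == '__main__':
    main()
```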
15:05-15:15, Paper SuAT1.3 | |
Late Breaking Work: An Investigation into Autonomy Levels across Telerobotic Platforms with and without Manipulators |
|
Vidyadharan, Akash | Saint Louis University |
Tennison, Jennifer | Saint Louis University |
Gorlewicz, Jenna L. | Saint Louis University |
Keywords: Human-in-the-Loop Control, User Experience (UX) in HCI
Abstract: Telerobots, which extend one's presence to remote settings, have evolved from "video conferences on wheels" to platforms capable of rich social interactions. Yet research continues to show that enhancing social connection is challenging in telerobot-mediated communication. This work investigates if and how autonomy levels and major features (such as having manipulators) impact user performance and experience in telerobot-mediated social behaviors. Three control schemes - manual, semi-autonomous, and autonomous - were developed for two telerobot platforms: Temi and Quori. Unlike Temi, where interaction is primarily limited to video and audio streaming, Quori has dual manipulators onboard. Thus, a computer vision-based virtual joystick was also developed to support the video and audio streaming capabilities on Quori and enable manipulator control. Through a within-subjects human user study (N=30), this paper investigates the impact of the three control modes and the two robot form factors on users' performance and experience in navigation and social interaction tasks. Initial results suggest that higher autonomy levels improve user performance and experience, regardless of robot platform. Further, initial findings demonstrate that usability is significantly impacted when additional complexity, such as controlling manipulators, is required of pilot users. This work is ongoing and supports future investigations into balancing autonomy, usability, and enhanced interaction capability for promoting social presence with telerobots.
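The computer vision-based virtual joystick is not detailed in the abstract; one simple realization is to track a marker in the camera feed and map its offset from the image center to stick axes, as in this hypothetical sketch (marker color, normalization, and camera source are assumptions).

```python
# Hypothetical vision-based "virtual joystick": track a colored marker
# (e.g., on the pilot's hand) and map its offset from the frame center
# to joystick-like axes in [-1, 1]. Not the authors' implementation.
import cv2
import numpy as np

def virtual_joystick(frame, lo=(40, 60, 60), hi=(80, 255, 255)):
    """Return (x, y) stick axes from the tracked marker's position."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lo), np.array(hi))  # green-ish marker
    m = cv2.moments(mask)
    if m['m00'] < 1e-3:
        return 0.0, 0.0  # marker not visible: neutral stick
    cx, cy = m['m10'] / m['m00'], m['m01'] / m['m00']
    h, w = mask.shape
    # Offset from image center, normalized to [-1, 1].
    return (2 * cx / w - 1.0), (2 * cy / h - 1.0)

cap = cv2.VideoCapture(0)        # webcam watching the pilot's hand
ok, frame = cap.read()
if ok:
    print('stick axes:', virtual_joystick(frame))
cap.release()
```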
15:15-15:25, Paper SuAT1.4
A Multi-Person Real-Virtual Fusion Based Telepresence System
Zhou, Xilong | Lenovo (Beijing) Co., Ltd
Shi, Shizhou | Lenovo (Beijing) Co., Ltd
Zhang, Liuxin | Lenovo (Beijing) Co., Ltd
Jia, Chuanmin | Peking University
Ma, Siwei | Peking University
Wang, Qianying | Lenovo (Beijing) Co., Ltd
Keywords: 3D Tele-immersion, Real-Time Communication, Virtual Meeting Spaces
Abstract: In this paper, we introduce a novel immersive video conferencing system capable of real-time multi-person viewing and smooth, high-fidelity, life-sized auto-stereoscopic display. Compared to the immersive conferencing system we previously proposed, the system established in this paper supports multi-user viewing without the need for additional eye-tracking devices. We employ an adaptive, low-complexity view synthesis method based on an accelerated Real-time Intermediate Flow Estimation (RIFE) model for multi-view generation, followed by real-time light field encoding to achieve realistic 3D rendering. To meet the needs of conferencing scenarios, our system also integrates real-virtual fusion functionality.
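RIFE is a learned flow-interpolation network; as a rough classical stand-in for the flow-based intermediate-view idea, the sketch below warps one camera view halfway toward another using dense optical flow in OpenCV. The file names are placeholders, and this only illustrates the principle, not the paper's accelerated pipeline.

```python
# Rough classical stand-in for flow-based intermediate-view synthesis.
# The paper uses an accelerated RIFE network; this only illustrates the
# idea of warping along estimated flow toward a middle viewpoint.
import cv2
import numpy as np

def synthesize_middle_view(left, right):
    """Warp `left` halfway toward `right` using dense optical flow."""
    g1 = cv2.cvtColor(left, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(right, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(g1, g2, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = g1.shape
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    # Approximate backward warp: a middle-view pixel originates half a
    # flow step back in the left view.
    map_x = (xs - 0.5 * flow[..., 0]).astype(np.float32)
    map_y = (ys - 0.5 * flow[..., 1]).astype(np.float32)
    return cv2.remap(left, map_x, map_y, cv2.INTER_LINEAR)

left = cv2.imread('view_left.png')    # placeholder input views
right = cv2.imread('view_right.png')
middle = synthesize_middle_view(left, right)
```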
15:15-15:25, Paper SuAT1.5
TeleAware Robot: Designing Awareness-Augmented Telepresence Robot for Remote Collaborative Locomotion
Gong, Jiangtao | Tsinghua University
Li, Ruyi | Tsinghua University
Wang, Qianying | Lenovo (Beijing) Co., Ltd
Keywords: Telepresence Robots, User Experience (UX) in HCI, Augmented Reality Interfaces
Abstract: Telepresence robots enable remote navigation and shared experiences, but lack sufficient environmental and partner awareness. We introduce an awareness framework for collaborative locomotion between onsite and remote users. Based on an observational study, we developed the TeleAware robot with awareness-enhancing techniques. A controlled experiment showed that the TeleAware robot reduced workload and improved social proximity, mutual awareness, and social presence compared with standard robots. We discuss the impact of mobility, user roles, and future design insights for awareness-enhancing telepresence systems that facilitate collaborative locomotion.
SuBT1
151 Crellin
Online Session 2: Autonomy and Intelligent Systems
Lecture session
15:30-15:40, Paper SuBT1.1
Toward Human-Robot Teaming for Robot Navigation Using Shared Control, Digital Twin, and Self-Supervised Traversability Prediction
La, Trung Kien | Frankfurt University of Applied Sciences
Guiffo Kaigom, Eric | Frankfurt University of Applied Sciences
Keywords: Autonomous Navigation, Teleoperation Systems, User Interface (UI) Design
Abstract: Collision prediction and avoidance are essential for the autonomous navigation of mobile robots. The prediction of a traversability map is one way to achieve this goal. However, this approach might fail if the prediction model is exposed to novel semantic classes unseen during self-supervised training, or if the environment is subject to the highly dynamic motions of living and artificial entities, including pedestrians, animals, and vehicles, that the robot can hardly handle. In this paper, we embrace this challenge by describing the development of a flexible human-robot teaming approach that leverages shared control of the robot to accommodate critical situations and allow the human operator to be otherwise engaged. The operator can remotely perceive the surroundings and actively guide the robot, as well as participatively share control with, or leave full control to, the robot as it autonomously moves toward a desired common goal. A situation-aware control signal balances inputs from the motion planner of the autonomous robot and cognitive inputs issued by the remote operator using affordable interfaces. The human-in-the-loop control is realized through bidirectional wireless communication between the physical robot and its digital twin. The human-robot teaming enhances the likelihood of safe and successful navigation under dynamic obstacles and enables navigation without pre-defined goals. The conceptual architecture is introduced and preliminary results are shared.
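The abstract does not give the balancing law itself. A common realization of such situation-aware shared control is a convex combination of the planner's and the operator's velocity commands, sketched below; the clearance-based weighting, thresholds, and names are illustrative assumptions rather than the authors' design.

```python
# Hypothetical sketch of situation-aware shared control: blend the
# autonomous planner's command with the operator's command by a weight
# that shifts authority to the operator as obstacle clearance shrinks.
import numpy as np

def blend_commands(u_auto, u_human, clearance, d_safe=0.5, d_free=2.0):
    """Convex combination u = a*u_auto + (1-a)*u_human.

    clearance: distance (m) to the nearest detected obstacle.
    a -> 1 when the way is clear (robot keeps autonomy), a -> 0 in
    critical situations (operator takes over).
    """
    a = np.clip((clearance - d_safe) / (d_free - d_safe), 0.0, 1.0)
    return a * np.asarray(u_auto) + (1.0 - a) * np.asarray(u_human)

# Example: linear/angular velocity commands (v, w) near an obstacle.
print(blend_commands(u_auto=[0.8, 0.0], u_human=[0.2, 0.4], clearance=0.9))
```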
15:50-16:00, Paper SuBT1.3
Remote Telepresence Over Large Distances Via Robot Avatars: Case Studies
Elobaid, Mohamed | Italian Institute of Technology
Dafarra, Stefano | Italian Institute of Technology
Romualdi, Giulio | Italian Institute of Technology
Ranjbari, Ehsan | Italian Institute of Technology
Chaki, Tomohiro | Honda R&D Co., Ltd
Kawakami, Tomohiro | Honda R&D Co., Ltd
Yoshiike, Takahide | Honda R&D Co., Ltd
Pucci, Daniele | Italian Institute of Technology
Keywords: Teleoperation Systems, Long-Distance Robotic Control, Telepresence Robots
Abstract: This paper discusses the considerations and adjustments that allow a recently proposed avatar system architecture to be used with different robotic avatar morphologies (both wheeled and legged robots, with various types of hands and kinematic structures) to enable remote (intercontinental) telepresence under communication bandwidth restrictions. The reported case studies involve robots using both position and torque control modes, independently of their software middleware.
16:00-16:10, Paper SuBT1.4
Integrating Flexible Wearable Sensors and Embodied Intelligence: Reach the Unreachable
Xing, Zhidan | Xidian University
Wang, Fei-Long | Paris-Saclay University
Qi, Wen | South China University of Technology
Su, Hang | Politecnico di Milano
Keywords: Teleoperation Systems, Sensor Fusion for Robotics, Multi-Modal Interaction
Abstract: In the realm of robotic teleoperation, the combination of flexible wearable sensors with embodied intelligence has recently developed into a novel strategy. The versatility and softness of flexible sensors allow robots to precisely record complex human motion data. By incorporating embodied intelligence, robots can actively interact with their environment, taking over decision-making and adaptation from the human. This study investigates how the synergy of embodied intelligence and flexible sensors improves the efficiency of teleoperation systems. We discuss the special qualities of flexible sensors and their applications in improving robotic vision, as well as the role embodied intelligence plays in intelligent control and imitation learning. The work also tackles technical issues such as non-linear data mapping, sensitivity fluctuations, and energy efficiency, offering ideas for possible fixes and potential fields of research. Through this technological integration, flexible sensors and embodied intelligence have the power to transform teleoperation and allow robots to accomplish previously unachievable tasks.
16:10-16:20, Paper SuBT1.5
First Steps of Designing Adaptable User Interfaces to Mediate Human-Robot Interaction
Amiche, Fadma | LAAS-CNRS
Juillot, Marie | LAAS-CNRS
Vigné, Adrien | LAAS-CNRS
Brock, Anke M. | Fédération ENAC ISAE-SUPAERO ONERA
Clodic, Aurélie | LAAS-CNRS
Keywords: Teleoperation Systems, User Experience (UX) in HCI, User Interface (UI) Design
Abstract: There is a wide diversity of platforms for teleoperating robots. Every robot, use case, and type of user brings a unique set of expectations. We propose that human-robot interaction should consider the opportunity to use an external device (such as a smartphone) not only to teleoperate but also to mediate the interaction with a robot, whether in telepresence or in co-presence. In this paper, we present first steps and ideas toward the development of such a mediating device. As first end-users, it involves the members of the robotics department of our laboratory and considers all robotic platforms hosted there (assistance robots, terrestrial robots, humanoid robots, quadrupeds, etc.). We describe our user-centred design methodology, detailing the needs analysis and brainstorming sessions conducted. Following this, we outline the design and prototyping phases, showcasing the iterative development of our interface. Finally, we discuss the results from our evaluations and the implications for future work in creating flexible and inclusive interfaces for diverse user groups.
16:20-16:30, Paper SuBT1.6
Text-To-Metaverse: Integrating Advanced Text-To-PointCloud Techniques for Enhanced 3D Scene Generation
Elhagry, Ahmed | MBZ University of Artificial Intelligence
El Saddik, Abdulmotaleb | University of Ottawa
Keywords: Digital Twins and Simulation, 3D Tele-immersion, Machine Vision and Perception
Abstract: This paper presents a novel approach to generating metaverse environments directly from textual descriptions by integrating advanced text-to-pointcloud techniques into the text-to-metaverse pipeline. The proposed system replaces conventional text understanding and design script components with a more efficient and accurate pipeline, enhancing the overall generation process. Our methodology leverages natural language processing for entity and relation extraction, which are then converted into structured scene descriptions. These descriptions guide a generative shape engine to produce 3D objects and scenes, which are subsequently rendered into immersive metaverse environments. Experimental evaluations demonstrate significant improvements in both efficiency and quality of the generated environments compared to the baseline model. The integration of text-to-pointcloud techniques ensures a higher fidelity in object representation and scene coherence, addressing limitations in existing metaverse generation methods. This work paves the way for more interactive and dynamic virtual environments, offering substantial advancements for applications in gaming, virtual reality, and remote collaboration.
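The entity-and-relation front end described here could take many forms; a minimal dependency-parse sketch with spaCy shows the general shape of turning text into a structured scene description. The schema (objects plus subject-preposition-object triples) is an assumption for illustration, not the paper's actual pipeline.

```python
# Illustrative entity/relation extraction front end for a text-to-scene
# pipeline, using spaCy (requires the en_core_web_sm model). The scene
# schema below is a hypothetical stand-in for the paper's design.
import spacy

nlp = spacy.load("en_core_web_sm")

def text_to_scene_description(text):
    """Extract objects and (head, relation, object) triples as a crude scene spec."""
    doc = nlp(text)
    triples = []
    for token in doc:
        # Prepositional spatial relations like "lamp on the table".
        if token.dep_ == "prep" and token.head.pos_ in ("NOUN", "PROPN"):
            for obj in token.children:
                if obj.dep_ == "pobj":
                    triples.append((token.head.text, token.text, obj.text))
    return {"objects": [c.text for c in doc.noun_chunks],
            "relations": triples}

print(text_to_scene_description("A red lamp on the wooden table near a sofa."))
```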
16:20-16:30, Paper SuBT1.7
Mixed Reality Based Robot Teleoperation with Haptic Guidance
Raj, Subin | Indian Institute of Science, Bengaluru
Sinha, Yashaswi | Indian Institute of Science, Bengaluru
Biswas, Pradipta | Indian Institute of Science, Bengaluru
Keywords: Augmented Reality (AR) Applications, Multi-Modal Interaction, Haptic Interfaces for Telerobotics
Abstract: The current generation of Industry 4.0 emphasizes human-robot cooperation to perform complex tasks. When humans and robots work together, task completion depends on the effectiveness of communication, especially when they cooperate remotely. In this paper, we propose a Mixed Reality (MR) based information transfer method for Human-Robot Interaction (HRI) to assist users in remotely performing tasks. Effective path information is conveyed to the user through haptic feedback generated using the Artificial Potential Field (APF) method, aiding users in controlling robots without collisions. We develop a multimodal user interface with an MR environment to enhance communication between the remote and local sides. This seamless communication is facilitated through multiple modalities such as haptics, touch, speech, visuals, and text. We analyze the system with user studies. The results reveal that participants were able to complete the task successfully using the system and preferred the proposed method over hand-based robot teleoperation. In addition, we found that users favored audio-, hologram-, and haptic-based information exchange over text-based information exchange. Analysis of the different types of haptic feedback revealed that users preferred dynamic haptic feedback.
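For context, the Artificial Potential Field method named in the abstract is the classical attractive-plus-repulsive force field; a minimal sketch follows. The gains, influence radius, and 2D setup are illustrative assumptions, not the authors' parameters; in the paper the resulting force drives haptic guidance rather than the robot directly.

```python
# Minimal Artificial Potential Field (APF) sketch of the kind used to
# generate haptic guidance: attractive pull toward the goal plus
# repulsive push away from nearby obstacles. Values are illustrative.
import numpy as np

def apf_force(pos, goal, obstacles, k_att=1.0, k_rep=0.5, d0=0.4):
    pos, goal = np.asarray(pos, float), np.asarray(goal, float)
    force = k_att * (goal - pos)  # attractive term
    for obs in obstacles:
        diff = pos - np.asarray(obs, float)
        d = np.linalg.norm(diff)
        if 1e-9 < d < d0:  # repulsion only inside influence radius d0
            force += k_rep * (1.0 / d - 1.0 / d0) * diff / d**3
    return force  # rendered to the operator as a guidance force

# Example: guidance force near one obstacle on the way to the goal.
print(apf_force(pos=[0.0, 0.0], goal=[1.0, 0.0], obstacles=[[0.5, 0.1]]))
```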
SuCT2
Linus Pauling Lecture Hall
Session 6: Mixed Reality
Lecture session
Chair: Sahin, Ferat | Rochester Institute of Technology
14:45-15:00, Paper SuCT2.1
Improving Immersive Telepresence Locomotion by Using a Virtual Environment As an Interface to a Physical Environment (VEIPE)
Laukka, Eetu | University of Oulu
Center, Evan G. | University of Oulu
Sakcak, Basak | University of Oulu
LaValle, Steven M. | University of Illinois at Urbana-Champaign
Ojala, Timo | University of Oulu
Pouke, Matti | University of Oulu
Keywords: Virtual Reality (VR) Integration, 3D Tele-immersion, Telepresence Robots
Abstract: Immersive mobile robotic telepresence enables humans to feel present in a remote environment. These systems often use 360-degree panoramic cameras to stream video over a network to a head-mounted display (HMD), where the video feed is rendered to the user. This enables the user to freely look around in the remote environment. A drawback of using highly immersive technologies instead of a more traditional computer screen is that users often experience virtual reality (VR) sickness, so they are sometimes only able to use these systems for brief durations. Moreover, the increased bandwidth requirements of panoramic cameras and the time necessary to process the 360-degree panoramic view contribute to an often unacceptable amount of latency between the user's actions and the observed reaction of the mobile robot (the perception-actuation loop). We present a novel method to mitigate these problems in immersive mobile robotic telepresence systems, which we call virtual environment as an interface to a physical environment (VEIPE). In VEIPE, a digital twin of the remote environment is used to interface with the telepresence robot in the real remote environment. We present a study comparing teleportation through VEIPE as a locomotion method against a more traditional joystick-based continuous locomotion method for controlling a telepresence robot. Our results indicate that VEIPE induces less VR sickness than the joystick condition, as measured by the simulator sickness questionnaire (SSQ), and that users perform about 31 percent better in a simple navigation task. Furthermore, users subjectively prefer teleportation through VEIPE over the joystick. We also present exploratory data on cognitive load measured with the NASA task-load-index (NASA-TLX) questionnaire, presence measured with the Slater-Usoh-Steed (SUS) questionnaire, and accumulated yaw in the navigation tasks.
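One way to read the VEIPE mechanism: a teleport target chosen in the digital twin is mapped into the physical frame and sent to the robot as a navigation goal, so the user's viewpoint can jump immediately while the robot catches up. The 2D pose mapping below is a hypothetical sketch of that handoff; frame conventions and names are assumptions, not the paper's implementation.

```python
# Hypothetical sketch of the VEIPE handoff: map a teleport target picked
# in the digital twin to a navigation goal for the physical robot, so the
# user's view jumps instantly while the robot drives there at its own pace.
import numpy as np

def twin_to_world(pose_twin, T_world_from_twin):
    """Map a 2D pose (x, y, yaw) from twin coordinates to world coordinates."""
    x, y, yaw = pose_twin
    R = T_world_from_twin[:2, :2]       # rotation block of a 3x3 homogeneous transform
    p = R @ np.array([x, y]) + T_world_from_twin[:2, 2]
    d_yaw = np.arctan2(R[1, 0], R[0, 0])
    return (*p, yaw + d_yaw)

# Teleport: user jumps in VR immediately; robot receives the mapped goal.
T = np.eye(3)                           # twin and world aligned in this example
goal = twin_to_world((2.0, 1.0, np.pi / 2), T)
print('send nav goal to robot:', goal)
```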
15:15-15:30, Paper SuCT2.3
Using Augmented Reality to Enhance Worker Situational Awareness in Human Robot Interaction
Sahin, Melis | Case Western Reserve University
Subramanian, Karthik | Rochester Institute of Technology
Sahin, Ferat | Rochester Institute of Technology
Keywords: Augmented Reality (AR) Applications, Virtual Reality (VR) Integration, Robotics and Automation
Abstract: This study investigates the potential of augmented reality (AR) to enhance users' ability to predict the position of a robotic tool when it enters their blind spot. Augmented reality is increasingly utilized in industrial settings to improve situational awareness and user interfaces. In this experiment, participants performed tasks involving prediction of the tool's position using both conventional methods and AR displays. The Situation Awareness Global Assessment Technique (SAGAT) was employed to evaluate the effectiveness of the AR display as a user interface and its impact on users' awareness. Results reveal improvements in several metrics when using AR, including a reduction in average perception error and an increase in subjective confidence. Additionally, the AR display led to a higher percentage of correct responses in predicting the direction in which the robot's tool was moving when the worker had no direct line of sight to it. These findings suggest that AR displays have the potential to enhance situational awareness and improve the current state of user interfaces in industrial environments.
15:45-16:00, Paper SuCT2.5
Using Mixed Reality for Safe Physical Human-Robot Interaction
Subramanian, Karthik | Rochester Institute of Technology
Arora, Sarthak | Rochester Institute of Technology
Adamides, Odysseus | Rochester Institute of Technology
Sahin, Ferat | Rochester Institute of Technology
Keywords: Digital Twins and Simulation, Augmented Reality (AR) Applications, Robotics and Automation
Abstract: Ensuring safety in shared human-robot workspaces is a critical challenge in modern industrial environments. This paper explores a novel approach that leverages a mixed reality (MR) headset and digital twin technology to enhance human-robot safety. The system integrates real-time data from the physical environment into a digital twin, enabling interactions with both teleoperated and autonomous robots. The digital twin feeds data into a speed and separation monitoring (SSM) algorithm, which dynamically adjusts the speed of the robot or stops it to prevent collisions with human workers. To validate the accuracy and reliability (standard deviation) of the MR-based implementation, the system error is measured against data obtained from a motion capture system. The results demonstrate the effectiveness of using digital twins in conjunction with MR for improved safety in collaborative workspaces. This paper details the methodology, implementation, and evaluation of the system, highlighting its potential impact on the future of human-robot interaction (HRI).
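SSM is standardized in ISO/TS 15066: the robot must keep at least a protective separation distance that accounts for human and robot motion during reaction and braking. The sketch below is a simplified version of that check with illustrative parameters and a made-up speed-scaling policy; the paper's actual implementation and constants are not given in the abstract.

```python
# Simplified speed-and-separation-monitoring (SSM) check in the spirit of
# ISO/TS 15066. All parameter values are illustrative, not the authors'.
def protective_distance(v_h, v_r, t_react, t_stop, c=0.1, z=0.05):
    """Minimum separation: human and robot travel during reaction time,
    plus robot braking distance, plus intrusion/uncertainty margins."""
    s_human = v_h * (t_react + t_stop)   # human approach distance
    s_react = v_r * t_react              # robot motion before braking starts
    s_brake = 0.5 * v_r * t_stop         # braking distance (linear deceleration)
    return s_human + s_react + s_brake + c + z

def ssm_command(separation, v_h=1.6, v_r=1.0, t_react=0.1, t_stop=0.3):
    s_p = protective_distance(v_h, v_r, t_react, t_stop)
    if separation <= s_p:
        return 0.0                        # protective stop
    # Otherwise scale speed with the available margin (simple policy).
    return min(v_r, v_r * (separation - s_p) / s_p)

print(ssm_command(separation=0.8))  # slowed or stopped depending on margin
```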