Last updated on November 16, 2024. This conference program is tentative and subject to change.
Technical Program for Saturday November 16, 2024
|
SaAT1 |
Linus Pauling Lecture Hall |
Session 1: Teleoperation Systems and Robots |
Lecture session |
Chair: Takayama, Leila | Hoku Labs and Robust.AI |
|
13:00-13:15, Paper SaAT1.1 | |
ROBOT TELEOPERATIVO: Collaborative Cybernetic Systems for Immersive Remote Teleoperation |
|
Tefera, Yonas | Istituto Italiano Di Tecnologia |
Sarakoglou, Ioannis | Fondazione Istituto Italiano Di Tecnologia |
Deore, Siddharth Nimbajirao | Istituto Italiano Di Tecnologia |
Kim, Yaesol | Istituto Italiano Di Tecnologia |
Barasuol, Victor | Istituto Italiano Di Tecnologia |
Villa, Matteo | Italian Institute of Technology |
Anastasi, Sara | Istituto Nazionale Per l'Assicurazione Contro Gli Infortuni Sul Lavoro (INAIL) |
Caldwell, Darwin G. | Istituto Italiano Di Tecnologia |
Tsagarakis, Nikos | Italian Institute of Technology |
Semini, Claudio | Istituto Italiano Di Tecnologia |
Deshpande, Nikhil | University of Nottingham |
Keywords: Teleoperation Systems, 3D Tele-immersion, Human-in-the-Loop Control
Abstract: Remote robotic teleoperation is becoming vital in numerous fields, especially in hazardous environments where human safety is critical. In these scenarios, teleoperated robots are deployed to perform tasks, reducing human exposure to potential dangers. The “Robot Teleoperativo” project aimed to develop a novel, collaborative teleoperation hardware and software system dedicated to operating in hazard-prone environments, reducing risks to people’s safety and well-being. It employed, developed, and integrated advanced technologies in tele-locomotion, tele-manipulation, and remote human-robot interaction. This short paper provides an overview of the latest developments in the project and a preliminary system evaluation. The project has successfully demonstrated a teleoperation system that enables intuitive and immersive tele-locomotion, tele-manipulation, and remote human-robot interaction. The project showcases the potential for enhanced operator control and precision, offering a more natural and effective means of remote interaction in complex and hazardous environments.
|
|
13:15-13:30, Paper SaAT1.2 | |
Null Space Exploration for Enhanced Transparency Dissipation in TDPA-Based Teleoperation with Redundant Manipulators |
|
Bini, Andrea | Scuola Superiore Sant'Anna |
Novelli, Valerio | Scuola Superiore Sant'Anna |
Porcini, Francesco | Scuola Superiore Sant'Anna |
Filippeschi, Alessandro | Scuola Superiore Sant'Anna |
Avizzano, Carlo Alberto | Scuola Superiore Sant'Anna |
Frisoli, Antonio | Scuola Superiore Sant'Anna |
Keywords: Teleoperation Systems, Telepresence Robots, Haptic Feedback Technology
Abstract: In teleoperation, it is fundamental to achieve stability and transparency to accomplish a task successfully. There are several sources of instability, the worst being time delay over the communication channel. To counter this instability, the Time-Domain Passivity Approach (TDPA) is one of the most studied control techniques: it guarantees stability by passivating the active elements of the system. However, ensuring stability with TDPA largely degrades transparency by introducing artifacts such as position drift and force jittering. In teleoperation with redundant manipulators, it is possible to noticeably decrease these effects by exploiting null space motion. Accordingly, the redundant Time-Domain Passivity Approach (rTDPA) guarantees stability by using redundancy to dissipate energy through null space motion. However, this method does not exploit the null space efficiently, because dissipation may lead to configurations with a poorly usable null space. This paper presents a new dissipation strategy that aims to maximize the power dissipated in the null space. The method is based on a new index, called Nullability, which measures the capability of the manipulator to move in null space. The proposed dissipation policy ensures stability while maximizing the Nullability index, thus exploiting null space dissipation efficiently. The proposed method, called Nullability-based rTDPA (NrTDPA), is shown empirically to perform noticeably better than rTDPA in an experimental setup consisting of a leader robot and a follower robot contacting a stiff wall in the presence of time delay. In particular, the experiments showed that NrTDPA dissipates energy in the null space more efficiently than rTDPA, leading to fewer artifacts in the task space: both position and force drift errors are reduced by almost an order of magnitude.
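For readers new to TDPA, the two ingredients the abstract combines can be sketched in a few lines: an energy-bookkeeping passivity observer and a null-space projector for a redundant arm. This is an illustrative reconstruction, not the authors' NrTDPA implementation; the Jacobian, gains, and sample values below are hypothetical.

import numpy as np

def passivity_observer(E, f, v, dt):
    # Accumulate the energy exchanged over the channel; E < 0 flags
    # non-passive (potentially destabilizing) behavior that must be dissipated.
    return E + f * v * dt

def null_space_projector(J):
    # N = I - J^+ J maps joint velocities into motions that are invisible
    # in task space, where rTDPA-style methods can dissipate energy.
    return np.eye(J.shape[1]) - np.linalg.pinv(J) @ J

J = np.array([[1.0, 0.0, 0.2],      # hypothetical 2x3 task Jacobian
              [0.0, 1.0, 0.5]])
N = null_space_projector(J)
E = passivity_observer(E=0.0, f=2.0, v=-0.1, dt=0.001)
qdot_null = N @ np.array([0.3, -0.1, 0.8])   # candidate dissipation motion
print(E, np.round(J @ qdot_null, 12))        # task-space effect is ~zero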
|
|
13:30-13:45, Paper SaAT1.3 | |
Toward Space Exploration on Legs: ISS-To-Earth Teleoperation Experiments with a Quadruped Robot |
|
Seidel, Daniel | German Aerospace Center |
Schmidt, Annika | Technical University of Munich |
Luo, Xiaozhou | German Aerospace Center |
Raffin, Antonin | German Aerospace Center (DLR) |
Mayershofer, Luisa | German Aerospace Center |
Ehlert, Tristan | German Aerospace Center |
Calzolari, Davide | Technical University of Munich (TUM), German Aerospace Center (DLR) |
Hermann, Milan | German Aerospace Center |
Gumpert, Thomas | German Aerospace Center |
Loeffl, Florian | German Aerospace Center |
Den Exter, Emiel | European Space Agency (ESA) |
Köpken, Anne | German Aerospace Center (DLR) |
Luz, Rute | European Space Center |
Bauer, Adrian Simon | German Aerospace Center (DLR) |
Batti, Nesrine | German Aerospace Center (DLR) |
Lay, Florian Samuel | German Aerospace Center (DLR) |
Manaparampil, Ajithkumar Narayanan | German Aerospace Center |
Albu-Schäffer, Alin | DLR - German Aerospace Center |
Leidner, Daniel | German Aerospace Center (DLR) |
Schmaus, Peter | German Aerospace Center (DLR) |
Krüger, Thomas | European Space Agency (ESA) |
Lii, Neal Y. | German Aerospace Center (DLR) |
Keywords: Telepresence Robots, Long-Distance Robotic Control, Remote Collaboration
Abstract: In uneven and unknown terrains, a teleoperated legged robot enables the exploration of lunar and planetary surfaces that may be inaccessible to wheeled rovers. In the ongoing DLR-ESA Surface Avatar technology demonstration mission, we study and validate different technologies needed to realize the command of a heterogeneous robotic team with different command modalities. In our latest experiments, the crew on-board the International Space Station (ISS) is given a team of different robotic assets to command with Scalable Autonomy, meaning that they may choose to command the robot through direct control, shared control, or delegate tasks with Supervised Autonomy. One of the robots in our robotic team is DLR's small quadruped, Bert. Equipped with a robotic arm on its back, which also serves as a camera mount, Bert can observe its surroundings and pick up small objects. Due to its series-elastic joints, the hardware is robust to impacts, which is critical for space deployment. Applying adaptive learning strategies, Bert's gaits can be (re)trained directly on hardware to adapt to different environmental and gravity conditions. The arm mounted on Bert's back, as well as the quadruped itself, can be commanded through direct control by the astronauts via a joystick on-board the ISS. Additionally, simple preset task-level commands complement the user interface to ease the astronauts' workload. This paper presents the ISS experiments of the first legged robot to be telecommanded from space. Through examining different aspects of the telerobotic performance of Bert, as well as ISS crew feedback, we discuss the feasibility of teleoperated walking robots in space exploration.
|
|
13:45-14:00, Paper SaAT1.4 | |
Beyond Bunkspace: Telepresence for Deep Sea Exploration |
|
Crosby, Alison | University of California, Santa Cruz |
Takayama, Leila | Hoku Labs and Robust.AI |
Martin, Eric James | MBARI |
Matsumoto, George | Monterey Bay Aquarium Research Institute |
Katija, Kakani | Monterey Bay Aquarium Research Institute |
Caress, David | Monterey Bay Aquarium Research Institute |
Keywords: Remote Collaboration, User Experience (UX) in HCI, Psychology of Human-Computer Interaction
Abstract: During this multi-year deployment of research vessel (R/V) telepresence (2019-2022), ocean science teams engaged in synchronous working sessions using visual, audio, and textual communication channels over satellite networks while at sea. The system discussed in this paper provides a unique look at telepresence where the stakes are high regarding safety for the people aboard the research vessel, cost per minute of ship time, and the opportunity cost of delays in coordination between the scientists, robot operators, and ship's crew. These high stakes are addressed by creating a formal organizational chain of command with clearly defined roles and responsibilities, which also played out in the use of telepresence. We report the empirical results of our interviews with R/V telepresence users (N=29) and behavioral observations from shadowing three telepresence sessions. We present our findings as a source of inspiration for hybrid, geographically distributed work teams, including using more formalized telepresence roles with differing levels of access to communication channels.
|
|
14:00-14:15, Paper SaAT1.5 | |
ATMAS: Assistive Teleoperation Method Using Augmented Reality and Switching Control |
|
Saha, Somdeb | Tata Consultancy Services |
Gaonkar, Sahil | Tata Consultancy Services |
Parab, Shubham | Tata Consultancy Services |
Lima, Rolif | Tata Consultancy Services |
Vatsal, Vighnesh | TCS Research, Tata Consultancy Services Ltd |
Vakharia, Vismay | Tata Consultancy Services |
Das, Kaushik | TCS Research |
Keywords: Teleoperation Systems, Augmented Reality (AR) Applications, Robotics and Automation
Abstract: This paper presents an assistive teleoperation method using a combination of Augmented Reality (AR) markers and control switching between the operator and robot. This method is designed to enhance remote manipulation tasks in retail environments such as convenience stores. The system integrates a novel intention recognition algorithm for predicting goals, augmented reality markers for visual guidance, and a variable autonomy framework. The system is able to adapt when the operator's goal switches, a scenario in which existing methods fail. A user study with 7 participants compared our method against two other teleoperation methods in terms of objective and subjective metrics. Results showed that our method significantly reduced collisions during task execution. The study provides insights into the strengths and limitations of augmented reality assistance and variable autonomy in teleoperation, laying groundwork for future research in enhancing human-robot collaboration for retail automation tasks.
|
|
14:15-14:30, Paper SaAT1.6 | |
A Scalable Human-Robot Telepresence Framework for Autonomous Space Systems |
|
Tumbar, Andrei | Jet Propulsion Laboratory |
Yen, Jeng | NASA Jet Propulsion Laboratory |
Myint, Steven | Jet Propulsion Laboratory |
Hartman, Frank | NASA JPL Caltech |
Kim, Junggon | Jet Propulsion Laboratory |
Dor, Harel | Jet Propulsion Laboratory, California Institute of Technology |
Seto, Brittany | Jet Propulsion Laboratory |
Huang, Justin | Jet Propulsion Laboratory |
Keywords: Teleoperation Systems, Human-in-the-Loop Control, Robotics and Automation
Abstract: This paper discusses our past and present experience with operating Mars surface robotics missions. We review the current framework and methodology for Mars 2020 (Perseverance) rover surface operations, and outline our ongoing development of a framework that can be easily adapted to new types of spacecraft. Finally, we discuss our test platform, "Scarecrow", which has been adapted to help us develop and test our new showcase for future space robotics missions.
|
|
SaAT2 |
151 Crellin |
Session 2: UI Design and Human-In-The-Loop Control |
Lecture session |
Chair: Arif, Ahmed Sabbir | University of California, Merced |
|
13:00-13:15, Paper SaAT2.1 | |
Feasibility Study of Finger Interface for Miniature Surgical Instruments |
|
Cha, Serin | Ewha Womans University |
Kim, Sebin | Ewha Womans University |
Ryu, Seok Chang | Ewha Womans University |
Keywords: User Interface (UI) Design, Teleoperation Systems, Ergonomic Design
Abstract: This paper presents a study on the design of a finger interface to improve the teleoperation of a miniaturized surgical instrument with a bendable distal section. A unique design principle employing the kinematic similarity between human motion and the follower robot has been proposed for such instruments, which may result in embodied teleoperation, enhancing the overall telepresence experience. According to the principle, a design for an index finger interface was proposed, consisting of a sub-unit measuring finger motion and a handle. When the interface is connected to a robotic platform, it can control the entire tool motion in two separate ways: distal bending by the sub-unit and overall tool motion by the handle. A human-subject test was conducted for trajectory-following tasks in a virtual environment, and participants' performance scores were quantified to analyze the learning curve. Based on the satisfactory learning rate of 75.78% across all nine subjects, the proposed finger interface could be a promising interface for a miniature surgical instrument with a long bendable section at the distal end.
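The 75.78% figure reads as a standard power-law learning rate: completion time falls to about 75.78% of its previous value with each doubling of practice trials. A small sketch of that generic model, with invented numbers rather than the paper's data:

import numpy as np

def trial_time(n, T1=60.0, rate=0.7578):
    # Power-law learning curve: T_n = T_1 * n**log2(rate); a rate of 0.7578
    # cuts completion time to ~75.78% with each doubling of trials.
    return T1 * n ** np.log2(rate)

for n in (1, 2, 4, 8):
    print(n, round(trial_time(n), 1))   # 60.0, 45.5, 34.5, 26.1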
|
|
13:15-13:30, Paper SaAT2.2 | |
Augmenting Robot Teleoperation with Shared Autonomy Via Model Predictive Control |
|
Lima, Rolif | Tata Consultancy Services |
Saha, Somdeb | Tata Consultancy Services |
Vakharia, Vismay | Tata Consultancy Services |
Vatsal, Vighnesh | TCS Research, Tata Consultancy Services Ltd |
Das, Kaushik | TCS Research |
Keywords: Human-in-the-Loop Control, Teleoperation Systems, Autonomy and Intelligent Systems
Abstract: Shared-autonomy-enabled teleoperation systems minimise the cognitive load on an operator by providing autonomous assistance during task execution. In contrast to prior approaches using policy blending methods that employ a predict-then-act principle, where the robot takes over when confidence in a goal is high, our proposed approach involves continuous policy adaptation. This approach utilises the augmented state of the robot, incorporating both the operator's inputs and the robot's autonomous assistance, to provide the final assistive control to the robot. To address the issue of the operator's trust in the robot, we formulate the approach as an optimal control problem with the objective of following the operator's input commands while simultaneously adapting the user's inputs to complete the task. We employ a Model Predictive Control (MPC) framework to solve this problem. We evaluated this framework through a user study on multiple goal-picking tasks and compared it against pure teleoperation and proximity-based assistance methods. The results of the study show superior performance of our approach over the other methods in terms of trial completion times, collision avoidance, perceived ease of use, and responsive behaviour, indicating its effectiveness in improving teleoperation performance while maintaining user trust in the system.
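A one-step stand-in for the receding-horizon optimization described above (the paper's actual MPC formulation is richer): the assistive command trades off staying close to the operator's input against progress toward the inferred goal, and this simplified version has a closed-form minimizer. The single-integrator model, weights, and states are assumptions for illustration.

import numpy as np

def assistive_input(u_op, x, goal, w=50.0, dt=0.05):
    # Minimizes ||u - u_op||^2 + w * ||x + u*dt - goal||^2 in closed form,
    # adapting the user's input rather than switching control away from them.
    return (u_op + w * dt * (goal - x)) / (1.0 + w * dt ** 2)

x = np.array([0.20, 0.00])        # current end-effector position (made up)
goal = np.array([0.50, 0.30])     # goal inferred from operator behavior
u_op = np.array([0.10, 0.00])     # raw operator velocity command
print(assistive_input(u_op, x, goal))   # command nudged toward the goal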
|
|
13:30-13:45, Paper SaAT2.3 | |
Handling Constant and Changing Latency with Graphical User Interfaces: A Study with Remote Crane Operators |
|
Sitompul, Taufik Akbar | Norwegian University of Science and Technology |
Fahlén, Theodor | Mälardalen University |
Lindell, Rikard | Mälardalen University |
Keywords: User Interface (UI) Design, User Experience (UX) in HCI, Augmented Reality Interfaces
Abstract: Newer container cranes can be operated remotely from a control room. However, the data transmission between operators in the control room and their cranes can introduce high latency that may affect operators' ability to work safely and productively. We evaluated two types of graphical user interfaces (GUIs) that could support operators in remotely controlling their container cranes in the presence of latency. The first GUI predicted how the crane would move based on the operator's real-time input, while the second GUI visualized which prior input made by the operator was currently being executed by the crane system. We involved seven remote crane operators in two experiments: the first compared both GUIs against a condition without additional visual support under constant latency, and the second compared both GUIs against the same condition under continuously changing latency. Due to the small number of participants, it was not possible to confidently conclude that one GUI was significantly better based on the operators' performance data. However, based on the operators' feedback, both GUIs were perceived as less useful regardless of whether the latency was constant or continuously changing.
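Minimal sketches of the two GUI ideas as described (not the study's code; timestamps, gains, and the log format are invented): the first extrapolates the crane pose by the current latency, the second looks up which past input is being executed now.

def predicted_position(x, v, latency_s):
    # GUI 1: dead-reckon where the crane will be once the operator's
    # present input actually reaches it.
    return x + v * latency_s

def executing_input(input_log, now_s, latency_s):
    # GUI 2: the input the crane is executing right now is the one
    # issued roughly `latency_s` seconds ago.
    t_exec = now_s - latency_s
    return min(input_log, key=lambda rec: abs(rec[0] - t_exec))

log = [(0.0, 0.2), (0.1, 0.4), (0.2, 0.4)]   # (timestamp, joystick value)
print(predicted_position(5.0, 0.4, 0.6), executing_input(log, 0.7, 0.6))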
|
|
13:45-14:00, Paper SaAT2.4 | |
Self-Centering 3-DoF Feet Controller for Hands-Free Locomotion Control in Telepresence and Virtual Reality |
|
Memmesheimer, Raphael | University of Bonn |
Lenz, Christian | University of Bonn |
Schwarz, Max | University of Bonn |
Schreiber, Michael | University of Bonn |
Behnke, Sven | University of Bonn |
Keywords: Human-in-the-Loop Control, Telepresence Robots, Teleoperation Systems
Abstract: We present a novel seated feet controller with 3 DoF for locomotion control in telepresence robotics and virtual reality environments. Tilting the feet on two axes yields forward, backward, and sideways motion. In addition, a separate rotary joint allows rotation around the vertical axis. Springs attached to all joints self-center the controller. An HTC Vive tracker is used to translate the controller's orientation into locomotion commands. The proposed self-centering feet controller was used successfully in the ANA Avatar XPRIZE competition, where a naive operator drove the robot over a long distance, passing obstacles while solving various interaction and manipulation tasks along the way. We publicly provide the models of the mostly 3D-printed feet controller for reproduction.
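The mapping itself is simple enough to sketch: tilt on two axes plus rotation about the vertical axis becomes a velocity command, with a deadzone around the spring-centered neutral pose. The gains and deadzone width below are assumptions, not the published parameters.

import numpy as np

def feet_to_twist(pitch, roll, yaw, dead=0.08, gains=(0.6, 0.4, 0.9)):
    # pitch -> forward/backward, roll -> sideways, rotary joint -> turning;
    # inputs in radians from the tracker, outputs as robot velocities.
    def shape(angle, gain):
        if abs(angle) < dead:               # ignore drift near neutral
            return 0.0
        return gain * (angle - np.sign(angle) * dead)
    return tuple(shape(a, g) for a, g in zip((pitch, roll, yaw), gains))

print(feet_to_twist(pitch=0.25, roll=0.02, yaw=-0.30))  # (vx, vy, wz)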
|
|
14:00-14:15, Paper SaAT2.5 | |
ThumbDriver: Telepresence Robot Control with a Finger-Worn Mouse |
|
Zand, Ghazal | University of California, Merced |
Arif, Ahmed Sabbir | University of California, Merced |
Keywords: User Interface (UI) Design, Teleoperation Systems, Telepresence Robots
Abstract: ThumbDriver enables users to remotely operate a telepresence robot using an off-the-shelf finger-wearable mouse. The system carefully maps typical mouse actions to various robot operations, facilitating smooth and precise control. In a user study, we compared ThumbDriver with a keyboard-based control method. The results showed that ThumbDriver required significantly fewer actions to perform teleoperation tasks, leading to reduced average task completion times. Participants found ThumbDriver to be faster, more precise, and easier to learn and use. All participants expressed a preference for continuing to use ThumbDriver for operating telepresence robots.
|
|
14:15-14:30, Paper SaAT2.6 | |
Beyond the Classroom: A Systematic Review of Revolutionizing Education with Immersive Virtual Reality |
|
Yu, Liangyue | University of Glasgow |
Kizilkaya, Burak | University of Glasgow |
Qi, Liyuan | University of Glasgow |
Ge, Yao | University of Glasgow |
Ansari, Shuja | University of Glasgow |
Popoola, Olaoluwa | University of Glasgow |
Imran, Muhammad Ali | University of Glasgow |
Ahmad, Wasim | University of Glasgow |
Keywords: 3D Tele-immersion, Virtual Reality (VR) Integration, Real-Time Communication
Abstract: Virtual Reality (VR) has garnered significant attention in education, revolutionizing traditional learning environments into immersive and interactive experiences. In this study, we conduct a systematic review to examine advancements, applications, and potential future research directions, providing insights and perspectives on VR-enhanced education. Specifically, we explore the integration of Head-Mounted Displays (HMDs) and real-time 360-degree video streams to create enriched learning environments for immersive and real-time interaction. Additionally, a case study on immersive VR telepresence is conducted to demonstrate the use case of VR-enhanced education and evaluate the quality of the user experience.
|
|
SaBT1 |
Linus Pauling Lecture Hall |
Session 3: Haptic Interfaces |
Lecture session |
Chair: Qi, Wen | South China University of Technology |
|
14:45-15:00, Paper SaBT1.1 | |
FerroVibe: Towards a Modular Tactile Device |
|
Singh, Harsimran | DLR |
Perez Marcilla, Luis | DLR |
Banerjee, Premankur | University of Southern California |
Rothammer, Michael | Hapticlabs |
Hulin, Thomas | German Aerospace Center (DLR) |
Keywords: Haptic Feedback Technology, Wearable Technology
Abstract: This paper presents a proof of concept of the modular nature of the ferrofluid-based tactile device FerroVibe. The modularity of FerroVibe allows for customization in degrees of freedom, force feedback, and vibrational cues by strategically positioning magnetic actuators around the central assembly containing a neodymium magnet and ferrofluid. Three distinct configurations of FerroVibe are introduced, each offering unique actuation mechanisms and physical characteristics that could be tailored to different application requirements. The paper showcases the proof of concept and details the working principle of each configuration, highlighting the strengths and limitations of each design. An initial comparative analysis is conducted to evaluate the trade-offs in terms of power consumption, speed, size, weight, and feedback fidelity. The results demonstrate that FerroVibe's modular architecture provides a flexible and effective solution for a wide range of tactile feedback applications, making it a promising candidate for future advancements in haptic technology.
|
|
15:00-15:15, Paper SaBT1.2 | |
Three Degree-Of-Freedom Soft Continuum Kinesthetic Haptic Display for Telemanipulation Via Sensory Substitution at the Finger |
|
Su, Jiaji | Case Western Reserve University |
Zuo, Kaiwen | Case Western Reserve University |
Chua, Zonghe | Case Western Reserve University |
Keywords: Haptic Interfaces for Telerobotics, Haptic Feedback Technology, Robotic Actuation Techniques
Abstract: Sensory substitution is an effective approach for displaying stable haptic feedback to a teleoperator under time delay. The finger is highly articulated, and can sense movement and force in many directions, making it a promising location for sensory substitution based on kinesthetic feedback. However, existing finger kinesthetic devices either provide only one-degree-of-freedom feedback, are bulky, or have low force output. Soft pneumatic actuators have high power density, making them suitable for realizing high force kinesthetic feedback in a compact form factor. We present a soft pneumatic handheld kinesthetic feedback device for the index finger that is controlled using a constant curvature kinematic model. It has respective position and force ranges of ±3.18mm and ±1.00N laterally, and ±4.89mm and ±6.01N vertically, indicating its high power density and compactness. The average open-loop radial position and force accuracy of the kinematic model are 0.72mm and 0.34N. Its 3Hz bandwidth makes it suitable for moderate speed haptic interactions in soft environments. We demonstrate the three-dimensional kinesthetic force feedback capability of our device for sensory substitution at the index finger in a virtual telemanipulation scenario.
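For intuition, the planar constant-curvature model this kind of control relies on has a closed form: a segment of arc length L bent to curvature kappa places its tip as below. This is the generic textbook model, not the device's calibrated version; the values are illustrative.

import numpy as np

def cc_tip(kappa, L):
    # Tip of a constant-curvature segment of arc length L (base tangent
    # along z); reduces to a straight segment as kappa -> 0.
    if abs(kappa) < 1e-9:
        return 0.0, L
    x = (1.0 - np.cos(kappa * L)) / kappa   # lateral deflection
    z = np.sin(kappa * L) / kappa           # height along the base tangent
    return x, z

print(cc_tip(kappa=20.0, L=0.03))   # a 30 mm segment bent by ~34 degrees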
|
|
15:15-15:30, Paper SaBT1.3 | |
CoaxHaptics-3RRR: A Novel Mechanically Overdetermined Haptic Interaction Device Based on a Spherical Parallel Mechanism |
|
Dinc, Huseyin Tugcan | KAIST |
Hulin, Thomas | German Aerospace Center (DLR) |
Rothammer, Michael | Hapticlabs |
Seong, Hyeonseok | Korea Advanced Institute of Science & Technology (KAIST) |
Willberg, Bertram | German Aerospace Center (DLR) |
Pleintinger, Benedikt | Deutsches Zentrum Für Luft Und Raumfahrt |
Ryu, Jee-Hwan | Korea Advanced Institute of Science and Technology |
Ott, Christian | TU Wien |
Keywords: Haptic Interfaces for Telerobotics, Haptic Feedback Technology
Abstract: This paper presents the CoaxHaptics-3RRR, a novel concept for haptic interaction devices that is based on a 3-RRR spherical parallel mechanism (SPM). The novelty lies in its mechanical overdetermination through a central hollow shaft, which brings two advantages. First, the device can be built with high rigidity with regard to translational degrees of freedom (DoF). Second, it enables a lighter design with less rotational inertia, as the moving links do not need to withstand translational or gravitational forces. In order to fully exploit these advantages, an optimization process has been conducted that simultaneously optimized for workspace, manipulability, inertia, and structural stiffness. The resulting functional demonstrator provides an unlimited workspace around the shaft axis and +/- 55 degrees in the other two rotational DoF, thus covering a large portion of the human wrist's rotational range. Tests confirm the validity and superiority of the concept over existing devices, making it a promising solution for the category of mechanically overdetermined haptic devices.
|
|
15:30-15:45, Paper SaBT1.4 | |
Teleoperation Control of Humanoid Robot Wheeled Chassis Based on Plantar Pressure Perception |
|
Wang, Fei-Long | Paris Saclay University |
Qi, Wen | South China University of Technology |
Su, Hang | Politecnico Di Milano |
Alfayad, Samer | Paris Saclay University |
Keywords: Teleoperation Systems, Wearable Technology, Sensor Fusion for Robotics
Abstract: To enhance the intuitiveness and naturalness of teleoperation control for a humanoid robot's wheeled chassis, this paper introduces a teleoperation control system based on plantar pressure-sensing shoes and presents a novel mapping strategy. This strategy captures the user's movement intentions by utilizing the plantar pressure sensors, enabling remote control of the humanoid robot's wheeled chassis through shifts in the user's center of gravity, without requiring physical movement. Consequently, we designed and executed experiments to collect and analyze plantar pressure data, followed by polynomial fitting to establish the specific mapping function. The experimental results demonstrate that the system is highly responsive and that the nonlinear model more accurately represents the mapping process.
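A minimal version of the calibration step described, fitting a polynomial mapping from measured center-of-pressure shift to a chassis velocity command; the calibration pairs and polynomial degree below are invented for illustration, not the paper's data.

import numpy as np

# Hypothetical calibration pairs: normalized forward CoP shift vs. velocity.
cop = np.array([-1.0, -0.5, -0.2, 0.0, 0.2, 0.5, 1.0])
vel = np.array([-0.8, -0.3, -0.05, 0.0, 0.05, 0.3, 0.8])

coeffs = np.polyfit(cop, vel, deg=3)     # nonlinear mapping function
v_cmd = np.polyval(coeffs, 0.35)         # runtime: measured shift -> command
print(np.round(coeffs, 3), round(float(v_cmd), 3))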
|
|
SaBT2 |
151 Crellin |
Session 4: BCI/UX in Telepresence |
Lecture session |
Chair: Hu, Yaoping | University of Calgary |
|
14:45-15:00, Paper SaBT2.1 | |
Linking Cognitive Decision-Making with Brain Activity During Haptic Interactions in Virtual Environments |
|
Tarng, Stanley | University of Calgary |
Hu, Yaoping | University of Calgary |
Keywords: Behavioral Modeling, Cognitive Modeling, Psychology of Human-Computer Interaction
Abstract: Haptic feedback is essential for creating immersive and intuitive user experiences in remote and virtual environments (VEs). However, the cognitive processes underlying haptic interactions and their connection to brain activity remain underexplored. This study investigates the relationship between behavioral modeling through the Drift-Diffusion Model (DDM) and brain responses measured by event-related potentials (ERPs), focusing on the P300 and N200 components. A user study was conducted in which users were tasked to trace a helical curve while perceiving force and vibrotactile cues in a 3D stereoscopic VE. Participants wore an EEG cap to record their brain activity, and reaction times (RTs) were recorded for each haptic interaction. The study analyzed the DDM's drift rate parameter, which corresponds to the speed of evidence accumulation, and compared it with the slope from cue onset to the P300 component (P300 slope) and the slope from the N200 to the P300 component (NP300 slope), obtained from ERPs. Results revealed an agreement between DDM drift rates and ERP slopes, particularly with the NP300 slope, suggesting that the NP300 slope may more accurately represent the cognitive processes reflected by the DDM when perceiving haptic cues. The agreement between behavioral data (RTs) and brain responses (ERPs) suggests that the DDM could be a useful tool for inferring underlying brain activity by analyzing RTs during haptic interactions.
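A bare-bones simulation of the Drift-Diffusion Model referenced above, in which noisy evidence accumulates at the drift rate until a decision threshold is crossed; the parameters are illustrative, not fitted values from the study.

import numpy as np

def ddm_trial(drift, threshold=1.0, noise=1.0, dt=0.001, seed=0):
    # Euler-Maruyama integration of dx = drift*dt + noise*dW until |x|
    # hits the threshold; returns (reaction time in s, choice +1/-1).
    rng = np.random.default_rng(seed)
    x = t = 0.0
    while abs(x) < threshold:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return t, int(np.sign(x))

print(ddm_trial(drift=0.8))   # higher drift -> faster, more reliable decisions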
|
|
15:00-15:15, Paper SaBT2.2 | |
Motor Imagery Teleoperation of a Mobile Robot Using a Low-Cost Brain-Computer Interface for Multi-Day Validation |
|
An, Yujin | Caltech |
Mitchell, Daniel | University of Glasgow |
Lathrop, John | Caltech |
Flynn, David | University of Glasgow |
Chung, Soon-Jo | Caltech |
Keywords: Brain-Computer Interfaces (BCI), Accessibility in Telepresence, Human-Centered Machine Learning
Abstract: Brain-computer interfaces (BCI) have the potential to provide transformative control in prosthetics, assistive technologies (wheelchairs), robotics, and human-computer interfaces. While Motor Imagery (MI) offers an intuitive approach to BCI control, its practical implementation is often limited by the requirement for expensive devices, extensive training data, and complex algorithms, leading to user fatigue and reduced accessibility. In this paper, we demonstrate that effective MI-BCI control of a mobile robot in real-world settings can be achieved using a fine-tuned Deep Neural Network (DNN) with a sliding window, eliminating the need for complex feature extraction methods. By employing a low-cost (~$3k), 16-channel, non-invasive, open-source electroencephalogram (EEG) device, we enabled four users to teleoperate a quadruped robot over three days. Our approach reduces the required training data by 70%, significantly minimizing user fatigue from lengthy data collection sessions. The system achieved 78% accuracy on a single-day validation dataset and maintained 75% validation accuracy over three days without extensive day-to-day retraining. For real-world robot command classification, we achieved an average accuracy of 62%. By providing empirical evidence that MI-BCI systems can maintain performance over multiple days with reduced training data for the DNN and a low-cost EEG device, our work enhances the practicality and accessibility of BCI technology. This advancement makes BCI applications more feasible for real-world scenarios, particularly in controlling robotic systems.
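The sliding-window step is easy to picture: the continuous EEG stream is cut into overlapping segments that are classified one at a time by the DNN. The window and hop sizes below are assumptions (roughly 0.5 s windows at a 250 Hz sampling rate), not the paper's settings.

import numpy as np

def sliding_windows(eeg, win=125, hop=25):
    # eeg: (channels, samples) -> (n_windows, channels, win); each window
    # is classified independently, so commands update every `hop` samples.
    starts = range(0, eeg.shape[1] - win + 1, hop)
    return np.stack([eeg[:, s:s + win] for s in starts])

mock = np.random.randn(16, 1000)        # 16-channel mock recording
print(sliding_windows(mock).shape)      # (36, 16, 125)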
|
|
15:15-15:30, Paper SaBT2.3 | |
Mean-Field Representation for EEG Classifications |
|
Roy, Suryatapa | University of Calgary |
Hu, Yaoping | University of Calgary |
Martinuzzi, Robert J. | University of Calgary |
Keywords: Brain-Computer Interfaces (BCI), Cognitive Modeling
Abstract: Transformer-based classifiers (e.g., Conformer) are state-of-the-art for classifying electroencephalographic (EEG) signals. One main drawback of these classifiers is their lack of cross-individual generalization. Hence, we proposed a novel mean-field (MF) representation to remedy this drawback. Being quasi-stationary within certain brain regions over a time period, this representation enabled constrained learning for classifying EEG signals. We implemented a transformer – MFT – by cascading Conformer to the MF representation. Using 6 EEG datasets, we conducted a comparison between MFT and Conformer. This comparison revealed that MFT yielded outcomes similar to Conformer in individual-specific classifications but better performance than Conformer in cross-individual classifications. Importantly, the representation enabled MFT classifications to be robust, with reduced overfitting, and to support cross-individual generalization. Moreover, the classifications were interpretable in terms of the MF representation. Such interpretability may be beneficial for EEG-based brain-machine interfaces to equip telepresence.
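One plausible reading of the representation (an assumption on our part, not necessarily the authors' exact formulation): per-channel signals are pooled into region-level mean fields, which vary less across individuals than raw channels do. The region grouping below is invented.

import numpy as np

def mean_field(eeg, regions):
    # eeg: (channels, time); regions: name -> channel indices. Returns one
    # averaged time series per brain region as a constrained representation.
    return np.stack([eeg[idx].mean(axis=0) for idx in regions.values()])

eeg = np.random.randn(16, 500)
regions = {"frontal": [0, 1, 2], "central": [6, 7, 8], "occipital": [13, 14, 15]}
print(mean_field(eeg, regions).shape)   # (3, 500) -> input to the transformer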
|
|
15:30-15:45, Paper SaBT2.4 | |
Design and Evaluation of a Low-Profile Haptic Interface Based on Surface Electrical Nerve Stimulation |
|
Jakes, Rachel S. | Case Western Reserve University |
Mesias, Luis Enrique | Case Western Reserve University |
Santos, Veronica J. | University of California, Los Angeles |
Fu, Michael J. | Case Western Reserve University |
Tyler, Dustin | Case Western Reserve University |
Keywords: Haptic Feedback Technology, Wearable Technology, Haptic Interfaces for Telerobotics
Abstract: Haptic feedback in telepresence applications is vital for remote task performance and object manipulation. To maximize telepresence capabilities, haptic systems need to be integrated with immersive telepresence interfaces such as virtual reality, which offer benefits such as improved spatial awareness and operator mobility. Compatibility with the mobility of these systems and with their built-in optical hand tracking requires low-profile wearable interfaces. Further, for full functional benefit, haptic systems must be able to elicit spatially congruent sensation without impeding operator movement and physical object interactions. In this paper, we describe the design of a lightweight, compact, and adaptable haptic interface based on surface electrical nerve stimulation. We demonstrate its capacity to consistently generate spatially congruent sensation on all five fingertips without occlusion, as well as its compatibility with headset-based optical hand tracking.
|
|
SaCT1 |
Linus Pauling Lecture Hall |
Session 5: Infrastructure and Systems |
Lecture session |
Chair: Lii, Neal Y. | German Aerospace Center (DLR) |
|
15:45-16:00, Paper SaCT1.1 | |
Binary Coded Phase Distribution of a Novel Intelligent Reflecting Surface for Beams-Steering |
|
Baki, A. K. M. | Ahsanullah University of Science and Technology (AUST) |
Hasan, Md. Shahriar | Ahsanullah University of Science and Technology |
Keywords: 5G and Advanced Networking, Internet of Things (IoT) Integration, Intelligent Control Systems
Abstract: Although 5G is the latest communication technology, there is still room for improvement, and new technologies will be implemented in future 6G systems. Areas of improvement include cell capacity, coverage area, device density, energy efficiency, spectrum usage, latency, location accuracy, and user experience. The Intelligent Reflecting Surface (IRS) has received significant attention for future 6G applications. An IRS has the potential to improve communication performance by reconfiguring the wireless propagation channel, and it can play a significant role in telepresence systems requiring high-quality, real-time communication. In this paper, we describe a novel envelope-shaped IRS for multiple beam forming and beam steering of the reflected signal at 5.8 GHz. We chose the 5.8 GHz band for its wide range of applications. The reflection phase of the unit cell of the proposed IRS can be varied from -108° to 133° (a total phase variation of 241°) by changing the variable capacitance of the IRS. The achievable minimum magnitude of the reflection coefficient is -2.3 dB, which is also the minimum value at 5.8 GHz published so far. Such a wide phase variation and minimum magnitude response are helpful for multiple beam forming and beam steering of the signal reflected from the proposed IRS.
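To see what binary coded phase buys, here is a generic 1-bit reflectarray sketch (a textbook array-factor model, not the authors' unit-cell design): each cell approximates the ideal steering phase with 0 or pi, which steers the main reflected beam toward the target angle at the cost of a mirror quantization lobe. The cell count and spacing are assumptions; only the 5.8 GHz frequency comes from the abstract.

import numpy as np

c, f = 3e8, 5.8e9
lam = c / f
N, d = 32, lam / 2                       # 32 cells at half-wavelength spacing
theta0 = np.deg2rad(30)                  # desired reflection angle

n = np.arange(N)
ideal = np.mod(-2 * np.pi * d * n * np.sin(theta0) / lam, 2 * np.pi)
phase = np.pi * ((ideal > np.pi / 2) & (ideal < 3 * np.pi / 2))  # 1-bit coding

theta = np.linspace(-np.pi / 2, np.pi / 2, 721)
af = np.exp(1j * (2 * np.pi * d * np.outer(np.sin(theta), n) / lam + phase))
pattern = np.abs(af.sum(axis=1))
print(np.rad2deg(theta[np.argmax(pattern)]))  # peak near +/-30 degrees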
|
|
16:00-16:15, Paper SaCT1.2 | |
Conory I and Tilt Mechanism: How the “Tilt” Movement Affects Participants’ Impression of the Telepresence Robot – a Pilot Study* |
|
Zguda, Paulina | Jagiellonian University, Doctoral School in the Humanities |
Mizuuchi, Ikuo | Tokyo University of Agriculture and Technology |
N, Tuxunbate | Tokyo University of Agriculture and Technology |
Hiyoshi, Kenta | Meidensha Corporation |
Keywords: Telepresence Robots, Remote Collaboration, Adaptive User Interfaces
Abstract: This pilot study examines the impact of neck movements in a telepresence robot during remote conversations with 25 participants. Head tilting improved perceived attributes such as Friendliness, Approachability, Responsiveness, Usefulness, and Engagement. Participants reported increased engagement and comfort due to visual changes and neck movements, although some found them slightly unnatural. Varied questions and human-like behaviors enhanced the interaction, suggesting that non-verbal cues can significantly improve human-robot interactions.
|
|
16:15-16:30, Paper SaCT1.3 | |
Investigating a Simple Technique to Alter Viewpoint Height in Immersive 360° Video |
|
Center, Evan G. | University of Oulu |
Widagdo, Prabancoro Adhi Catur | National Taiwan University of Science and Technology |
Pouke, Matti | University of Oulu |
Suomalainen, Markku | VTT Technical Research Centre of Finland |
Mimnaugh, Katherine J. | University of Oulu |
Aboud, Abdulsatar Muhsin | Optomed |
Ojala, Timo | University of Oulu |
LaValle, Steven M. | University of Illinois at Urbana-Champaign |
Keywords: Virtual Reality (VR) Integration, 3D Tele-immersion, Advanced Image and Video Processing
Abstract: A user's perceived height can have a significant impact on their experience in an immersive telepresence environment. However, virtually manipulating the user's height (if physical adjustment of the camera is not possible) introduces distortions which may counteract positive effects caused by an adjusted height. In a user study of 68 participants, we implemented a simple method for virtually adjusting a user's height in an immersive telepresence meeting which was pre-recorded via a 360 degree camera to observe the trade-off between the height shift and its ensuing distortions. The shifted-height condition was created via software by changing the position of the virtual camera within the 3-D projection sphere, a simple technique which introduces mild visual distortions. Participants were asked to attend two meetings in immersive telepresence, at normal and increased heights. Our results indicate that while participants were able to detect the visual distortions at an above chance rate, these distortions had little influence on the participants' preferences between conditions, supporting this technique as a viable method of virtually altering a user's height in immersive telepresence environments.
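The geometry behind the technique is compact: with the 360° image textured on a sphere, raising the virtual camera by dz means a ray cast at a given elevation now samples the sphere texture at a slightly different elevation, and that mismatch is the mild distortion the study measures. A 2-D sketch under that reading (the study's renderer may differ):

import numpy as np

def sampled_elevation(elev, dz):
    # Ray from the raised camera at (0, dz) with direction (cos e, sin e);
    # solve |o + t*d| = 1 for the unit image sphere and return the elevation
    # of the texture point actually sampled.
    d = np.array([np.cos(elev), np.sin(elev)])
    o = np.array([0.0, dz])
    od = o @ d
    t = -od + np.sqrt(od ** 2 - (o @ o) + 1.0)
    p = o + t * d
    return np.arctan2(p[1], p[0])

# Looking at the old horizon from a camera raised by 10% of the sphere radius
# samples texture ~5.7 degrees above it, so scene content shifts downward,
# as if the viewer were taller.
print(np.rad2deg(sampled_elevation(np.deg2rad(0.0), dz=0.10)))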
|
|
16:30-16:45, Paper SaCT1.4 | |
SENTINEL: Scalable Edge Network for Telepresence and Integrated AI |
|
Aly, Mohamed | CalPoly Pomona |
Ung, James | California State Polytechnic University, Pomona |
Kim, Hyung Jin | California State Polytechnic University, Pomona |
He, Yutao | UCLA |
Cheng, Benny | NSWC Corona |
Hwu, Wen-mei | UIUC |
Keywords: Edge Computing in Telepresence, Natural Language Processing (NLP) for Human Learning
Abstract: This paper presents a research platform centered on an RC car with a hybrid system utilizing Turing Pi and DeskPi Super 6C, with multiple Raspberry Pi Compute Module 4 nodes for edge computing. The platform is designed to facilitate telepresence applications by integrating advanced computer vision (YOLO) and natural language processing (NLP) models (BERT), managed by a backend server. The system's NLP module achieves command-processing latencies ranging from 1000 to 3000 μs in ideal scenarios, enabling near-instantaneous responses in telepresence scenarios. The YOLOv5s model, deployed on the edge, supports real-time visual feedback with an inference speed of approximately 10 FPS. The system's scalability, demonstrated through Kubernetes, ensures that YOLO inference speed improves significantly as more pods are added, up to the platform's 10-node capacity, highlighting its potential for handling increased workloads. The system's efficiency and reliability are underscored by its power consumption and resource utilization metrics, which are critical for sustained telepresence operations. The central server maintains stable power usage at around 57.8 W, while the DeskPi Super 6C and Turing Pi nodes peak at 36.0 W and 20.7 W, respectively, under load, ensuring robust performance in remote environments. This platform showcases the potential of integrating Kubernetes with edge computing to enhance telepresence applications, offering real-time AI processing and scalable operations. The research provides valuable insights into developing versatile, efficient systems for telepresence, enabling richer and more responsive remote interactions.
|
|
SaCT2 |
151 Crellin |
Online Session 1: HCI |
Lecture session |
|
15:45-16:00, Paper SaCT2.1 | |
SharpSLAM: 3D Object-Oriented Visual SLAM with Deblurring for Agile Drones |
|
Tsetserukou, Dzmitry | Skoltech |
Fedoseev, Aleksey | Skolkovo Institute of Science and Technology |
Davletshin, Denis | Skolkovo Institute of Science and Technology |
Zhura, Iana | Skolkovo Institute of Science and Technology |
Cheremnykh, Vladislav | Skoltech |
Rybiyanov, Mikhail | Skolkovo Institute of Science and Technology |
Keywords: Advanced Image and Video Processing, Machine Vision and Perception, Autonomy and Intelligent Systems
Abstract: The paper focuses on an algorithm for improving the quality of 3D reconstruction and segmentation in DSP-SLAM by enhancing RGB image quality. Our SharpSLAM algorithm aims to decrease the influence of highly dynamic motion on visual object-oriented SLAM through image deblurring, improving all aspects of object-oriented SLAM, including localization, mapping, and object reconstruction. The experimental results revealed a noticeable improvement in object detection quality, with the F-score increasing from 82.9% to 86.2% due to the higher number of features and corresponding map points. The RMSE of the signed distance function also decreased from 17.2 to 15.4 cm. Furthermore, our solution enhanced object positioning, with an increase in IoU from 74.5% to 75.7%. The SharpSLAM algorithm has the potential to greatly improve the quality of 3D reconstruction and segmentation in DSP-SLAM and to impact a wide range of fields, including robotics, autonomous vehicles, and augmented reality.
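As a reference point for the deblurring stage (the paper's own deblurring method may differ), a classical Wiener deconvolution baseline with a known motion-blur kernel looks like this; the image and 5-pixel kernel are synthetic stand-ins.

import numpy as np

def wiener_deblur(img, psf, k=0.01):
    # Frequency-domain Wiener filter: F = H* / (|H|^2 + k) * G, a standard
    # baseline for inverting a known blur kernel under noise.
    H = np.fft.fft2(psf, s=img.shape)
    G = np.fft.fft2(img)
    return np.real(np.fft.ifft2(np.conj(H) / (np.abs(H) ** 2 + k) * G))

img = np.random.rand(64, 64)                      # stand-in sharp frame
psf = np.zeros_like(img); psf[0, :5] = 1 / 5      # 5-pixel horizontal blur
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf)))
print(np.abs(wiener_deblur(blurred, psf) - img).mean())   # small residual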
|
|
16:00-16:15, Paper SaCT2.2 | |
Automatic Speech-Based Charisma Recognition and the Impact of Integrating Auxiliary Characteristics |
|
Kathan, Alexander | University of Augsburg |
Amiriparian, Shahin | Technical University of Munich |
Christ, Lukas | University of Augsburg |
Eulitz, Simone | Ludwig Maximilian University of Munich |
Schuller, Björn | Imperial College London |
Keywords: Behavioral Modeling, AI-driven Personalization, Human-Centered Machine Learning
Abstract: Automatic recognition of speakers' states and traits is crucial to facilitate a more naturalistic human-AI interaction - a key focus in human-computer interaction to enhance user experience. One particularly important trait in daily life is charisma. To date, its definition is still controversial. However, there appear to be characteristics in speech that the majority perceives as charismatic. To this end, we address the novel speech-based task of charisma recognition in a three-fold approach. First, we predict charismatic speech using both interpretable acoustic features and embeddings of two audio Transformers. Afterwards, we make use of auxiliary labels that are highly correlated with charisma, including enthusiastic, likeable, attractive, warm, and leader-like, to check their impact on charisma recognition. Finally, we personalise the best model, taking individual speech characteristics into account. In our experiments, we demonstrate that the charisma prediction model benefits from integrating auxiliary characteristics as well as from the personalised approach, resulting in a best Pearson's correlation coefficient of 0.4304.
|
|
16:15-16:30, Paper SaCT2.3 | |
TiltXter: CNN-Based Electro-Tactile Rendering of Tilt Angle for Telemanipulation of Pasteur Pipettes |
|
Altamirano Cabrera, Miguel | Skolkovo Institute of Science and Technology Skoltech |
Tirado, Jonathan | Skolkovo Institute of Science and Technology Skoltech |
Fedoseev, Aleksey | Skolkovo Institute of Science and Technology |
Sautenkov, Oleg | Skolkovo Institute of Science and Technology |
Poliakov, Vladimir | Skolkovo Institute of Science and Technology |
Kopanev, Pavel | Skolkovo Institute of Science and Technology |
Tsetserukou, Dzmitry | Skoltech |
Keywords: Haptic Feedback Technology, Teleoperation Systems, Haptic Interfaces for Telerobotics
Abstract: The shape of deformable objects can change drastically during grasping by robotic grippers, causing an ambiguous perception of their alignment and hence resulting in errors in robot positioning and telemanipulation. Rendering clear tactile patterns is fundamental to increasing users' precision and dexterity through tactile haptic feedback during telemanipulation. Therefore, different methods have to be studied to decode the sensors' data into haptic stimuli. This work presents a telemanipulation system for plastic pipettes that consists of a Force Dimension Omega.7 haptic interface endowed with two electro-stimulation arrays and two tactile sensor arrays embedded in the 2-finger Robotiq gripper. We propose a novel approach based on convolutional neural networks (CNN) to detect the tilt of deformable objects. The CNN generates a tactile pattern based on recognized tilt data to render further electro-tactile stimuli provided to the user during the telemanipulation. The study has shown that using the CNN algorithm, tilt recognition by users increased from 23.13% with the downsized data to 57.9%, and the success rate during teleoperation increased from 53.12% using the downsized data to 92.18% using the tactile patterns generated by the CNN.
|
|
16:30-16:45, Paper SaCT2.4 | |
Metaverse for Safer Roadways: An Immersive Digital Twin Framework for Exploring Human-Autonomy Coexistence in Urban Transportation Systems |
|
Samak, Tanmay | Clemson University International Center for Automotive Research |
Samak, Chinmay | Clemson University International Center for Automotive Research |
Krovi, Venkat | Clemson University International Center for Automotive Research |
Keywords: Digital Twins and Simulation, Virtual Reality (VR) Integration, Autonomy and Intelligent Systems
Abstract: Societal-scale deployment of autonomous vehicles requires them to coexist with human drivers, necessitating mutual understanding and coordination among these entities. However, purely real-world or simulation-based experiments cannot be employed to explore such complex interactions due to safety and reliability concerns, respectively. Consequently, this work presents an immersive digital twin framework to explore and experiment with the interaction dynamics between autonomous and non-autonomous traffic participants. Particularly, we employ a mixed-reality human-machine interface to allow human drivers and autonomous agents to observe and interact with each other for testing edge-case scenarios while ensuring safety at all times. To validate the versatility of the proposed framework's modular architecture, we first present a discussion on a set of user experience experiments encompassing 4 different levels of immersion with 4 distinct user interfaces. We then present a case study of uncontrolled intersection traversal to demonstrate the efficacy of the proposed framework in validating the interactions of a primary human-driven, autonomous, and connected autonomous vehicle with a secondary semi-autonomous vehicle. The proposed framework has been openly released to guide the future of autonomy-oriented digital twins and research on human-autonomy coexistence.
|