Survey

  1. BACKGROUND

My research explores the theoretical and technical basis of robot-enabled embodied telework, i.e., telework with physical embodiment. It builds on human-robot sensation transfer, a method of converting robot sensing data into human-perceivable sensations during robot teleoperation via mixed reality and force simulators. It also includes simulating unusual sensations, such as animal sensations, on a human body. The findings will greatly promote the efficacy and safety of physical telework, especially for work involving intensive motor activity and in-person presence, such as construction and manufacturing. This MCA project will substantially enhance my research in this area by providing critical time and resources for deepening my knowledge of control theory and building a stronger network in the disciplines of robotics and cognitive science.

Our world has fundamentally changed since COVID-19. Industries, policy makers, and scholars are rethinking the way we live and work in an uncertain future. According to US News, during the peak lockdowns of the pandemic, more than 70% of Americans had to work from home5,9. Early evidence shows that telework can add benefits such as improved work-life balance10, while other studies raise concerns about its long-term impacts, including unequal access to resources11. Despite the ongoing debate on which types of work could permanently transition to telework, certain jobs can hardly be performed virtually. Construction and manufacturing are representative examples that still rely mainly on in-person presence, among other service industries (e.g., food service), posing unexpected physical and mental health risks to the professionals in these industries. Since Q3 2021, the US has observed a significant labor shortage in many labor-intensive and service-driven industries12, indicating the beginning of a more complex future for the labor market. Innovative methods for enabling telework for labor-intensive professionals are urgently needed.

Telework, i.e., a work arrangement that allows an employee to work at an alternative location, usually home, has drawn great attention since the 1990s, attributed to advances in telecommunications and computing technologies (Fig.1)13,14. When used properly, it shows proven benefits for organizational efficiency, career accessibility, employee quality of life, occupational safety, and sustainability due to reduced commuting15-17. Meanwhile, it is also noted that many professions are ill-suited to telework, such as labor jobs that require a constant physical presence18. Recently, efforts have been made to explore the use of avatar robots as a possible solution for more engaging and enabling telework19. Some efforts target the use of social robots for improving social awareness in telework (Fig.1)20,21, while others explore teleoperation robots for labor-intensive operations22-24. Preliminary efforts in robot teleoperation have shown promising outcomes. For example, a recent review study25,26 provides numerous cases of teleoperation robots in the oil and gas industry, where they help reduce personnel on board (POB) time and improve safety25,26. Telesurgery via robot has also been tested when surgeons cannot be present in person27.

My research contributes to robot-enabled embodied telework for future construction3. The construction industry has a significant economic and societal impact, representing $2 trillion in value and over 7 million jobs in the US28. Nonetheless, this industry has long suffered from inefficiency, poor safety records, and labor shortages29-31. Long before the pandemic, leaders in the industry had already recognized the criticality of transitioning to a digital and automated future32. This is driven by the unprecedented complexity of future construction projects with increasing cognitive and physical demands, such as smart plants, civil infrastructure renewal, and construction in altered spaces (e.g., underground and subsea)33. Recently, there has been substantial growth in the literature on construction teleoperation robots, such as snake robots for inspection34-36, brick-laying robots37, drones for scanning38,39, and robotic arms for precision operations40. Teleoperation robots can reduce potential risks and make hazardous workplaces more accessible to construction workers41. Robots can also prolong construction workers' career longevity and improve gender equity42.

 

  2. MOTIVATION

Despite the potential benefits of robot teleoperation for future construction work (and for the next pandemic), the challenge is overwhelming. Construction operations are extremely demanding. They require strong situational awareness43, sophisticated sensorimotor coordination in complex movements44, and intensive motor activity45. In addition, given the diversity and variability of construction tasks, it is difficult to reach a consensus on a one-size-fits-all robot teleoperation solution for construction46,47, aside from privacy and security concerns in a mixed human-robot team48. None of these challenges is fully solved by current robot teleoperation technologies. In particular, the insufficient design of human-robot interaction (HRI) can impair human situational awareness in robot teleoperation, especially for complex construction tasks. Fig.2 shows our early efforts to integrate mixed reality (MR) as an HRI for robot-based construction telework. Results showed that the missing embodiment of haptic feedback caused many overactions in assembly tasks.

In the past five years, my team has worked extensively on creating a new method, system, and measurement framework for breaking the sensory and perceptual boundaries between human operators and the remote robot in teleoperation. We call it Human-Robot Sensation Transfer. It builds on an MR platform that simulates remote workplaces based on reality capture and precise physics-based rendering. Different scales of haptic simulation, from tactile gloves to a whole-body haptic suit, as well as exoskeleton-enabled active force emulators, are integrated to create a holistic feeling of physical interaction at the remote work interface. Altered sensations seen only in certain animals, such as snake and fish spatial sensations, have also been simulated in a human-perceivable way (hence the term transfer). Successful applications have been tested for snake robot in-pipe inspection, underwater robot construction, remote robot facility maintenance, and exoskeleton training without a real exoskeleton (transferring the feeling of wearing it).

 

 

  3. CANDIDATE’S PAST RESEARCH

My past research has focused on establishing the architecture of the proposed human-robot sensation transfer (Fig.3), as well as a variety of proof-of-concept studies for representative construction operations. The key components include: i) a Unity and Robot Operating System (ROS) platform that enables Mixed Reality (MR) based robot teleoperation, with a data infrastructure and analytical pipeline for sensor data fusion; ii) a sensory augmentation simulator that incorporates haptic, tactile, force, and visual stimulation for the teleoperation of multi-morphic robots, including industrial manipulators, exoskeletons, snake robots, and underwater robots; iii) a workplace digital twin model based on reality capture data for enriched physics simulation; and iv) a human assessment system based on neurophysiological analysis of operator performance and functions to examine the benefits and potential hazards (such as cognitive overload and loss of situational awareness) of the proposed sensation transfer method. It also includes a predictive intelligence method that could make the robot teleoperation system more customized to each person's unique cognitive and behavioral characteristics. In the remainder of this section, the overview, significance, my role, and remaining gaps of these studies are introduced.

 

Fig.3 Architecture of human-robot sensation transfer for robot teleoperation

  • Study 1: Virtual Reality Telepresence for Robot Telework

Overview: Publication:3. In this study, we built and tested a VR-based robot teleoperation system called Telerobotic Operation based on Auto-reconstructed Remote Scene (TOARS). Fig.4 shows a VR user using two handheld controllers to operate a remote Baxter robot for pipe fitting. The system features two functions: converting VR user actions into robotic position targets and rebuilding the 3D scene in VR from the point cloud data collected by the remote robot. Rosbridge is used to provide a JSON API for transferring data between ROS and Unity49. The ROS server converts robotic dynamics data into JSON messages via rosbridge and publishes them to our IP address, or receives JSON messages from the Internet49,50. An RGB-D camera (Microsoft Kinect v2 camera51) is mounted on the Baxter's head to capture point cloud data and save it on the ROS platform. Based on the captured raw point cloud data, PointNet52 is utilized to segment the scene and classify point cloud data into objects with pose information. Each object is then recognized as stationary or dynamic based on its kinematic features. The detection results are then used to replace the raw point cloud data with virtual objects in VR drawn from a prefab library.
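To make the data path concrete, the sketch below shows one way the VR side can exchange JSON messages with ROS over rosbridge using the open-source roslibpy client. The host IP, topic names, and message layouts are illustrative assumptions, not the exact ones used in TOARS.

```python
# Minimal sketch of the ROS <-> VR message exchange over rosbridge (roslibpy client).
# Host IP, topic names, and payload layouts below are assumptions for illustration.
import roslibpy

client = roslibpy.Ros(host='192.168.1.10', port=9090)  # rosbridge_server endpoint
client.run()

# VR side -> robot: publish a position target converted from the VR controller pose.
target_pub = roslibpy.Topic(client, '/toars/target_pose', 'geometry_msgs/PoseStamped')
target_pub.publish(roslibpy.Message({
    'header': {'frame_id': 'base'},
    'pose': {
        'position': {'x': 0.52, 'y': 0.10, 'z': 0.31},
        'orientation': {'x': 0.0, 'y': 0.0, 'z': 0.0, 'w': 1.0},
    },
}))

# Robot side -> VR: subscribe to segmented-object detections (serialized as JSON text)
# so Unity can replace the raw point cloud with prefab objects.
def on_detection(message):
    print('detected objects payload:', message['data'])  # hand off to the scene builder

detections = roslibpy.Topic(client, '/toars/detected_objects', 'std_msgs/String')
detections.subscribe(on_detection)

# In practice the connection stays open for the whole teleoperation session.
client.terminate()
```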

Significance: The unique contribution of this work is that we developed a real-time digital twinning method in VR for teleoperation, instead of using raw imagery or point cloud data. This is critical for improving the situational awareness of the human operator. Most robot teleoperation interfaces are based on imagery data (e.g., video streaming), causing a limited field of view (FOV) and an increased cognitive burden for processing spatial information. VR has been tested to provide a more immersive sense of presence, or telepresence. We recognize that excellent efforts have been made to test VR for robot teleoperation, such as53-55, and to use scanning (e.g., depth cameras and LiDAR) in addition to camera views to provide immersive and intuitive feedback to the human operator, such as56-58. Nonetheless, we identified two issues with this approach. First, the significant size of point cloud data makes processing and transfer between the robot and the human operator difficult and slow. Second, point cloud models do not contain physical properties such as weight and colliders.

Our study presents a more enriched telepresence experience based on a game engine that collects, processes, transfers, and reconstructs remote scenes as virtual objects in VR. The system utilizes deep learning to automatically detect objects in the captured scene, along with their physical properties, based on the point cloud data. The processed information is then transferred to Unity, where rendered virtual objects replace the original point cloud models. Only the critical information is transferred for fast and high-fidelity digital twinning. Our test cases showed that the system could reduce the data transfer load from ~80,000 points/frame to ~30,000 points/frame and increase the rendering refresh rate from 5 FPS to 12 FPS. It also enabled better utilization of the physics engine in VR for physical interaction simulation (e.g., predicting collisions). The latency reduction and physics simulation had a significant influence on the interactive motor decisions of the human operator.

My Role and Funding: As the PI of this study, I recruited students with interdisciplinary backgrounds (mechanical engineering, electrical engineering, and multimedia communications) to start this project in 2018. The VR-based robotic control methods led to a successful NSF NRI Award# 202478459 and contributed to another two NSF Convergence Accelerator Awards # 193705360 and # 203359261, for which I serve as a Co-PI. These projects all feature the use of VR as a training simulator for future workers in robot operations.

Remaining Gaps: Our approach in Study 1 relies on position-based dynamics control, i.e., transferring target XYZ position data for remote robot control. There is a gap when force control is needed. For example, if the goal is to simulate the contact between a robot end effector and an object in the virtual environment, the controller should calculate the virtual motion resulting from the balance of the measured wrench applied by the user to the end effector, $w_{\mathrm{user}}$, and the simulated wrench computed in VR from the object to the user's hand, $w_{\mathrm{sim}}$. The simulated wrench, when applied to the user by a haptic device, provides the experience of physically contacting the environment that the user sees in VR. The second output needed from the controller is the simulated motion of the virtual object, $(x_{\mathrm{sim}}, \dot{x}_{\mathrm{sim}}, \ddot{x}_{\mathrm{sim}})$, caused by the measured wrench from the user, to provide the realistic experience of interacting with the virtual object. Filling this gap, i.e., moving from position-based dynamics control to force-based dynamics control, requires a deeper knowledge of control theory.
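As a minimal illustration of the balance we would need to solve (the notation and the impedance-style contact model below are a generic sketch under assumed parameters, not our implemented controller), the virtual object dynamics can be written as

$$
M_{\mathrm{obj}}\,\ddot{x}_{\mathrm{sim}} = w_{\mathrm{user}} - w_{\mathrm{sim}}, \qquad w_{\mathrm{sim}} = K\,\delta x + D\,\delta\dot{x},
$$

where $M_{\mathrm{obj}}$ is the virtual object's inertia, $K$ and $D$ are assumed contact stiffness and damping, and $\delta x$, $\delta\dot{x}$ are the penetration depth and velocity reported by the physics engine. Solving this balance at every frame yields both the wrench to display on the haptic device, $w_{\mathrm{sim}}$, and the simulated motion $(x_{\mathrm{sim}}, \dot{x}_{\mathrm{sim}}, \ddot{x}_{\mathrm{sim}})$ referenced above.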

 

  • Study 2: Transfer Force Sensation of Industrial Manipulator to Human Operator

Overview: Publication:62. This study represents our effort to expand Study 1 and fill some important gaps we identified. In this study, we replaced the handheld VR controllers with a force feedback device for robot teleoperation. The control method was also changed from position-based estimation to model reference control, i.e., relying on the results of the physics engine. The latter provides force feedback (i.e., rotating torque) to the human operator. As an example shown in Fig. 5, we simulated the task of a remote mobile manipulator (robotic arm) shutting down a facility following a required sequence. The robot was equipped with cameras and force sensors, so it could provide a first-person view (FPV) and force information to the human operator. The human operator wore a VR headset for the FPV from the remote robot. We used a 6-DOF force feedback device, TouchX63, to reproduce the sensed force from the robot and control the end effector of the remote robot. We performed a human-subject experiment (n=34) to test two force feedback conditions: Realistic (the system replicates the exact level of torque in valve manipulation) and Mediated (the simulation reduces the force on the human operator's end by 50% to enable more flexible control). Data were collected with eye tracking, neuroimaging (functional near-infrared spectroscopy, fNIRS), motion analysis, and NASA TLX surveys. The results indicated that mediated force feedback in bilateral telerobotic operation supported more accurate operation, increased dual-tasking, reduced cognitive load, and led to more efficient neural functions; yet it also caused irregular participant actions, shown as dramatic changes in valve rotation speed. The findings suggest that the force feedback design of telerobotic systems should be carefully thought through to balance these advantages and disadvantages.
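The sketch below illustrates the idea behind the Mediated condition: the wrench sensed at the remote end effector is scaled before being rendered on the haptic device. The gain and saturation limit shown are illustrative assumptions, not the calibrated values from the study.

```python
# Minimal sketch of mediated force feedback: scale the sensed wrench before rendering.
# MEDIATION_GAIN and FORCE_LIMIT_N are assumed values for illustration only.
import numpy as np

MEDIATION_GAIN = 0.5      # 50% reduction, as in the Mediated condition
FORCE_LIMIT_N = 7.9       # assumed continuous-force limit of the haptic device

def mediate_wrench(sensed_wrench, gain: float = MEDIATION_GAIN) -> np.ndarray:
    """Scale a 6-DOF wrench [Fx, Fy, Fz, Tx, Ty, Tz] and clamp the force components."""
    rendered = gain * np.asarray(sensed_wrench, dtype=float)
    rendered[:3] = np.clip(rendered[:3], -FORCE_LIMIT_N, FORCE_LIMIT_N)
    return rendered

# Example: torque sensed while turning the valve, reproduced at half magnitude.
print(mediate_wrench([1.2, -0.4, 3.0, 0.05, 0.0, 0.6]))
```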

Significance: This work expanded our efforts to leverage human haptic sensory channels in robot teleoperation. Following this work, we further programmed another six types of physical interactions for an enriched haptic experience, including weight, surface texture, momentum, inertia, impact, and mass balance (Fig.5). Many required repurposing off-the-shelf haptic controllers, including 3D printing certain hardware components. Adding this list of enriched physical interaction effects via high-end haptic devices provides a new method for bilateral control. We also found that each person may possess a different threshold for the maximum force feedback in teleoperation control. The design of force feedback simulation may also affect neurofunctions in sophisticated ways, which deserves further investigation.

My Role and Funding: As the PI of this study, I worked with my PhD students and two undergraduate researchers on designing the roadmap of the haptic simulator. I also directly participated in the 3D print design to repurpose the haptic devices. This work has led to a successful NASA Award# 80NSSC21K081564.

Remaining Gaps: We observed a noticeable delay in robot teleoperation with our haptic system. This could be due to the lack of optimization of the force-based control algorithm. We also need to design automated functions that can optimize the modality and magnitude of force feedback based on individual preference. Both require an in-depth knowledge of control theory.

 

  • Study 3: Transfer Spatial Sensation of Snake Robot to Human Operator

Overview: Publication:1. We designed a haptic system for snake robot teleoperation in in-pipe inspection (Fig.6). Based on common helical-locomotion snake robot models, an upper-body haptic suit with 40 vibrators on the front and back of the human operator was developed to generate haptic feedback corresponding to the orientation of the snake robot, transferring the egocentric spatial sensation of a snake robot to the human operator. In other words, we simulated the "feeling" of a snake crawling inside a pipeline on the body of a human operator. Specifically, the front and back of the haptic suit correspond to the pressure sensor data from the belly and back of the remote snake robot, respectively, projecting the contact events between the robot and the inner pipe walls as dynamic haptic cues on the operator's body. A human-subject experiment (n=31) was performed to evaluate the efficacy of the developed system. The results (Fig.7) indicate that the proposed haptic feedback outperformed other feedback methods in task performance (navigation decision speed and accuracy), subjective workload, and motion sickness. However, multimodal feedback, i.e., providing visual and haptic cues together, worsened performance. This deserves further exploration.
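The following sketch illustrates the belly/back pressure-to-vibration mapping described above; the 20/20 vibrator split, pressure range, and units are assumed for illustration and are not the calibrated values used in the study.

```python
# Minimal sketch of mapping snake-robot contact pressure to the 40-vibrator suit.
# N_FRONT/N_BACK split and the normalization range are assumed values.
import numpy as np

N_FRONT, N_BACK = 20, 20          # assumed split of the 40 vibrators
P_MIN_KPA, P_MAX_KPA = 0.0, 50.0  # assumed contact-pressure range for normalization

def pressure_to_vibration(belly_kpa, back_kpa) -> np.ndarray:
    """Map belly contacts to the front panel and back contacts to the back panel,
    returning 40 vibration duty cycles in [0, 1]."""
    def normalize(p, n):
        p = np.resize(np.asarray(p, dtype=float), n)
        return np.clip((p - P_MIN_KPA) / (P_MAX_KPA - P_MIN_KPA), 0.0, 1.0)
    return np.concatenate([normalize(belly_kpa, N_FRONT), normalize(back_kpa, N_BACK)])

# Example: the robot's belly presses the lower pipe wall, so only the front panel vibrates.
duty_cycles = pressure_to_vibration(belly_kpa=np.full(20, 30.0), back_kpa=np.zeros(20))
```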

Significance: This work is unique because it proposes a novel animal-to-human sensation transfer method that could potentially augment human abilities in altered workplaces. Helical-drive robots, typically designed as snake robots, are more adaptable to altered in-pipe conditions and are considered an effective approach for maneuvering inside pipelines65. But snake robot teleoperation is sensitive to the human operator's ability to maintain spatial perception66. Several factors can affect the human operator's spatial perception. The limited field of view (FOV) from the robot-carried camera causes the so-called "keyhole" effect and an inability to orientate because of the lack of spatial features67. Orientation can also be challenging because of the demand for mental rotation, i.e., planning for complex locomotion such as rotating the posture of the robot68. A mismatch between the viewpoint of the human operator and the original locomotion intent often happens, leading to unexpected consequences such as the remote robot flipping over or rolling into the wrong direction or gesture69. Degraded video signals due to long distance, obstacles, or electronic jamming70 may also compromise the human operator's ability to estimate distance and size71. Relying on visual cues and manual control/input can affect the psychomotor processes of the human operator, causing motion sickness72.

Evidence shows that most reptiles do not rely solely on vision for spatial perception and navigation in confined areas73. Pythons, for example, sense vibrations on their skulls to navigate to a target74. The in-pipe environment is an altered workplace for most human workers, and it requires a new sensory experience. The haptic suit is a bio-inspired solution. Our preliminary findings suggest that a haptic assistance system indicating the direction of gravity helps improve spatial awareness in feature-sparse spaces. The haptic assistance appears more beneficial than visual cues in many respects. This could be because the visual channel of the human operator is usually already occupied in the teleoperation task, and any additional information adds mental workload and perceived motion sickness due to excessive use of the visual channel. In contrast, the haptic assistance system leverages a sensory channel that is less used, i.e., tactile sensation, and thus becomes a more intuitive way for the human operator to orientate at the critical junctions (shown as less time spent at the junctions).

My Role and Funding: As the PI of this snake robot study, I worked with my students on designing, programming, and testing the haptic system. We resolved difficulties such as determining the optimal level of vibration magnitude and position distribution. The preliminary results helped me secure an NSF Future of Work at the Human-Technology Frontier (FW-HTF) award #212889575.

Remaining Gaps: We still don't know the exact role of multimodal sensory processes in spatial perception. As shown in Fig.7, we found that combining visual and haptic modalities worsened the overall performance in terms of navigation decision time. This may be due to information overload and modality-switching costs during the task. Meanwhile, multimodality seemed to reduce perceived motion sickness, which is desirable. The benefit of using multimodality is therefore mixed. All of this makes the design of haptics-based HRI for snake robot teleoperation nontrivial. In addition, we designed a method for maneuvering the snake robot via body motions. This was intended to free the hands from holding the joystick controller and to test a new, intuitive robot control method. But in the pilot test, most participants complained that the different orientations of their body posture (sitting) and the remote snake robot (crawling) created additional difficulties. More work grounded in cognitive science is needed.

 

  • Study 4: Transfer Hydrodynamic Sensation of Underwater Robot to Human Operator

Overview: Publication:2. This study expands Study 3 to test methods for transferring hydrodynamic sensations from a remotely operated underwater vehicle (ROV) to the human operator in subsea operations. As shown in Fig.8, we mapped the sensor data from 40 locations on an ROV to the front and back of the human operator via the same haptic suit. The sensor data are collected from pressure sensors mounted on the ROV and from Doppler sensors for far-field mapping. The magnitude of vibration reflects the near-field hydrodynamic features, while the far-field features are rendered as flows of particles in the VR headset. This haptic analog of the water flow helps the operator in critical ROV operations such as stabilization, navigation, and docking. In other words, we simulated the "feeling" of a fish swimming underwater on the body of a human operator.
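The sketch below illustrates the two-layer rendering idea (near field to the suit, far field to the VR particle flow); sensor counts, reference pressure, and scaling constants are assumptions for illustration, not the values used in our prototype.

```python
# Minimal sketch: hull pressure -> suit vibration (near field), Doppler current
# estimates -> VR particle flow (far field). All constants are assumed values.
import numpy as np

P_REF_KPA, P_SPAN_KPA = 101.3, 20.0   # assumed ambient reference and dynamic range

def near_field_vibration(hull_pressure_kpa) -> np.ndarray:
    """40 hull pressure readings -> 40 vibration duty cycles in [0, 1]."""
    return np.clip((np.asarray(hull_pressure_kpa) - P_REF_KPA) / P_SPAN_KPA, 0.0, 1.0)

def far_field_particles(current_velocity_ms, n_particles: int = 500, seed: int = 0) -> np.ndarray:
    """Mean current vector [vx, vy, vz] -> per-particle velocities (with jitter)
    handed to the VR renderer as a flow field."""
    rng = np.random.default_rng(seed)
    return np.asarray(current_velocity_ms) + 0.1 * rng.standard_normal((n_particles, 3))

suit_duty = near_field_vibration(np.full(40, 103.0))
flow_field = far_field_particles(np.array([0.3, -0.1, 0.0]))
```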

Significance: This work is unique because it explores the possibility of animal-to-human sensation transfer that could grant humans unusual sensations. For decades, subsea exploration and operations have been carried out mostly with the help of ROVs76. But the teleoperation of an ROV can be very challenging and risky due to the mismatch between the subsea workplace and the daily experience of the human operator. Usually, technicians above sea level take control of the whole system to accomplish complex tasks with the help of live video streaming77. The complexity of the subsea environment, such as rapidly varying internal currents, low visibility, and unexpected contact with marine life, can undermine the stability of the ROV or its stabilization controls78. Although most ROVs nowadays are equipped with a certain level of self-stabilization and self-navigation79, human control is still often needed for complex engineering tasks. Human sensorimotor control relies on multimodal sensory feedback, such as visual and somatosensory cues, to make sense of the consequence of any initiated action80. In ROV teleoperation, the lack of human ability to perceive various subsea environmental and spatial features, such as the inability to directly sense water flows and pressure changes, can break the loop between motor action and feedback, leading to an induced perceptual-motor malfunction81. Although in practice these parameters can be collected and displayed on monitor screens, it requires significant training to comprehend them and react in a timely manner. Even a highly experienced operator can make mistakes when exposed to an unfamiliar environment, especially under high stress or fatigue. This work aims to grant ROV operators the ability to see, hear, and feel the system with multisensory capacity in an intuitive way.

My Role and Funding: As the PI of this ROV study, I worked with my students on designing and programming a prototype of the haptic system. I also initiated the partnership with the American Bureau of Shipping (ABS) and with ROV researchers from the University of Hawaii. This work is supported by NSF Future of Work at the Human-Technology Frontier (FW-HTF) award #212889575.

Remaining Gaps: The first challenge we are still facing pertains to the difficulty of sensorimotor perception in the subsea workplace. We still need evidence about the efficacy of this haptic design, as the subsea environment represents a completely altered experience for most human subjects. Most people have no previous experience of feeling hydrodynamic features underwater. This design may not be sufficient to counter the perceptual-motor malfunction. The second challenge pertains to the difficulty of robot maneuvering in the subsea workplace. At present, there is an obvious translation gap between the control devices (e.g., joystick or keyboard) and ROV locomotion. Self-orientation is also difficult in the subsea workplace because of the lack of anchoring landmarks. We propose the use of motion capture to help translate natural human motions to underwater robot locomotion. These efforts will require a deeper understanding of control theory.

 

  • Study 5: Human Assessment Framework

Overview: Representative publication82. In the past five years, my team created and deployed a holistic human assessment system. It incorporates neurofunctional monitoring and analysis with fNIRS, VR-embedded gaze tracking and pupillary analysis, motion tracking, and subjective evaluation methods. We have also developed a complete analytical pipeline for data preprocessing, data fusion, and online prediction of cognitive status (Fig.9). We have successfully applied this system in a variety of our studies1,62,82-96.

Significance: This track of work provides interpretability for the observed performance data in our robot teleoperation studies. In addition, the real-time human assessment function makes an adaptive and personalized sensation transfer system possible. We found that there were no off-the-shelf solutions for tracking and analyzing cognitive and behavioral data in a motor-intensive VR environment. When we started this track of work, VR-embedded eye tracking was not commercially available. We contribute to the literature by: (1) exploring a neurophysiological data fusion process that combines features of neurofunctional data, eye tracking data, motion data, and subjective evaluation. The difficulties are multifold, such as the distinct sampling rates of different tracking sensors (e.g., fNIRS refreshes at 7-10 Hz, while the eye tracker refreshes at 90-120 Hz). We proposed an automated event-driven analysis method that utilizes features extracted from motion data to synchronize all sources of data. Time series machine learning was also tested for better predictability. The methods are published in82,93; (2) an online prediction method for real-time analysis of cognitive load. We selected fNIRS for cognitive status monitoring because it is more tolerant than EEG to motion artifacts and is thus a better candidate for motion-intensive teleoperation tasks. The challenge is that fNIRS is a relatively new neuroimaging method, and the literature is not clear about which features of fNIRS data are strong predictors of cognitive load (whereas in the EEG literature, the alpha–gamma versus theta–gamma ratio is a gold standard for cognitive load97). In our research, we applied a feature extraction method and obtained the top time series features from more than 200 candidates (over 20 s intervals). We then trained six machine learning models to predict cognitive load levels (low, medium, and high) measured by task difficulty, the NASA TLX survey, and other physiological data. As for eye tracking, we found three eye-tracking indicators to be most relevant for attention prediction and cognitive overload warning. Finally, all these metrics, as well as ECG features, are integrated into a comprehensive cognitive load prediction model; and (3) a cloud-based, open-source cognitive data sharing and analysis platform for continuous machine learning model training. We integrated all functions into a data sharing and online analysis website based on Google Firebase. The so-called "CogDNA" project98 is now live online.
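The sketch below illustrates the multi-rate synchronization step in the event-driven fusion described above; column names, the common 10 Hz grid, and the 20 s window are assumptions for illustration rather than our exact pipeline parameters.

```python
# Minimal sketch of the event-driven fusion step: fNIRS (~10 Hz) and eye-tracking
# (~120 Hz) streams are resampled onto a shared 10 Hz grid, joined, and windowed
# around motion-derived task events. Names and window length are assumed values.
import pandas as pd

def fuse_streams(fnirs: pd.DataFrame, gaze: pd.DataFrame, events: pd.Series,
                 window_s: float = 20.0) -> list:
    """fnirs/gaze are DataFrames indexed by timestamp; events holds event onset times."""
    fnirs_rs = fnirs.resample('100ms').mean().interpolate()   # ~10 Hz grid
    gaze_rs = gaze.resample('100ms').mean().interpolate()     # downsampled to the same grid
    fused = fnirs_rs.join(gaze_rs, how='inner')
    # One analysis window per motion-derived event (e.g., a detected grasp onset).
    return [fused.loc[t: t + pd.Timedelta(seconds=window_s)] for t in events]
```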

My Role and Funding: I spearheaded the design and development of this multisource human assessment framework. Early efforts can be traced back to 2015, when we attempted to save motion and eye tracking data collected from an Oculus DK2 headset before VR-based eye tracking products were available. The early effort was supported by a NIST award# 60NANB18D15299, and later NIST award# 70NANB21H045100, both focusing on cognition-driven VR and AR technologies for search and rescue. At present, the human assessment system is utilized in all my funded research projects.

Remaining Gaps: We recognize the complexity of the human cognitive process. In particular, during a teleoperation task, the decision-making process, motor coordination, situational awareness, and possible mental and physical fatigue all play an important role and require close monitoring. However, we only have access to a very limited training dataset, due to the cost and logistics of performing neurophysiological data collection. As a result, we may have experienced overfitting issues in our cognitive models. Our plan is to leverage tools from the cognitive sciences for a more efficient use of the cognitive data collected from our experiments.

 

 

  4. CANDIDATE’S PROPOSED RESEARCH ADVANCEMENT AND TRAINING PLAN

Fig.10 shows the timeline and products of the plans.

Fig.10 Research and training advancement timeline and products

  • Product 1 (Q1 2025): A documented force-based control method for the proposed generic haptic controller model, including the controller solving process and sample code for an example controller.
  • Product 2 (Q4 2026): Task analysis report for facility operation and maintenance robot applications, along with the robot design requirements.
  • Product 3 (Q3 2026): A recommended embodied cognition evaluation framework for assessing the efficacy of human-robot sensation transfer applications.
  • Product 4 (Q4 2026): Final training outcomes, including co-developed course modules (see training plan), student presentations at scheduled seminars, 1-2 papers on robot teleoperation designs, and a proposal to NSF programs sponsored by the TIP directorate.

 

  • RESEARCH #1: Explore Control Method for a Generic Haptic Controller Model

Objective: This track of proposed research collaboration activities will help me explore innovative control methods for computationally efficient force-based robot control based on a generic haptic controller model. This will ensure that the proposed human-robot sensation transfer can reproduce the force sensation with the best possible accuracy while maintaining steady and precise locomotion control of the remote robot. In other words, the challenge is how to make the haptic controller a high-fidelity force feedback simulator and a precise control device at the same time. I recognize the importance of examining other sensory channels as well, such as vision, audition, and olfaction. But in order to focus on the most immediate need of the system development and make the best use of the resources, this MCA project will focus on haptic sensation.

The implementation of techniques from model reference control previously explored in Study 2 is an innovation that significantly reduces the complexity of the controls problem for the robot teleoperation system. The largest remaining challenge is that the haptic simulator on the human operator's end must produce both the calculated interaction wrenches at the user's hands and the motions that satisfy the virtual motion constraints at the same time. For instance, suppose a human operator is using our system to control a remote robot lifting a 10 lb payload at a speed of 1 m/s, and we wish to transfer the feeling of the 10 lb weight via the haptic simulator to the human operator. The force-based control must ensure that the force topics transferred from the haptic controller to the robot move the 10 lb payload at 1 m/s at the remote end effector, while maintaining a residual 10 lb resistance so the human operator feels the right amount of force feedback. To address this challenge, we will explore multi-objective control techniques from the field of underactuated robotics101. Researchers have previously implemented convex optimization techniques to solve the inverse dynamics, contact constraints, and motion objectives present in dynamic locomotion102,103. We will extend these techniques to the proposed system and pair them with local inertial compensation control on the robotic arms to fully complete the control tasks. The proposed approach addresses deficiencies identified in the VR, haptics, and robotic control communities.

Workplan: I propose to visit Dr. L'Afflitto's lab at VT for two semesters, in year 1 and year 2 respectively. The visits will ensure an in-depth and continuous discussion on the design of a generic haptic controller that fits all kinds of robot teleoperation tasks with the force sensation transfer function. Here we outline the technical details of the proposed generic haptic controller design. We assume a generic haptic controller with $n$ joints, so it has $n$ degrees of freedom. The configuration of the controller can be expressed by the state vector $q \in \mathbb{R}^{n}$ that represents the vector of joint positions. The equations for the rigid body dynamics are then given by $M(q)\,\ddot{q} + h(q,\dot{q}) = \tau_{\mathrm{opt}} + \tau_{\mathrm{comp}} + J^{T}(q)\,w_{\mathrm{user}}$, where $M(q)$ is the joint space inertia matrix, $h(q,\dot{q})$ is the vector of centrifugal, Coriolis, and gravity torques, $\tau_{\mathrm{opt}}$ is the vector of torques commanded from the optimization controller, $\tau_{\mathrm{comp}}$ is the vector of torques from the inertial compensation controller, $w_{\mathrm{user}}$ is the wrench (combined vector of forces and torques) that the user applies on the robotic arm, and $J(q)$ is the spatial Jacobian at the end effector, which relates joint-space and task-space motions. The generic haptic controller has two main control tasks to accomplish. The first is to produce the desired virtual interaction wrench, $w_{\mathrm{sim}}$, at the end effector of the haptic controller. The virtual interaction wrenches are collected from the sensors equipped on the remote robot and processed by the physics engine in VR. The second task is to satisfy the motion constraints from interacting with the virtual objects, i.e., to match the motion of the end effector in the real system, $(x, \dot{x})$, with the simulated motion of the hands, $(x_{\mathrm{sim}}, \dot{x}_{\mathrm{sim}})$.
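For concreteness, one possible quadratic program (QP) of the kind we will explore, written purely as a sketch in the notation above (the weights $W_1$, $W_2$, the regularizer $\epsilon$, and the torque limits are assumed design parameters, not fixed values), is

$$
\begin{aligned}
\min_{\ddot{q},\,\tau_{\mathrm{opt}}} \quad & \big\| J(q)\,\ddot{q} + \dot{J}(q)\,\dot{q} - \ddot{x}_{\mathrm{sim}} \big\|_{W_1}^{2} + \big\| \tau_{\mathrm{opt}} - J^{T}(q)\,w_{\mathrm{sim}} \big\|_{W_2}^{2} + \epsilon\,\|\ddot{q}\|^{2} \\
\text{s.t.} \quad & M(q)\,\ddot{q} + h(q,\dot{q}) = \tau_{\mathrm{opt}} + \tau_{\mathrm{comp}} + J^{T}(q)\,w_{\mathrm{user}}, \qquad \tau_{\min} \le \tau_{\mathrm{opt}} \le \tau_{\max}.
\end{aligned}
$$

The first cost term tracks the simulated end-effector motion, the second drives the commanded torques toward those that render the desired interaction wrench, and the equality constraint enforces the rigid-body dynamics.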

We foresee two challenges with this generic haptic controller design. The first challenge pertains to the perceivable tracking error between the real and virtual systems. The proposed QP formulation tracks the desired virtual motion, but does so only at the level of end-effector accelerations. This means an error in the position trajectory may grow over time, particularly for slow, protracted motions. As this is a novel application of the force-based sensation transfer technique to a human-machine interface, it is unknown how noticeable this tracking error will be. The second challenge is related to safety. As we are dealing with a human connected to a robotic system, the safety of the user is a priority in the control system. Before operation, safety settings can be defined, such as maximum movement speed, maximum interaction forces, and range-of-motion limits. These safety limits are included in the controller as constraints on the QP detailed above. If any of these constraints becomes active, in order to maintain a realistic simulation, we will investigate the possibility of adding corresponding constraints in VR to reflect the safety limits.
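A minimal numerical sketch of such a QP, with the safety limits expressed as hard constraints, is shown below. It uses the open-source cvxpy solver, and all matrices, limits, and the ~44.5 N (10 lbf) rendering target are placeholder values for illustration only, not parameters from our system.

```python
# Minimal sketch of the task-space QP with safety limits as hard constraints (cvxpy).
# All matrices and limits below are placeholder values for illustration only.
import numpy as np
import cvxpy as cp

n = 6                                      # joints of the hypothetical haptic arm
M = np.eye(n)                              # joint-space inertia (placeholder)
h = np.zeros(n)                            # Coriolis/centrifugal/gravity torques
J = np.eye(6)                              # end-effector spatial Jacobian (placeholder)
Jdot_qdot = np.zeros(6)                    # the J_dot * q_dot term
w_user = np.zeros(6)                       # measured user wrench
w_sim = np.array([0.0, 0.0, -44.5, 0.0, 0.0, 0.0])  # ~10 lbf resistance to render
xdd_des = np.zeros(6)                      # desired end-effector acceleration from VR

qdd = cp.Variable(n)                       # joint accelerations
tau = cp.Variable(n)                       # optimization-controller torques

tracking = cp.sum_squares(J @ qdd + Jdot_qdot - xdd_des)
rendering = cp.sum_squares(tau - J.T @ w_sim)
cost = tracking + 1e-2 * rendering + 1e-4 * cp.sum_squares(qdd)

constraints = [
    M @ qdd + h == tau + J.T @ w_user,     # rigid-body dynamics (compensation folded into h)
    cp.abs(tau) <= 40.0,                   # assumed torque/force safety limit
    cp.abs(qdd) <= 50.0,                   # assumed acceleration limit (proxy for speed limits)
]
cp.Problem(cp.Minimize(cost), constraints).solve()
print('commanded torques:', tau.value)
```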

 

 

 

  • RESEARCH #2: Propose Framework of Multi-Morphologic Robot for Construction Works

Objective: This track of proposed research activities will explore a working framework for extracting and classifying the features of typical construction operations into design requirements for future special-shaped robots. We envision that the most effective design of an industrial robot will not be humanoid. Instead, the morphology and mechanical functions should reflect the nature of the operations.

Workplan: Designing future robots in terms of morphology and function requires a thorough examination of the requirements, difficulties, and processes of future work. Given the vast diversity of construction types (residential, heavy civil, industrial facilities, underground), it is almost impossible to develop a one-size-fits-all examination. Without losing generalizability, I will focus on construction robots for industrial facility operation and maintenance, such as power plant daily operations, maintenance, and renewal projects. This sector has suffered from serious productivity and safety issues. For refinery projects in the US alone, there have been 152 documented major disasters since 2000 related to poor controls, causing more than 40 deaths and even more injuries. There is a pressing need for a transformative technology, echoing the US government's efforts to renovate crumbling infrastructure. To yield a systematic evaluation, I will perform task analysis, including Hierarchical Task Analysis (HTA)93–95, Goals-Operators-Methods-Selection Rules (GOMS)96–99, and Cognitive Task Analysis (CTA)100,101, to analyze tasks and task-related behaviors under the proposed technologies. Given that these methods cover both task- and user-focused aspects of human-technology interaction, together they provide a holistic understanding of how the proposed sensation transfer influences the robot operator's decision-making and behaviors. Then, working with my partner, we will examine widely accepted robot design frameworks, such as104, to formalize the requirements on the perception system, sensor system, kinematic features, morphology, planning system, actuators, processors, power system, communication system, and management system.

 

  • RESEARCH #3: Examine Embodied Cognition with Our Technology

Objective: This track of proposed research activities will help me build a proper methodological process for examining the critical psychological implications of the proposed human-robot sensation transfer system. Specifically, I will work with Dr. Odegaard to examine embodied cognition. Embodied cognition refers to cognitive processes grounded in the body's physical interactions with the world105,106. One of the most influential theoretical approaches to embodied cognition is Barsalou's framework of perceptual symbol systems106. The framework suggests that embodied cognition is the process by which humans use their sensory neural structures to create multisensory representations of their environment, and that humans re-enact these representations when they mentally imagine an object or action107,108. Another relevant theoretical framework is sensorimotor contingencies (SMCs)109, which claims that the quality of perception is determined by the knowledge of how sensory information changes when one acts in the world109. A comprehensive computational account of how multisensory representations are formed can be found in applications of Bayesian Causal Inference110-112. These models assume that, for an individual to infer whether any combination of sensory signals has a single cause or separate causes, sensory information in the current moment is combined with prior information from past experiences to make an inference about the causal structure that gave rise to the current situation. These models have been quite successful in accounting for body ownership in artificial environments113,114. Here, we aim to extend these models to better understand the conditions in which participants experience immersive embodiment with robotic devices, by assessing not only the accuracy of their judgments but also periodic ratings of their subjective experiences during these tasks. Assessing embodied cognition in my research is extremely important for evaluating the efficacy of the proposed sensation transfer method for future robot teleoperation, as the ultimate goal is to break the boundary between humans and remote robots and create a shared perception.
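For concreteness, in the standard Bayesian Causal Inference formulation from the cited literature (the symbols here are generic, not tied to our system), the posterior probability that a visual cue $x_v$ and a haptic cue $x_h$ share a common cause ($C=1$) rather than separate causes ($C=2$) is

$$
p(C=1 \mid x_v, x_h) = \frac{p(x_v, x_h \mid C=1)\,p(C=1)}{p(x_v, x_h \mid C=1)\,p(C=1) + p(x_v, x_h \mid C=2)\,\bigl(1-p(C=1)\bigr)},
$$

and the inferred causal structure determines how strongly the cues are fused into a single bodily percept. In our planned studies, this posterior could be compared against participants' accuracy and their periodic subjective ratings of embodiment.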

Workplan: I propose to work with Dr. Odegaard in year 3, after the generic haptic control method is substantially framed, to examine methods and processes for evaluating embodied cognition. Specifically, we will examine the most recent multisensory integration and SMC frameworks to develop a comprehensive sensorimotor efficacy assessment framework for the proposed human-robot sensation transfer method. Representative frameworks include the dynamic sensorimotor contingency framework115, which differentiates the SMC process into four phases: sensorimotor environment, sensorimotor habitat, sensorimotor coordination, and sensorimotor strategies. These four phases describe how a person develops the required sensorimotor functions according to measures of efficiency or level of skill. Another relevant framework116 decomposes sensory contingencies into sense of agency, sense of self-location, sense of body ownership, physical extensions, functional extensions, and phenomenological extensions. These frameworks will help us develop the assessment method and framework for evaluating the efficacy and safety of the proposed sensation transfer. Of note, we recognize the success of presence measurements such as the Igroup Presence Questionnaire (IPQ)117, the Slater-Usoh-Steed presence questionnaire118, and the Witmer & Singer presence questionnaire119. We nonetheless find it valuable to develop a separate assessment framework that focuses on motor-centric presence.

 

  • TRAINING #1: Self-Training Plan

Objective: The self-training plan will equip me with the necessary knowledge in control theory and perception science to continue my research in the area of human-robot sensation transfer. It will also include network-building activities with disciplines outside my own under the supervision of the mentors.

Workplan: As mentioned earlier, I plan to visit Dr. L'Afflitto's lab at VT for two summer semesters. Along with the collaborative research activities, my stay at VT will allow me to complete my knowledge preparation for my future research. The specific training plans include: 1) observing his courses ECE6774 "Adaptive Control Systems" and ISE 4264 "Industrial Automation", and participating in the course projects; 2) co-preparing course modules related to control methods and robotic engineering, including dynamic control theory, adaptive control methods, humanoid robot design, human-robot interaction design methods, and optimization methods, for civil engineering students; the course difficulty level will be adjusted and the course content customized to reflect the needs of civil engineering; 3) co-publishing 1-2 high-impact papers in the area of control methods, preferably drawn directly from the human-robot sensation transfer research; I will strive to move out of my comfort zone and learn the principles for publishing in robotics and control venues, such as IEEE and ACM conferences; and 4) co-developing a proposal to the NSF M3X program or similar TIP-funded programs related to human-robot interaction; we plan to explore new research questions directly from the human-robot sensation transfer research, ideally related to the challenges of dynamic control methods. In addition to the summer visits, I will meet with Dr. L'Afflitto on a bi-weekly basis via Zoom or other venues to discuss the progress of my training, paper writing, and proposal development. In addition, starting from year 2, I will hold bi-weekly meetings with Dr. Odegaard to discuss a new embodiment assessment method for the proposed approach. I will observe his class EXP3604 "Cognitive Psychology" to learn processes for cognitive analysis. We will involve graduate students in these meetings for a co-training experience. Meeting minutes will be used as the basis for future brainstorming. Other collaboration plans are introduced in the following section.

 

  • TRAINING #2: Student Training Plan

Objective: The student training aims to broaden and deepen my students' engagement in this cross-disciplinary research, equip them with the proper level of knowledge outside their own disciplines, and cultivate leadership for future research activities.

Workplan: A novel feature of the student training plan is that three graduate students will be designated as student Co-Leads, one for each year of the project, to enable cross-disciplinary leadership development, communication, and convergence of project efforts with internal (project team) and external (industry partner) clients. In Year 1, the lead will be a student from VT Mechanical Engineering, since year 1 will focus on the control design for a generic haptic controller model. Year 2 will be led by a student from UF Civil Engineering, to examine the needs of the future construction industry and brainstorm morphologies of the future teleoperation robot for construction telework. Year 3 will be led by a student from UF Psychology, to concentrate on the pilot test of the embodied cognition evaluation. The Co-Leads will also ensure the timely development and dissemination of broader impact efforts alongside the research activities. This rotation in leadership will allow each graduate student to understand activities and techniques across different disciplines. The specific coordination mechanisms are described below. Once every two weeks, the student leads will organize a Zoom group meeting with all student members to coordinate research activities and facilitate interdisciplinary collaboration. Once every month, the PI, senior personnel, student Co-Leads, postdoc, and all students associated with the project will have a group meeting consisting of research activity discussions and a seminar; the seminar will feature a funded student presenting their research. Summer Research Exchange Program: every summer, one funded student from UF will participate in a summer research exchange program, spending one summer learning and conducting cross-disciplinary research activities under the guidance of the other research investigators at their labs. It is expected that when a student returns to their own lab at the end of the program, they will disseminate lessons learned and develop competencies that expand the expertise of their respective labs. Annually, the PI will attend a yearly meeting at NSF to gather insights into NSF's mission on convergence research and foundational robot research with other NSF PIs, as well as to network and disseminate our findings to the scientific community. Additionally, the entire research team will meet at the annual FW-HTF workshop (funded by award #212889575) in Houston, TX, which will also be open to invited industry partners. The team will meet to disseminate research and broader impact findings through presentations (PI) and posters (students), discuss challenges and opportunities, and solicit feedback from industry partners.

 

  5. CANDIDATE’S LONG-TERM CAREER PLANS

Research: Moving forward, I will strive to deepen our understanding of telework in the future construction industry, which will drive vast improvements in the operational efficiency of built environments and improve the quality of life of construction workers. I will continue building a vibrant research program in human-robot collaboration (HRC) for labor-intensive industries. I will also examine and collect data about how robotic telework transforms future construction workflows, raises construction productivity and safety performance, and promotes construction career equity and longevity. My continuous effort in this area will identify new methods, requirements, opportunities, and challenges of telework in construction and other labor-intensive industries. I will strive to build a sustainable research group by continuously recruiting new graduate students and cultivating a collaborative network among my students. I also aim to take leadership positions in the research community, serving my colleagues in knowledge discovery, research collaboration, and outreach activities. I will begin initiatives in construction robotics with my colleagues to promote understanding and practice. I will also foster a much closer relationship between academia and industry, taking lessons learned from my own industry experience and my current role as secretary of the American Society of Civil Engineers (ASCE) Visualization, Information Modeling and Simulation (VIMS) committee.

Education: I am committed to preparing a technology-ready workforce for the future construction industry. My teaching focuses on automation and information technologies that will help the US construction industry position itself at the forefront of smart construction and digital decision making, and that will help prepare the next generation of leaders who appreciate the use of technologies in the construction industry. Adopting innovative technologies will be critical for the next generation of construction leaders to gain a full understanding of the performance of future built assets, both during construction and throughout their design life. As a result, I will bring technologies into the classroom to expand the field of construction engineering for the construction professionals of tomorrow. I will dedicate myself to teaching a broad range of topics that will be the building blocks for a future technology-savvy workforce, including informatics, visualization, simulation, and robotics. I aim to influence my students via activity-based teaching. Education is the arena for doing, practicing, modeling, gaming, involving, and reaching out to individuals who are hired into the industry and have the capacity to change it. In addition to knowledge, this requires practical insight, instinct, and an ability to handle exceptions. Therefore, I will help students effectively master construction management by having them work together in teams and by using a "learn by doing" approach. The teaching methods and materials resulting from my research, shared with student groups as well as others, will attract students' interest in STEM fields. In collaboration with education researchers, I will engage students in solving real-world engineering problems.

Inclusion and Diversity: I will position my research for a more diversified construction workforce by creating human-robot sensation transfer technologies that can lower the career barrier to a highly professional and traditionally male-dominated area. The new job opportunities brought by my telework research will help ease gender inequality, as more women will feel welcomed by the safer work environment created by virtual telepresence. Current construction jobs also pose strict requirements on age because of the mental and physical load of certain tasks. The sensory augmentation method for robotic control developed in my research will mitigate the age requirement, promoting career longevity. The new technologies we develop will also help salvage the careers of experienced construction and engineering workers who have suffered from career injuries and/or occupational diseases. Furthermore, because my investigation will focus on reskilling and upskilling the workforce threatened by automation, lessons learned in this project will help workforce transformation and secure job opportunities for US workers in various industries. To further ensure the equity of my future research, I will consult data from the US Bureau of Labor Statistics in my studies to make sure that the sampled population is not demographically or regionally biased. I will prioritize the recruitment of underrepresented students to fill the RA positions in my research group. In addition, I will seek funding opportunities such as REU to support underrepresented student groups as lab interns.

 

  6. BROADER IMPACTS

The broader impacts of this project target academia, industry, and the general public.

Integration of research and pedagogy for cross-disciplinary participation in engineering: The proposed research and training activities support teaching modules (e.g., "construction robotics", "adaptive control methods for construction robots", and "optimization methods for human-robot interaction") in multiple courses at UF, including CGN6905 Adv. Information Technologies in Construction and CGN6905 Construction Modeling and Simulation. In addition, the PI and partner will work closely to co-teach a cross-disciplinary course module, "Construction Robot", involving civil engineering and mechanical engineering students.

Create opportunities for professional development of Hispanic students: The US construction industry hired more than 7.3 million people in 2018120, of whom 31% were Hispanic121. Every year only about 2,000 college degrees in construction are awarded, and the rate keeps decreasing122. The engagement of Hispanic students is critical in this project. This project will provide an RA position (included in the budget) for a Hispanic student enrolled in civil engineering at UF. The student is expected to be the liaison with the Society of Hispanic Professional Engineers (SHPE) at UF to promote STEM awareness among Hispanic students.

Engage with the workforce network: This project will utilize a funded consortium, the "Subsea Generation Initiative (SGI)", with industry partners from the subsea industry to disseminate knowledge gained from this project and promote interest in subsea construction. The outcome of the research will be integrated into an online course module that will be free of charge to the industry and disseminated via the UF Electronic Delivery of Gator Engineering (EDGE) system. It will integrate a series of short videos, as evidence shows that short videos are more effective than long videos in catching people's attention and fostering learning123. We will also work with TEDx (the PI was a TEDx speaker) to present our work to a large audience interested in technology, engineering, and design.

Broadening workforce participation: The proposed robot teleoperation method can also empower new populations of workers (e.g., women) and potentially allow older or injured workers to continue working, thereby reducing skills gaps and improving subsea engineering and science exploration efficiency. We will intentionally recruit women and later-career job seekers in our studies. Findings can also provide insights into continuous human adaptation to teleoperation robots as we deepen our understanding of human-robot shared perception. Moreover, education requirements for robot applications, solicited from stakeholders, can generate recommendations for future workforce training with robotic augmentation. The VR simulators (for robot teleoperation tests) developed in the project will be transferred to labor service providers and customers to test immersive training in robot technologies.

Promote student research experience and STEM education via Virtual Reality: Two PhD students and two undergraduate students will work on this project with different focuses. The thesis topics include understanding robotic design requirements for the future of construction and VR technologies for automation adaptation. The project will involve strong advancement of VR technologies in promoting STEM teaching. Research124 finds that the likelihood of earning a STEM degree is directly related to a student's visuospatial ability, while research125-127 also shows that VR technologies significantly increase the buildup of visuospatial ability. The VR technologies will have a profound impact on 50 STEM students per semester via the proposed course modules.

 

  7. PRIOR NSF SUPPORT

PI, (a) FW-HTF 2128895, $1,457,425, 12/1/2021-11/30/2025, (b) FW-HTF-R/Collaborative Research: Human-Robot Sensory Transfer for Worker Productivity, Training, and Quality of Life in Remote Undersea Inspection and Construction Tasks. (c) Intellectual Merit: VR-based telepresence technology for underwater robot controls; it also includes new educational materials for workforce transformation and the corresponding economic impact analysis. (d) Broader Impacts: VR for lowering barriers to robotic technology adoption; promoting career equity in the maritime industry. (e) Publications:1,2.

PI, (a) CMMI 2024784, $312,985, 9/1/2020-8/31/2024, (b) NRI: INT: ForceBot: Customizable Robotic Platform for Body-Scale Physical Interaction Simulation in Virtual Reality. (c) Intellectual Merit: Exoskeleton-based force simulation platform for VR; promotes knowledge in VR presence studies. (d) Broader Impacts: Broadens the use of VR for training; promotes VR for STEM education in engineering. (e) Publications:3,62.

PI, (a) HDBE 1937878, $416,677, 6/1/2018-5/31/2021, (b) Personalized Systems for Wayfinding for First Responders. (c) Intellectual Merit: Examines a real-time cognitive load prediction system based on EEG and fNIRS; builds a cognition-driven automated wayfinding information system for search and rescue; promotes information processing in disaster management. (d) Broader Impacts: First responder engagement in technology; VR for STEM education for the general public. (e) Publications:83,85-87,90,91,128-134.

Co-PI, (a) C-Accel 2033592, $4,996,958, 9/1/2020-8/31/2023, (b) B2: Learning Environments with Augmentation and Robotics for Next-gen Emergency Responders. (c) Intellectual Merit: A mixed-reality-based training platform for first responders in the adoption of human augmentation technologies, including exoskeletons and augmented reality headsets. (d) Broader Impacts: Advancing training for first responders; promoting VR technologies for training in other industries such as manufacturing and construction. (e) Publications:95.