
Research projects

Student-centered personalized learning framework to advance undergraduate robotics education

Project lead: David Feil-Seifer

Summary: The goal of this project is to develop supplemental content on robot navigation and Linux embedded-system management for a larger project on undergraduate robotics education. Participating students will study existing robotics courses, along with the first-year programming courses they have already taken, to develop material on these topics that is appropriate for first-year students. In particular, we are looking for hands-on labs in which students program a simulated robot and provide feedback on how the robot's behavior uses sensors, actuators, and planning to achieve its goals. We also want to understand how to build and configure a simple robot, particularly the software configuration of such a system, in order to study robot behavior in simple real-world settings. These materials will allow undergraduates to participate in the project's course content and will prepare them for later work studying how best to provide automated feedback to students completing those exercises.
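
As a flavor of the hands-on labs described above, the sketch below shows a minimal go-to-goal controller for a simulated differential-drive robot: a sensor check, a proportional heading controller, and a forward step. The sensor stub, gains, and thresholds are illustrative assumptions, not material from the actual course.

```python
import math

# Hypothetical first-year lab sketch: drive a simulated differential-drive
# robot toward a goal, stopping when the goal is reached or when a (stubbed)
# range sensor reports a nearby obstacle.

def range_sensor(x, y):
    """Stand-in for a simulated distance sensor; always reports 'clear' here."""
    return 10.0  # meters to the nearest obstacle

def step(x, y, theta, goal, dt=0.1):
    """One control step: proportional turn toward the goal, then move forward."""
    gx, gy = goal
    dist = math.hypot(gx - x, gy - y)
    heading_error = math.atan2(gy - y, gx - x) - theta
    heading_error = math.atan2(math.sin(heading_error), math.cos(heading_error))
    v = 0.0 if dist < 0.05 or range_sensor(x, y) < 0.5 else 0.3   # linear velocity
    w = 1.5 * heading_error                                       # angular velocity
    return x + v * math.cos(theta) * dt, y + v * math.sin(theta) * dt, theta + w * dt

x, y, theta = 0.0, 0.0, 0.0
for _ in range(400):
    x, y, theta = step(x, y, theta, goal=(2.0, 1.0))
print(f"final pose: ({x:.2f}, {y:.2f})")
```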

Student involvement: Students will take the syllabi from the first-year CS programming courses, Introduction to Programming and Computer Science II, and from Introduction to Robotics to identify relevant navigation material and its connections to the programming courses. This will help establish the prerequisite programming content and its relationship to the learning outcomes in the robotics materials. They will prioritize the robotics content based on industrial partners' assessments of workforce needs, and will then adapt exercises from robotics courses to our online platform. If the results are publishable, the students will also help write and submit the paper. If early testing recommends changes, the students will also work with project staff to update the course materials accordingly.

Efficient human-robot interaction and collaboration for remotely situated agents

Project lead: Alireza Tavakkoli

Summary: Human-robot interaction with non-collocated operators requires situational awareness. Prior work yielded an integrated physical-virtual environment for remotely operating and collaborating with robotic agents. This enabled the study of effective modeling and visualization of remote sensory data, integration of human and robot kinematics, design of efficient user interfaces for integrated physical-virtual interactions, and modeling of human intent during structured tasks. This work will guide the design of efficient architectures that generate virtual representations of the remote environment from 3D reconstruction data, supporting "virtual interaction" by representing object-level semantics and relationships so that the human operator maintains situational awareness of the remote robot's environment.

Traditional human-robot collaboration over long distances is challenged by high-latency or intermittent communication between the human and the remote robotic agents. While immersive VR environments for the end user are becoming more common for static environments, there is still a need for frameworks that handle the global and dynamic structure of the environment, semantically segment its contents, encode object-scene hierarchies, and enable VR-based interactions. This project will investigate dynamic, on-the-fly generation of the robotic agent's remote environment with the high level of fidelity required to give remote human collaborators effective situational awareness.
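
One way to picture the object-scene hierarchies mentioned above is as a small scene graph whose nodes carry semantic labels and poses and can be queried by the VR front end. The sketch below is a minimal, hypothetical data structure for exposition, not the project's actual representation.

```python
from dataclasses import dataclass, field

# Minimal sketch of an object-scene hierarchy: a scene graph whose nodes carry
# a semantic label and a pose, so a VR front end can render and query objects
# by relationship. Classes and fields are illustrative assumptions.

@dataclass
class SceneNode:
    label: str                       # semantic class, e.g. "table", "mug"
    pose: tuple = (0.0, 0.0, 0.0)    # position in the reconstructed frame
    children: list = field(default_factory=list)

    def add(self, child: "SceneNode") -> "SceneNode":
        self.children.append(child)
        return child

    def find(self, label: str):
        """Depth-first search for the first node with a given semantic label."""
        if self.label == label:
            return self
        for c in self.children:
            hit = c.find(label)
            if hit:
                return hit
        return None

room = SceneNode("room")
table = room.add(SceneNode("table", pose=(1.2, 0.4, 0.0)))
table.add(SceneNode("mug", pose=(1.3, 0.5, 0.8)))
print(room.find("mug").pose)
```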

Student involvement: Students will begin by either observing current lab research in human-robot teaming and public spaces and learning how to replicate that work, or utilizing the established architectures for remote HRI from our prior work. The students will then design their own study or analysis based on experiment designs from the relevant literature, modify the existing architecture's code to account for changes to the experiment design, and conduct the necessary experiments with participants in public-space settings.

Socially-aware navigation in public spaces

Project lead: David Feil-Seifer

Summary: The goal of this project is to investigate how human-robot interaction is best structured for public-space interaction. Prior NSF-funded research has yielded a socially-aware navigation planner for autonomous navigation near people. We are also leveraging multiple public spaces, such as museums, academic buildings, and schools, to study the effects that socially-aware navigation has on people observing or interacting with those robots. In order for robots to be integrated smoothly into our daily lives, a much clearer understanding of fundamental human-robot interaction principles is required. This project will study how socially appropriate and, importantly, socially inappropriate interaction can affect human-robot collaboration.

Human-human interaction offers a model for how collaboration between humans and robots may occur. While there has been an explosion of research into HRI over the last decade, the large majority of this work examines short-term HRI scenarios, and the studies that have examined long-term HRI scenarios have been very specialized in nature. We are exploring a more general approach to studying HRI in public venues. Early results have shown that we can use perceived social intelligence (PSI) as a measure of how a robot's social performance is perceived. We have also observed pitfalls for long-term adoption of robots in group settings. To explore the nature of collaborative HRI in public settings, we will work with REU participants to develop controlled experiments that isolate individual aspects of social appropriateness, such as social rule-following, goal orientation, social reasoning, and deference. Students will examine these factors in a single-session experiment that they design for a museum environment. The results from these experiments will be used to focus multi-session follow-up studies conducted between summers.
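
To make the idea of socially-aware navigation concrete, the sketch below scores candidate paths with an ordinary length term plus a proxemics penalty for passing close to people, so a planner would prefer a polite detour. The Gaussian penalty, weights, and toy paths are assumptions for illustration, not the lab's actual planner.

```python
import math

# Hedged sketch of a socially-aware path cost: geometric path length plus a
# Gaussian "personal space" penalty for passing near each person.

def social_cost(path, people, sigma=0.9, weight=5.0):
    """Sum of step lengths plus a discomfort penalty near each person."""
    cost = 0.0
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        cost += math.hypot(x1 - x0, y1 - y0)            # geometric path length
        for (px, py) in people:
            d2 = (x1 - px) ** 2 + (y1 - py) ** 2
            cost += weight * math.exp(-d2 / (2 * sigma ** 2))  # proxemics penalty
    return cost

direct = [(0, 0), (1, 0), (2, 0), (3, 0)]
detour = [(0, 0), (1, 1), (2, 1), (3, 0)]
person = [(1.5, 0.0)]
print(social_cost(direct, person), social_cost(detour, person))  # the detour scores lower
```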

Student involvement: The undergraduate students will begin by either observing current lab research in human-robot teaming and public spaces and learning how to replicate that work, or utilizing HRI datasets recorded from prior experiments. The students will then select an aspect of collaboration that interests them and conduct a literature review of research in human-human and human-robot interaction. They will then design their own study or analysis based on experiment designs observed in this literature review and modify existing autonomous robot code to account for changes to the experiment design. Finally, students will conduct the necessary experiments with participants on campus and document the results for presentation to the lab, with an eye toward a multi-session follow-up study. This progression will teach them how human-robot interaction experiments are designed, implemented, and refined.

Tightly-coupled collaborative human-robot teams

Project lead: Monica Nicolescu

[Image: A PR2 robot in front of a table with a teapot and cups; on-screen caption reads, "ROBOT: Please start with the instructions."]

Summary: The goal of this project is to investigate the development of control architectures that enable robots to perform complex tasks in tight collaboration with human teammates. In collaborative domains, in addition to being capable of performing a wide range of tasks, a successful robot team member should take actions that support and enhance the collaboration. Prior research has investigated the problem of coordinating human-robot teamwork in the context of loosely coupled tasks. However, in practical applications such as construction, household, and assistive domains, a wide range of tasks require tightly-coupled coordination of teammates. This poses significant challenges for synchronizing the agents' actions as well as for the task execution itself, as teammates need to adjust their actions to each other.

The problem of coordinated task execution has been widely addressed in the context of teams consisting only of robots. While coordination in a multi-robot system can be achieved through direct messaging across the team, communication between robots and human teammates has to account for the differences in representation and communication between the two. In addition, current dual-arm humanoid robotic platforms such as Baxter and the PR2 enable the development of multitasking capabilities, in which a robot may concurrently use its two arms as well as its mobile base (when applicable). In the context of tightly-coupled interactions, enabling a robot to multitask raises challenges related to the autonomous allocation and coordination of the robot's own actuators in order to avoid conflicting actions. If the robot is working alongside a human teammate, the coordination and synchronization problems become even more complex, as they require the ability to perceive and classify the human's actions as well as an awareness of task progress and pace. To address these challenges, this project will investigate the development of the robot coordination skills necessary for effective participation in teamwork in tightly-coupled domains.

In particular, we will address the following problems:

  1. develop single-robot multitasking capabilities for dual-arm (and mobile) robots (a minimal actuator-allocation sketch follows this list);
  2. develop perceptual skills for real-time classification of human behavior and its pace;
  3. design architectural control mechanisms for synchronization and adaptation of behavior execution in heterogeneous human-robot teams.
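
As a toy illustration of the actuator-allocation issue in problem 1, the sketch below greedily assigns concurrent task steps to free actuators (left arm, right arm, mobile base) so that no actuator is double-booked. The task names and the greedy policy are illustrative assumptions, not the architecture used in the lab.

```python
# Hedged sketch: concurrent task steps must not claim the same actuator at the
# same time; a simple greedy pass assigns each step to a free, usable actuator.

ACTUATORS = {"left_arm", "right_arm", "base"}

def allocate(task_steps):
    """Greedily assign each pending step to a free actuator it can use."""
    free = set(ACTUATORS)
    schedule = {}
    for step, usable in task_steps:           # e.g. ("pour_tea", {"right_arm"})
        choices = usable & free
        if choices:
            actuator = sorted(choices)[0]
            schedule[step] = actuator
            free.discard(actuator)            # an actuator does one step at a time
    return schedule

steps = [("hold_cup", {"left_arm", "right_arm"}),
         ("pour_tea", {"right_arm"}),
         ("approach_table", {"base"})]
print(allocate(steps))
```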

Student involvement: The students will begin by studying related work that describes our supporting research, move on to writing simple controllers using our architecture, and then study robot control systems already developed in our group. After this initial stage, the students will select a relevant aspect of multi-agent coordination from the three problem areas listed above. The newly developed capabilities will then be tested on physical robots (Baxter, PR2) performing tasks that require tightly-coupled coordination with human teammates. This will provide the students with a solid understanding of robot control practices, as well as of the challenges of real-time perception and of controlling heterogeneous multi-agent teams.

Robotic learning for mobile manipulation

Project lead: Christos Papachristos

Summary: The goal of this project is to develop an efficient learning-based pipeline for whole-body velocity control of a wheeled robot equipped with a 6-DoF manipulator arm, relying on a simulated digital-twin environment, and then to transfer and deploy the resulting trained policy on the actual autonomous Mobile Manipulation System hosted at the Robotic Workers Lab of UNR. A common bottleneck that limits the real-time applicability of mobile robot path planning is the significant computational cost of solving high-order multi-objective optimization problems. Computationally simpler reactive approaches to robot navigation and arm velocity-space control, such as potential-field-driven schemes, suffer instead from an inability to predict and avoid local minima. In high-dimensional problems involving Mobile Manipulation Systems especially, additional challenges have to be factored in, such as motions that can lead into near-singular kinematic configurations.
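
The local-minimum problem mentioned above can be seen in a few lines: a potential-field controller sums an attractive gradient toward the goal and repulsive gradients away from obstacles, and between symmetrically placed obstacles these terms can cancel, stalling the robot short of the goal. The gains and geometry below are invented purely for illustration.

```python
import numpy as np

# Tiny sketch of a potential-field local minimum: attraction toward the goal
# and repulsion from two symmetric obstacles cancel before the gap is reached.

def gradient(pos, goal, obstacles, k_att=1.0, k_rep=1.5, influence=1.0):
    grad = k_att * (goal - pos)                      # attraction toward the goal
    for obs in obstacles:
        d = np.linalg.norm(pos - obs)
        if d < influence:
            grad += k_rep * (pos - obs) / (d ** 3)   # repulsion from the obstacle
    return grad

pos = np.array([0.0, 0.0])
goal = np.array([4.0, 0.0])
obstacles = [np.array([2.0, 0.6]), np.array([2.0, -0.6])]
for _ in range(300):
    pos = pos + 0.01 * gradient(pos, goal, obstacles)
print(pos)   # stalls well before the goal at x = 4
```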

For Mobile Manipulation Systems to become deployable across the widest possible range of applications, they will have to respond quickly across many operating scenarios. A reinforcement-learning pipeline can be integrated with the robot to provide real-time action adaptation toward a continuously evolving mobile manipulation objective that requires coordinated motion of the whole robot body, wheeled base and manipulator arm alike. Several technical challenges must be addressed: setting up an efficient reinforcement learning scheme that operates with a simulated digital twin of the real autonomous Mobile Manipulation System; synthesizing and evaluating an appropriate structure for a promising policy; and training the associated module to perform local collision-free robot navigation and arm motion while avoiding kinematic singularities and pursuing a dynamic goal. The resulting policy should be transferable to, and verifiable on, the real-world analogue of the simulated system in a sequence of mockup experiments.
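
As a rough picture of the learning objective described above, the sketch below shapes a reward from three terms: progress toward a (possibly moving) goal, a penalty for violating a collision margin, and a penalty when the Yoshikawa manipulability measure drops, i.e., when the arm nears a kinematic singularity. The weights, thresholds, and stand-in Jacobian are assumptions for exposition, not the lab's actual reward.

```python
import numpy as np

# Illustrative reward shaping for whole-body velocity control: track the goal,
# keep a collision margin, and stay away from low-manipulability configurations.

def manipulability(jacobian):
    """Yoshikawa measure sqrt(det(J J^T)); approaches 0 near singularities."""
    return float(np.sqrt(max(np.linalg.det(jacobian @ jacobian.T), 0.0)))

def reward(ee_pos, goal_pos, min_obstacle_dist, jacobian,
           w_goal=1.0, w_obs=2.0, w_sing=0.5):
    goal_term = -w_goal * np.linalg.norm(ee_pos - goal_pos)          # chase the moving goal
    obs_term = -w_obs if min_obstacle_dist < 0.15 else 0.0           # collision-margin penalty
    sing_term = -w_sing if manipulability(jacobian) < 0.01 else 0.0  # singularity penalty
    return goal_term + obs_term + sing_term

J = np.eye(3) * 0.5                                 # stand-in 3x3 Jacobian
print(reward(np.zeros(3), np.array([0.5, 0.0, 0.2]), 0.4, J))
```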

Student involvement: REU students will be introduced to our Mobile Manipulation pipeline while observing currently ongoing research and replicating the corresponding functionality related to body-and-arm velocity-space control and state estimation. The students will then begin tailoring the learning pipeline and investigating the effectiveness of their proposed approaches. Progress will be tracked through systematic discussions and reporting meetings. The eventual product of this work will be evaluated and presented in simulation studies. Students will be mentored on transferring the corresponding module to the real system, and they will conduct mockup experiments to evaluate and validate their pipeline's real-world applicability. The project will be reported and documented, and the final development will be open-sourced.

Understanding user comfort through interactions

Project lead: Emily Hand

Summary: The goal of this project is to gain an understanding of user comfort in HRI settings through interactions. Specifically, students will investigate three types of interaction: natural language, facial expressions, and body language. In HRI settings, it is essential for a robot to be able to react to its environment, and in this case to the human with whom it is interacting. To make the interactions more natural, the robot needs to respond to the user's verbal and nonverbal social cues and adjust its behavior accordingly. Students will implement lightweight, explainable machine learning models for these problems. It is essential that these models be explainable: the robots will be deployed in real-world settings, and when errors are made it must be possible to pinpoint the problem and correct it.
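
One lightweight, explainable model of the kind described above is a logistic regression over hand-engineered cue features, whose learned weights can be read off directly when the robot misjudges a user's comfort. The feature names and toy data below are invented for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Sketch of an inspectable comfort classifier: each learned coefficient shows
# how a verbal or nonverbal cue pushes the prediction, which supports debugging
# when the robot gets it wrong. Features and labels here are synthetic.

features = ["smile_intensity", "gaze_at_robot", "backing_away", "negative_word_count"]
X = np.array([[0.8, 0.9, 0.0, 0],
              [0.1, 0.2, 1.0, 3],
              [0.6, 0.7, 0.0, 1],
              [0.0, 0.1, 1.0, 4]], dtype=float)
y = np.array([1, 0, 1, 0])   # 1 = comfortable, 0 = uncomfortable

model = LogisticRegression().fit(X, y)
for name, w in zip(features, model.coef_[0]):
    print(f"{name}: {w:+.2f}")          # sign/magnitude explain each cue's influence
print("predicted comfort:", model.predict([[0.5, 0.8, 0.0, 0]])[0])
```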

Student involvement: This research will give students the opportunity to learn about state-of-the-art machine learning models as well as bias, fairness, and explainability in machine learning. They will collect and label data, train models, and deploy and evaluate machine learning systems. They will be involved in every aspect of the ML pipeline.

Network and security management of heterogeneous robotics devices in a smart city

Project lead: Shamik Sengupta

Summary: With the advent of Smart City concepts, modern robotics deployments involve not just single robots but multiple, heterogeneous robotic devices networked with one another to complete mission-critical operations. Efficient and secure wireless communication is of paramount importance in such a networked robotics environment. This project will focus on deploying first one and then multiple UAVs in a smart city environment to provide coverage to the smart city population as needed. It will also analyze the security of robot deployments in such environments.

Student involvement: Working with the project lead and the student research team, students will develop the network-enabled robotic devices and a set of radio modules for spectrum handover, spectrum adaptation, synchronization, and channel agility. They will assist with the experimental evaluation and with the analysis describing the results achieved. This first-hand experience with wireless communication for robotic systems will teach them how to design and develop modules for mission-centric networks in a dynamic manner, and it will build the writing skills necessary for scientific publications.
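
As a small illustration of the spectrum-handover logic students might prototype, the sketch below keeps a radio on its current channel until measured interference exceeds a threshold and then hands over to the least-interfered channel. The channel names, threshold, and sensed values are placeholders, not measured data.

```python
# Hedged sketch of a spectrum-handover rule: stay on the current channel while
# interference is acceptable; otherwise switch to the quietest available channel.

def choose_channel(current, interference, threshold=0.6):
    """interference: dict of channel -> normalized interference level in [0, 1]."""
    if interference[current] <= threshold:
        return current                              # no handover needed
    return min(interference, key=interference.get)  # hand over to the quietest channel

sensed = {"ch1": 0.85, "ch6": 0.20, "ch11": 0.45}
print(choose_channel("ch1", sensed))   # -> "ch6"
```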