Research

Peer tutoring, in which one student teaches material to another, has been shown to increase learning. In particular, there can be significant learning gains for the person taking on the role of teacher if they engage in reflective knowledge building as they tutor. Unfortunately, it is difficult to find other students who can play this role, and students are unlikely to find the interaction believable when adults play it. We propose that a suitable social robot could believably play the role of an ignorant learner while in reality already understanding the concept being taught, allowing it to subtly guide the student teacher through its behaviors and modes of interaction (a technique often referred to as "back leading").

This work is being done under Dr. Reid Simmons at Carnegie Mellon University. See our project website, Multi-Modal Communications for Teachable Robots. All work is funded by the NSF.

Related Publications

[Conference] R. Kaushik and R. Simmons, “Perception of Emotion in Torso and Arm Movements on Humanoid Robot Quori,” in Companion of the 2021 ACM/IEEE International Conference on Human-Robot Interaction, 2021.

How should two bodies of different morphology move in order to imitate one another? While exact replication of activity is not possible, particularly when one system has many fewer degrees of freedom than the other, we can say that perceptual imitation is achieved when human viewers see the same activity in the two bodies. Even between humans, differing mobility and limb lengths create differences in how a task is executed. Yet, despite these differences, we imitate each other in many situations: children learn new movements from the adults around them, people in exercise classes follow the movements of an instructor, and pedestrians take cues from the people around them on the sidewalk. Moreover, simple cartoon characters and robots are frequently seen as "doing the same thing" as their natural counterparts. These examples show that perceptual imitation is both possible and common.


Take a look at this short animation and compare the movement of the two "Broom" robots. The character on the left is mirroring the "verticality," or leaning of the spine, of the human motion-capture skeleton in the center. The robot on the right is imitating one arm of the human skeleton. My work on this project included understanding how these non-humanoid characters with very few degrees of freedom can perceptually imitate the higher-DOF skeleton, which has implications for understanding how imitation can occur between two very different moving bodies.
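For intuition, here is a minimal sketch of how a scalar verticality measure could be extracted from two skeleton joints and mapped onto a single "lean" joint of a low-DOF character. The function names, the z-up mocap convention, and the one-joint mapping are illustrative assumptions, not the exact formulation used in the publications below.

```python
import numpy as np

def spine_verticality(pelvis, neck):
    """Angle (radians) between the pelvis-to-neck vector and the world
    vertical axis: 0 means fully upright, pi/2 means horizontal."""
    spine = np.asarray(neck, dtype=float) - np.asarray(pelvis, dtype=float)
    vertical = np.array([0.0, 0.0, 1.0])  # assumes a z-up mocap convention
    cos_angle = np.dot(spine, vertical) / np.linalg.norm(spine)
    return np.arccos(np.clip(cos_angle, -1.0, 1.0))

def map_to_lean_joint(angle, max_lean=np.pi / 3):
    """Map the measured verticality onto a single revolute 'lean' joint
    of a low-DOF character, saturating at the joint limit (hypothetical)."""
    return np.clip(angle, 0.0, max_lean)

# Example: a skeleton leaning roughly 30 degrees from vertical.
pelvis = [0.0, 0.0, 1.0]
neck = [0.5, 0.0, 1.0 + np.sqrt(3) / 2]
lean = map_to_lean_joint(spine_verticality(pelvis, neck))
print(f"lean command: {np.degrees(lean):.1f} degrees")
```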

This work was done under Dr. Amy LaViers at the University of Illinois at Urbana-Champaign. All work was funded by DARPA, except the MOCO 2020 work, which was sponsored by Siemens.

Related Publications

[Conference] R. Kaushik, A. K. Mishra, and A. LaViers, “Feasible Stylized Motion: Robotic Manipulator Imitation of a Human Demonstration with Collision Avoidance and Style Parameters in Increasingly Cluttered Environments,” in Proceedings of the 7th International Conference on Movement and Computing, 2020, pp. 1–8.

[Journal] R. Kaushik and A. LaViers, “Imitation of Human Motion by Low Degree-of-Freedom Simulated Robots and Human Preference for Mappings Driven by Spinal, Arm, and Leg Activity,” International Journal of Social Robotics, 2019.

[Thesis] R. Kaushik, “Developing and Evaluating a Model for Human Motion to Facilitate Low Degree-of-Freedom Robot Imitation of Human Movement,” Master’s thesis, 2019.

[Conference] R. Kaushik and A. LaViers, “Using Verticality to Classify Motion: Analysis of Two Indian Classical Dance Styles,” presented at the AISB 2019 Symposium on “Movement that Shapes Behaviour,” 2019.

[Conference] R. Kaushik and A. LaViers, “Imitating Human Movement Using a Measure of Verticality to Animate Low Degree-of-Freedom Non-humanoid Virtual Characters,” in Social Robotics, vol. 11357, S. S. Ge, J.-J. Cabibihan, M. A. Salichs, E. Broadbent, H. He, A. R. Wagner, and Á. Castro-González, Eds. Cham: Springer International Publishing, 2018, pp. 588–598.

[Conference] R. Kaushik, I. Vidrin, and A. LaViers, “Quantifying Coordination in Human Dyads via a Measure of Verticality,” in Proceedings of the 5th International Conference on Movement and Computing - MOCO ’18, Genoa, Italy, 2018, pp. 1–8.

This work presents a novel geometric framework for intuitively encoding and learning a wide range of trajectory-based skills from human demonstrations. Our approach identifies and extracts the main characteristics of the demonstrated skill, namely the spatial correlations across different demonstrations. Using these extracted characteristics, the approach generates a continuous representation of the skill based on the concept of canal surfaces.
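As a rough illustration, the sketch below builds a toy canal-surface representation from a set of time-aligned demonstrations: the directrix is the mean trajectory, and the radius at each sample bounds the spread of the demonstrations around it. The time-alignment assumption, the circular cross-sections, the simple offset-based reproduction policy, and the function names are all simplifications for illustration; the published approach is parameter-free and handles the geometry more carefully.

```python
import numpy as np

def canal_surface_from_demos(demos):
    """Build a toy canal-surface representation from aligned demonstrations.

    demos: array of shape (n_demos, T, 3), trajectories already
    time-aligned to a common length T (an assumption made here).

    Returns the directrix (mean curve, T x 3) and per-sample radii (T,)
    of the enclosing spheres whose envelope forms the canal surface.
    """
    demos = np.asarray(demos, dtype=float)
    directrix = demos.mean(axis=0)                        # center curve
    radii = np.linalg.norm(demos - directrix, axis=2).max(axis=0)
    return directrix, radii

def reproduce(directrix, radii, ratio):
    """Generate one new trajectory inside the canal by offsetting the
    directrix by a fixed fraction of the local radius (toy policy)."""
    offset = np.zeros_like(directrix)
    offset[:, 1] = ratio * radii  # offset along one axis for simplicity
    return directrix + offset

# Toy demonstrations: three noisy arcs.
t = np.linspace(0, np.pi, 50)
base = np.stack([t, np.sin(t), np.zeros_like(t)], axis=1)
demos = np.stack([base + 0.05 * np.random.randn(*base.shape) for _ in range(3)])
directrix, radii = canal_surface_from_demos(demos)
new_traj = reproduce(directrix, radii, ratio=0.5)
```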


This work was done under Dr. Sonia Chernova at Georgia Tech, with funding provided by the SURE Robotics (Research Experience for Undergraduates) Program.

Related Publications

[Conference] S. R. Ahmadzadeh, R. Kaushik, and S. Chernova, “Trajectory Learning from Demonstration with Canal Surfaces: A Parameter-Free Approach,” in 2016 IEEE-RAS 16th International Conference on Humanoid Robots (Humanoids), Cancun, Mexico, 2016, pp. 544–549.