To advance robot learning, I research machine learning methods and integrated systems that enable embodied social agents to cooperate with people and communicate with them across multiple modalities. My work in machine learning focuses on training neural networks to optimize what users actually care about [NeurIPS'22, CVPR'19]. I have developed algorithms and systems for learning social context [RA-L'22] in the domain of social navigation [arXiv]. To align users' desired goals with robot behaviors, I have also worked on methods for collecting human feedback about robot behaviors at scale [IROS'21] and on incorporating modalities beyond ground-plane motion [HRI'23]. I envision a future where general-purpose robotic platforms are commonplace and can seamlessly learn to work collaboratively with nearby humans in any social situation.
Currently, I am a PhD student in robotics at Yale University and a member of the Interactive Machines Group, advised by Marynel Vázquez. Previously, I did research at Stanford University in the Stanford Vision and Learning Lab under Silvio Savarese, and worked on machine learning and data engineering at Sequoia. For fun, I enjoy designing hardware and embedded systems programming.