Abstract:
Although autonomous agents are built for a myriad of applications, their perceptual systems are designed with a single line of thought: perception delivers a 3D representation of the scene. Deploying these traditional methods on most autonomous agents (such as drones) is highly inefficient, as the algorithms are generic rather than parsimonious. In stark contrast, the perceptual systems of biological beings have evolved to be highly efficient, shaped by their natural habitat and their day-to-day tasks.
We draw inspiration from nature to build a minimalist cognitive framework for robots at scales never thought possible before. We propose a novel Parsimonious AI framework for mobile robots to solve a class of perception and navigation tasks, such as traversing dynamic unstructured environments and segmenting never-before-seen objects. We exploit the fact that computing is only a small aspect of a robot and re-imagine the robot from the ground up based on the class of tasks to be accomplished. This leads to a set of tight constraints that aid in efficiently solving the problem of autonomy.
Bio:
Chahat Deep Singh is a Ph.D. student in the Perception and Robotics Group (PRG), working with Professor Yiannis Aloimonos and Associate Research Scientist Cornelia Fermüller. He completed his Master's in Robotics at the University of Maryland in 2018 and later joined the Department of Computer Science as a Ph.D. student. Singh's research focuses on developing bio-inspired minimalist cognitive architectures for mobile robot autonomy; he works on computationally efficient, bio-inspired algorithms with an emphasis on computer vision. He was awarded the Ann G. Wylie Fellowship for 2022-2023, the Future Faculty Fellowship for 2022-2023, and UMD's Dean's Fellowship in 2020. He has served as the Maryland Robotics Center Student Ambassador since 2021. His work has been featured in IEEE Spectrum, NVIDIA, Voice of America, Futurism, Mashable, and many more.
About Me
I am a third-year Ph.D. student and a Dean's Fellow in the Perception & Robotics Group (PRG) at the University of Maryland, College Park (UMD), advised by Prof. Yiannis Aloimonos and Dr. Cornelia Fermüller. PRG is associated with the University of Maryland Institute for Advanced Computer Studies (UMIACS) and the Autonomy, Robotics and Cognition (ARC) Lab.
Interests: Active perception and deep learning applications to boost multi-robot interaction and navigation ability in aerial robots.
Prior to pursuing my Ph.D., I did my Master's in Robotics at UMD, where I worked on active behavior of aerial robots and published GapFlyt, in which we used the aerial agent's motion cues to detect a gap of unknown shape using a monocular camera. Apart from research, I love to teach! Along with Nitin J. Sanket, I designed and taught an open-source graduate course, ENAE788M (Hands-On Autonomous Aerial Robotics), at UMD in Fall 2019. In my spare time, I love to capture nature on my camera, especially landscape and wildlife photographs, and to watch and play competitive video games such as Counter-Strike and Dota 2.

Teaching
ENAE788M is an advanced graduate course that exposes students to the mathematical foundations of computer vision, planning, and control for aerial robots. Nitin J. Sanket and I designed and taught this course, which balances theory with applications on hardware.
The entire course is open-source! The links to video lectures and projects are given below:
CMSC 733 is an advanced graduate course on classical and deep learning approaches to geometric computer vision and computational photography, covering image formation, visual features, image segmentation, recognition, motion estimation, and 3D point clouds. We redesigned this course to showcase how to model classical 3D geometry problems using deep learning!
The entire course is open-source! The links to projects and student outputs are given below:
CMSC 426 is an introductory course on computer vision and computational photography that explores image formation, image features, image segmentation, image stitching, image recognition, motion estimation, and visual SLAM.
The entire course is open-source! The links to projects and student outputs are given below:
Research
Nitin J. Sanket, Chahat Deep Singh, Chethan M. Parameshwara, Cornelia Fermüller, Guido C.H.E. de Croon, Yiannis Aloimonos, Robotics: Science and Systems (RSS), 2021.
Nitin J. Sanket, Chahat Deep Singh, Varun Asthana, Cornelia Fermüller, Yiannis Aloimonos, IEEE International Conference on Robotics and Automation (ICRA), 2021.
Nitin J. Sanket, Chahat Deep Singh, Cornelia Fermüller, Yiannis Aloimonos, IEEE Transactions on Robotics (under review), 2020.
Chethan M. Parameshwara, Nitin J. Sanket, Chahat Deep Singh, Cornelia Fermüller, Yiannis Aloimonos, IEEE International Conference on Robotics and Automation (ICRA), 2021.
Nitin J. Sanket*, Chethan M. Parameshwara*, Chahat Deep Singh, Cornelia Fermüller, Davide Scaramuzza, Yiannis Aloimonos, IEEE International Conference on Robotics and Automation (ICRA), Paris, 2020.
* Equal Contribution
Chahat Deep Singh*, Nitin J. Sanket*, Kanishka Ganguly, Cornelia Fermüller, Yiannis Aloimonos, IEEE Robotics and Automation Letters, 2018.
* Equal Contribution
Video Tutorials
Services
Contact Me