Pradipta is a Postdoctoral Research Scholar in the USC Viterbi Department of Computer Science at the University of Southern California. He obtained his PhD in Electrical Engineering from the USC Ming Hsieh Department of Electrical Engineering at the University of Southern California in August 2018. He is a Ming Hsieh Institute (MHI) Scholar and a USC Provost Fellow. His current research interests are in the areas of Cyber-Physical Systems, Wireless Sensor Networks, Wireless Robotics Networks, Vehicular Networks, and Cloud Computing. Pradipta received the Electrical and Computer Engineering Best Dissertation Award. He is also a member of the IEEE and the ACM.
Pradipta completed his Bachelor of Engineering in Electronics and Tele-Communication Engineering in 2012 at Jadavpur University, Kolkata, India. His undergraduate research was recognized with a UGC Infrastructure Grant for Undergraduate Research.
PhD in Electrical Engineering, 2018
Viterbi School of Engineering, University of Southern California
BE in Electronics and Tele-Communication Engineering, 2012
Jadavpur University, Kolkata, India
We present Autonomous Rssi based RElative poSitioning and Tracking (ARREST), a new robotic sensing system for tracking and following a moving, RF-emitting object, which we refer to as the Leader, based solely on signal strength information. Our proposed tracking agent, which we refer to as the TrackBot, uses a single rotating, off-the-shelf, directional antenna, novel angle and relative speed estimation algorithms, and Kalman filtering to continually estimate the relative position of the Leader with decimeter-level accuracy (comparable to a state-of-the-art multiple-access-point-based RF localization system) and the relative speed of the Leader with accuracy on the order of 1 m/s. The TrackBot feeds the relative position and speed estimates into a Linear Quadratic Gaussian (LQG) controller to generate a set of control outputs governing its orientation and movement. We perform an extensive set of real-world experiments with a full-fledged prototype to demonstrate that the TrackBot stays within 5 m of the Leader with: (1) more than 99% probability in line-of-sight scenarios, and (2) more than 75% probability in non-line-of-sight scenarios, when it moves 1.8X faster than the Leader.
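As a rough illustration of the estimate-then-control loop described above (not the ARREST implementation, and simplified to one dimension with assumed noise parameters), the sketch below tracks the relative position of the Leader with a scalar Kalman filter fed by range measurements, then applies a certainty-equivalent LQR gain, the control half of an LQG controller, to the estimate:

```python
# 1-D sketch: relative position x shrinks when the TrackBot moves toward the Leader.
dt = 0.1
A, B = 1.0, -dt      # dynamics: x_{k+1} = x_k - dt * u_k
Q, R = 0.01, 0.25    # assumed process / measurement noise variances

def kalman_step(x_hat, P, z, u):
    """One predict/update cycle of the scalar Kalman filter."""
    x_pred = A * x_hat + B * u           # predict using the known control input
    P_pred = A * P * A + Q
    K = P_pred / (P_pred + R)            # Kalman gain
    return x_pred + K * (z - x_pred), (1.0 - K) * P_pred

def lqr_gain(q=1.0, r=0.01, iters=500):
    """Iterate the scalar discrete Riccati equation to its steady-state gain."""
    P, K = q, 0.0
    for _ in range(iters):
        K = (B * P * A) / (r + B * P * B)
        P = q + A * P * A - A * P * B * K
    return K

# Closed loop: Leader stationary 5 m ahead; measurement noise omitted for clarity.
K_lqr = lqr_gain()
x_true, x_hat, P = 5.0, 0.0, 1.0
for _ in range(50):
    u = -K_lqr * x_hat                   # LQG separation: control acts on the estimate
    x_true = A * x_true + B * u          # true relative position evolves
    x_hat, P = kalman_step(x_hat, P, z=x_true, u=u)
```

In the full system the measurement would come from the RSSI-based angle and range estimators, and the state, controller, and filter are multidimensional; the separation structure (filter the noisy estimate, then apply the optimal gain to it) is the same.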
Detecting activities from video taken with a single camera is an active research area for ML-based machine vision. In this paper, we examine the next research frontier: near real-time detection of complex activities spanning multiple (possibly wireless) cameras, a capability applicable to surveillance tasks. We argue that a system for such complex activity detection must employ a hybrid design: one in which rule-based activity detection complements neural-network-based detection. Moreover, to be practical, such a system must scale well to multiple cameras and have low end-to-end latency. Caesar, our edge-computing-based system for complex activity detection, provides an extensible vocabulary of activities to allow users to specify complex actions in terms of spatial and temporal relationships between actors, objects, and activities. Caesar converts these specifications to graphs, efficiently monitors camera feeds, partitions processing between cameras and the edge cluster, retrieves minimal information from cameras, carefully schedules neural network invocation, and efficiently matches specification graphs to the underlying data in order to detect complex activities. Our evaluations show that Caesar can reduce wireless bandwidth, on-board camera memory, and detection latency by an order of magnitude while achieving good precision and recall for all complex activities on a public multi-camera dataset.
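To give a flavor of the specification-matching idea (this is not Caesar's API; the detection tuples, action names, and `matches` helper are hypothetical, and a real specification graph also carries spatial predicates and cross-camera actor re-identification), the sketch below matches a simple linear specification, a chain of actions by one actor connected by "before" temporal edges, against a stream of atomic detections:

```python
# Hypothetical atomic detections: (actor_id, action, camera, start_time, end_time).
detections = [
    ("p1", "enter", "cam1", 0.0, 1.0),
    ("p1", "drop_bag", "cam1", 2.0, 3.0),
    ("p1", "exit", "cam2", 5.0, 6.0),
]

def matches(spec, detections):
    """True if some actor performs the spec's actions in temporal order
    (each action starts after the previous matched action ends)."""
    by_actor = {}
    for actor, action, cam, t0, t1 in detections:
        by_actor.setdefault(actor, []).append((action, t0, t1))
    for acts in by_actor.values():
        acts.sort(key=lambda a: a[1])    # order by start time
        i, last_end = 0, float("-inf")
        for action, t0, t1 in acts:
            if i < len(spec) and action == spec[i] and t0 >= last_end:
                last_end = t1            # "before" edge satisfied; advance the spec
                i += 1
        if i == len(spec):
            return True
    return False

spec = ["enter", "drop_bag", "exit"]     # enter BEFORE drop_bag BEFORE exit
```

General specification graphs would need subgraph matching over multiple actors and relation types rather than this single-actor subsequence scan, but the core operation, checking that detected events satisfy the graph's temporal edges, is the same.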
I have been a teaching assistant for the following courses at the University of Southern California: