(2016 - present) Learning driving style preference for autonomous cars.

This work concerns personalization of driving style. Consider a scenario where you are trying to avoid an obstacle. If the obstacle is a moving biker, you may want to avoid her while also assuring her that you will not hit her. This assurance could be a smile or eye contact, or it could be implicit in the way you maneuver your car around her. You certainly do not owe such assurance to an inanimate obstacle. Some drivers slow down to avoid hitting a car; others speed up and overtake. The way you drive around an obstacle, or would prefer your autonomous car to drive around one, is an example of driving style.

In a bid to learn what driving style people would prefer their autonomous cars to adopt, I first questioned the conventional assumption that an autonomous car should simply adopt its user's driving style. In collaboration with Dr. Anca Dragan from UC Berkeley and Qian Yang from the HCII at Carnegie Mellon University, I conducted user studies in a driving simulator to understand the relationship between users' preferences and their own driving styles. Our findings show that people do not necessarily prefer their autonomous car to drive like them. For more details, please refer to our HRI 2017 paper.

Traditionally, self-driving cars have learnt to drive from human demonstrations. Even the latest self-driving technology, which uses deep CNNs to map forward-facing camera images to steering angles, relies on observations of human driving. Since people do not necessarily want their car to drive like them, we cannot simply rely on conventional Learning from Demonstration (LfD) for driving style personalization. This presents an interesting challenge, and I am excited about it. The solution can be adapted to other preference learning problems where conventional LfD proves inadequate.


(2015 - 2016) Esprit de Corps en route

No, I am not funded by a French agency, nor am I a native French speaker. Nonetheless, I learnt a new French phrase today that summarizes my research on human-autonomous vehicle collaboration on the road. My research assumes that a joint human-machine cognition framework will pave the way for a smooth transition from semi-autonomy to full autonomy in transportation systems. The framework extends to multi-machine and multi-human collaboration, to accommodate the many self-driving and semi-autonomous cars we will see on the roads in the near future. Some of the problems I find particularly relevant as well as challenging are: collaborative streetscape understanding, developing models of the human driving population, and using mutual trust for smooth ceding of control between human and car agents. I formalized the latter problem in a survey paper which I presented at the AAAI Spring Symposium (March 21-23) in Palo Alto.

(2014 - 2015) Privacy preserving Bluetooth Localization

Bluetooth is increasingly used for indoor localization because its shorter range yields less noisy signals than WiFi. Location information can be symbolic, i.e., proximity relative to a device, a person with a known location, or a service-seeking entity; it can also be an absolute or relative coordinate position on the map of a building. I am implementing Bluetooth localization for efficient human arrival time estimation in a human-robot rendezvous problem. People do not like being tracked; our short survey at the Gates Hillman Center at Carnegie Mellon University confirmed as much. So, is it possible to accomplish the tracking without collecting any personally identifiable information from the phone, or potentially identifiable information such as the MAC address? I am developing a method for peer-to-peer localization in which a network of phones identifies a user by the device name. But what if people are unwilling to disclose or change their device names for localization, or the process latency is too high for reliable proximity detection? As a second alternative, I am exploiting several features extracted from the Bluetooth inquiry scan packets recorded by a sniffer.
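Proximity detection of this kind ultimately rests on mapping received signal strength to distance. As an illustration only (the exact method used in this project is not described above), a common starting point is the log-distance path loss model; `tx_power_dbm` (the RSSI calibrated at 1 m) and the path loss exponent are hypothetical values that would need per-device, per-environment calibration:

```python
def rssi_to_distance(rssi_dbm, tx_power_dbm=-59.0, path_loss_exponent=2.0):
    """Estimate distance (meters) from a Bluetooth RSSI reading using the
    log-distance path loss model.

    tx_power_dbm: RSSI measured at 1 m from the transmitter (calibration).
    path_loss_exponent: ~2.0 in free space, higher indoors with obstacles.
    """
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))
```

In practice raw RSSI is noisy, so readings are usually smoothed (e.g., a moving average or Kalman filter) before being converted to distance.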

(2013 - 2014) Dictionary learning and sparse coding for occupancy estimation

Most modern temperature control systems are designed around the maximum occupancy of each room. This works well for offices with fewer than five occupants, but for large shared spaces without a designated owner, such as conference rooms or classrooms, the actual occupancy can vary significantly. Rooms are scheduled for meetings and classes assuming attendance by all registered participants, resulting in inefficient space scheduling and usage. Setting the temperature based on the wrong occupancy hurts either energy usage or occupant comfort. Recent research has shown that 42% of a building's annual energy can be saved with knowledge of fine-grained occupancy. In this project, conducted in collaboration with Professor Anind Dey from Carnegie Mellon University and Dr. Kamalika Das from NASA Ames Research Center, I developed a sparse non-negative matrix factorization (SNMF) based algorithm to predict occupant count from single-sensor carbon dioxide readings.
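The exact SNMF formulation from this project is not reproduced here, but the core idea, factorizing a non-negative data matrix V into non-negative factors W and H with a sparsity penalty on H, can be sketched with standard multiplicative updates. The penalty weight `lam` and iteration count below are illustrative choices, not values from the project:

```python
import numpy as np

def sparse_nmf(V, rank, n_iter=200, lam=0.1, eps=1e-9, seed=0):
    """Sparse NMF: V ~= W @ H with W, H >= 0 and an L1 penalty on H.

    Uses Lee-Seung-style multiplicative updates; the lam term in the
    denominator of the H update enforces sparsity of the activations.
    """
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank))
    H = rng.random((rank, m))
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + lam + eps)  # sparse activation update
        W *= (V @ H.T) / (W @ H @ H.T + eps)        # basis update
    return W, H
```

In an occupancy setting, each column of V could be a window of CO2 readings, W a learned dictionary of CO2 response patterns, and H the (sparse) activations from which occupant counts are predicted.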

(2011 - 2013) Sensor-based predictive modeling for intelligent lighting in grid-integrated buildings

As part of an Internet-of-Things and Smart Lighting project, we developed a low-cost, wireless-sensor-enabled intelligent office lighting system for future grid-integrated buildings. Closed-loop intelligent lighting control relies on dense sensing, which can be avoided by optimal sensor deployment that takes advantage of the spatial correlation of light distribution. The spatial correlations are encoded in models that make piecewise linear predictions of indoor light, discretized by clustering over sky conditions and sun positions. We call these models virtual sensors. For more information, refer to our publication in the IEEE Sensors Journal.
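A minimal sketch of the piecewise linear idea, assuming cluster labels (derived from sky condition and sun position) are already available: fit one least-squares model per cluster that predicts the light level at an unsensed location from a few physical sensors. All function and variable names here are mine for illustration, not from the paper:

```python
import numpy as np

def fit_virtual_sensor(X, y, cluster_ids):
    """Fit one linear model per cluster.

    X: (n_samples, n_sensors) readings from physical light sensors.
    y: (n_samples,) measured light at the virtual sensor location.
    cluster_ids: (n_samples,) discretized sky-condition / sun-position label.
    Returns {cluster_id: coefficient vector including intercept}.
    """
    models = {}
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append intercept column
    for c in np.unique(cluster_ids):
        mask = cluster_ids == c
        coef, *_ = np.linalg.lstsq(Xb[mask], y[mask], rcond=None)
        models[c] = coef
    return models

def predict_virtual_sensor(models, x, cluster_id):
    """Predict light at the virtual location for one sensor reading x."""
    xb = np.append(x, 1.0)
    return float(models[cluster_id] @ xb)
```

Once trained, the virtual sensor replaces a physical one: at runtime only the cluster label and the few deployed sensors are needed.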

Some of my shorter projects geared towards human activity detection and modeling include Kinect-based posture correction with voice feedback, and motion and proxemics detection using a thermal array sensor. In the latter project I used the correlation between the 64 background-subtracted pixels of an infrared sensor array to infer motion near the doorway of a room.
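One plausible reading of that correlation cue, offered only as a sketch since the actual pipeline is not detailed above: background-subtract each 8x8 thermal frame and flag motion when consecutive foreground frames correlate poorly. The threshold and all names are my assumptions:

```python
import numpy as np

def detect_motion(frames, background, corr_threshold=0.5):
    """Flag motion in a stream of 8x8 thermal frames.

    Each frame is background-subtracted; a low correlation between
    consecutive foreground frames suggests the thermal scene changed,
    i.e., something warm moved through the field of view.
    Returns one boolean per consecutive frame pair.
    """
    flags = []
    prev = None
    for frame in frames:
        fg = (frame - background).ravel()  # 64 background-subtracted pixels
        if prev is not None:
            if np.std(fg) < 1e-9 or np.std(prev) < 1e-9:
                corr = 1.0  # flat frame: nothing to correlate, treat as static
            else:
                corr = np.corrcoef(prev, fg)[0, 1]
            flags.append(bool(corr < corr_threshold))
        prev = fg
    return flags
```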

Much of the code developed for the above projects can be found in my Git repository.