Sensing & Reasoning
AI for high-stakes human systems.
How do we build AI that operates reliably in high-stakes environments?
I build AI systems for environments where mistakes matter. I usually split the problem into two parts. First, extract a clean state description from messy sensor streams—video, audio, IMU, network traffic. Then, reason over that state to explain what is happening, predict what will happen under interventions, and decide when the system should act. The core issue is not just accuracy; it is whether the system knows when its output is solid and when it is out of its depth.
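The split described above, state extraction followed by a competency-aware decision, can be sketched as a toy loop. Everything here (the names, the agreement-based confidence score, the 0.8 threshold) is an illustrative assumption for exposition, not the lab's actual pipeline.

```python
from dataclasses import dataclass

@dataclass
class StateEstimate:
    """A cleaned-up state description extracted from raw sensor streams."""
    occupancy: float   # e.g., estimated pedestrian density (hypothetical)
    confidence: float  # self-assessed reliability in [0, 1]

def extract_state(raw_readings: list[float]) -> StateEstimate:
    # Stand-in for a perception model: average the readings, and report
    # lower confidence when the individual readings disagree.
    mean = sum(raw_readings) / len(raw_readings)
    spread = max(raw_readings) - min(raw_readings)
    return StateEstimate(occupancy=mean, confidence=max(0.0, 1.0 - spread))

def decide(state: StateEstimate, threshold: float = 0.8) -> str:
    # Competency-aware decision rule: act only when self-assessed
    # confidence clears the threshold; otherwise defer to a human.
    if state.confidence < threshold:
        return "defer"  # the system is out of its depth
    return "act" if state.occupancy > 0.5 else "wait"

print(decide(extract_state([0.62, 0.60, 0.61])))  # agreement -> "act"
print(decide(extract_state([0.10, 0.95, 0.40])))  # disagreement -> "defer"
```

The point of the sketch is the abstention branch: a reliable system needs an explicit "defer" output, not just a best-guess label.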
Video, audio, physiological, LiDAR, text, RF
Causal graphs, social dynamics, behavioral patterns
Knowing when the system is out of its depth
Justified high-stakes output
I study this across three domains: urban AI (causal discovery, privacy-preserving sensing), social reasoning in human-machine systems (robotics, driver attention), and multimodal health sensing (behavioral biomarkers, spatial intelligence). Each domain exposes different aspects of the reliability problem.
I direct the Sensing & Reasoning Lab at Rutgers University, serve as Rutgers Site Director for CRAIG (the NSF IUCRC on Responsible AI & Governance), and serve as Rutgers Site Lead for CS3 (the NSF ERC for Smart Streetscapes). Previously, I worked at IBM Research and several startups. PhD from UC Berkeley; BS from MIT.
My lab builds competency-aware AI that can explain when its conclusions hold, in cities and health settings where reliability is not optional.
Urban environments generate sensor data at unprecedented scale, but data alone cannot distinguish correlation from causation. Decision-makers need causal understanding to predict the effects of interventions.
How can robots understand social contexts when social reasoning requires common sense that is difficult to program explicitly? And how do we know when social inferences are reliable enough to act on?
Instrumented spaces generate diverse sensor data. Spatial intelligence requires extracting what a space affords while handling distribution shift—similar events producing different readings across time and location.
Sensing & reasoning at different scales—from governance frameworks to city infrastructure to individual interactions.
Role — Rutgers Site Director
When should AI systems act? Developing theoretical foundations and evaluation pipelines for AI that knows its boundaries.
Role — Rutgers Site Lead
City-scale sensing and causal reasoning for pedestrian safety. Real-world deployment of urban AI systems.
Role — Director
Core research on perception, causal inference, and self-assessment. The intellectual engine for all three research pillars.
Role — Research Analyst
Sensing & reasoning applied to baseball: biomechanics from video, strategy analysis, performance modeling.
The long-term goal is AI systems that discover causal, social, and spatial structure, know where that structure applies, and can communicate those boundaries clearly to humans and organizations that rely on them.
This vision shapes my research portfolio over the next 5-10 years through CRAIG (responsible AI governance), CS3 (urban deployment), and NIH CAMERA (health sensing)—three funded initiatives where reliable AI is central.
Interviewed about AI capabilities and the hype around superintelligence
Coverage of our CoRL 2022 work on bite timing prediction for robot-assisted feeding in group dining settings
Selected courses
Comprehensive introduction to programming in C and C++ for ECE students, covering foundational concepts through advanced topics including object-oriented programming, modern C++ features, and multithreading.
In-depth exploration of multimodal learning across audio, video, time series, and language/text data, focusing on foundational concepts, advanced techniques, and practical applications for distributed sensing systems.