Sensing & Reasoning Lab · Rutgers University

Jorge Ortiz

AI for high-stakes human systems.

Associate Professor, Rutgers
Director, Sensing & Reasoning Lab
Rutgers Site Director, CRAIG (NSF IUCRC)
Rutgers Site Lead, CS3 (NSF ERC)
Research Analyst, New York Yankees

About

How do we build AI that operates reliably in high-stakes environments?

I build AI systems for environments where mistakes matter. I usually split the problem into two parts. First, extract a clean state description from messy sensors—video, audio, IMU, networks. Then reason over that state to explain what is happening, predict what will happen under interventions, and decide when the system should act. The core issue is not just accuracy. It is whether the system knows when its output is solid and when it is out of its depth.

Sensors (video, audio, physiological, LiDAR, text, RF) → Causal and Social Structure (causal graphs, social dynamics, behavioral patterns) → Competency and Reliability Checks (knowing when the system is out of its depth) → Human-in-the-Loop Decision and Oversight (justified high-stakes output)
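The pipeline above can be caricatured in a few lines of code. This is a minimal illustrative sketch, not any lab system; the function names (`estimate_state`, `decide`) and the fixed confidence threshold are my own assumptions, standing in for a learned competency model.

```python
# Sketch of a competency-gated decision pipeline:
# perception -> state -> decide, abstaining when confidence is low.
from dataclasses import dataclass

@dataclass
class Decision:
    action: str
    confidence: float

def estimate_state(readings: dict[str, float]) -> dict[str, float]:
    # Placeholder "perception": normalize raw sensor scores into a
    # distribution over candidate states.
    total = sum(readings.values()) or 1.0
    return {k: v / total for k, v in readings.items()}

def decide(state: dict[str, float], threshold: float = 0.8) -> Decision:
    # Act on the dominant state component, but defer to a human
    # whenever confidence falls below the competency threshold.
    action, confidence = max(state.items(), key=lambda kv: kv[1])
    if confidence < threshold:
        return Decision("defer_to_human", confidence)
    return Decision(action, confidence)
```

In a real system the threshold would itself be calibrated and context-dependent; the point of the sketch is only that the abstention branch is an explicit, first-class output.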

I study this across three domains: urban AI (causal discovery, privacy-preserving sensing), social reasoning in human-machine systems (robotics, driver attention), and multimodal health sensing (behavioral biomarkers, spatial intelligence). Each domain exposes different aspects of the reliability problem.

I direct the Sensing & Reasoning Lab at Rutgers University, serve as Rutgers Site Director for CRAIG (NSF IUCRC on Responsible AI & Governance), and as Rutgers Site Lead for CS3 (NSF ERC for Smart Streetscapes). Previously, I was at IBM Research and several startups. PhD from UC Berkeley, BS from MIT.

External engagement: Research Analyst for the New York Yankees, applying computer vision to baseball operations.

Research Program

My lab builds competency-aware AI that can explain when its conclusions hold, in cities and health settings where reliability is not optional.

Urban AI

Urban environments generate sensor data at unprecedented scale, but data alone cannot distinguish correlation from causation. Decision-makers need causal understanding to predict intervention effects.

Systems: TeLLMe (causal discovery with self-confidence assessment), CityOS (privacy-preserving urban sensing with formal guarantees)
Deployed across CS3 testbeds in NYC, Rutgers, and West Palm Beach

Social Reasoning in Human-Machine Systems

How can robots understand social contexts when social reasoning requires common sense that is difficult to program explicitly? And how do we know when social inferences are reliable enough to act on?

Systems: VLM-based social HRI (interpreting nonverbal cues), SoNNET (bite timing in robot-assisted feeding), Project Paz (driver attention and interruption timing)
Published at CoRL, IMWUT; collaboration with Cornell on caregiving robotics

Multimodal Spatial Reasoning

Instrumented spaces generate diverse sensor data. Spatial intelligence requires extracting what a space affords while handling distribution shift—similar events producing different readings across time and location.

Systems: DFGauss (3D occupancy prediction, NeurIPS 2025), Maestro (18-channel multisensor platform), CAMERA (NIH BRAIN Initiative, behavioral biomarkers)
$5M NIH funding; deployed across multiple campus buildings

Centers & Initiatives

Sensing & reasoning at different scales—from governance frameworks to city infrastructure to individual interactions.

NSF IUCRC · GOVERNANCE

CRAIG — Responsible AI & Governance

Role — Rutgers Site Director

When should AI systems act? Developing theoretical foundations and evaluation pipelines for AI that knows its boundaries.

NSF ERC · URBAN SCALE

CS3 — Smart Streetscapes

Role — Rutgers Site Lead

City-scale sensing and causal reasoning for pedestrian safety. Real-world deployment of urban AI systems.

RESEARCH LAB · CORE

Sensing & Reasoning Lab

Role — Director

Core research on perception, causal inference, and self-assessment. The intellectual engine for all three research pillars.

APPLIED · SPORTS

New York Yankees

Role — Research Analyst

Sensing & reasoning applied to baseball: biomechanics from video, strategy analysis, performance modeling.

Selected Publications

Full list on Google Scholar.

Future Directions

The long-term goal is AI systems that discover causal, social, and spatial structure, know where that structure applies, and can communicate those boundaries clearly to humans and organizations that rely on them.

01
Formal frameworks for representing when AI conclusions are reliable across diverse task spaces
02
Methods for systems to learn their own boundaries from experience and feedback
03
Interfaces that make these boundaries interpretable to stakeholders deciding when to rely on AI

This vision shapes my research portfolio over the next 5-10 years through CRAIG (responsible AI governance), CS3 (urban deployment), and NIH CAMERA (health sensing)—three funded initiatives where reliable AI is central.

News

Jan '26
New preprint: "Explicit Abstention Knobs for Predictable Reliability in Video Question Answering" — selective prediction for reliable VLMs.
2025
Appointed Rutgers Site Director for CRAIG (NSF IUCRC on Responsible AI & Governance).
2025
Rutgers Site Lead for CS3 (NSF ERC for Smart Streetscapes).
Sep '25
Three NeurIPS 2025 papers accepted — one main conference (DFGauss), two workshops (TellMe Why, PolicyGrid).

Media & Press

WALL STREET JOURNAL

The Overestimation of Artificial Superintelligence

Interviewed about AI capabilities and the hype around superintelligence

NEW SCIENTIST

Robot that learns social cues could feed people with tetraplegia

Coverage of our CoRL 2022 work on bite timing prediction for robot-assisted feeding in group dining settings (July 2022)

Teaching

Selected courses

ECE 252 — Programming Methodology

Comprehensive introduction to programming in C and C++ for ECE students, covering foundational concepts through advanced topics including object-oriented programming, modern C++ features, and multithreading.

ECE 532 — Multimodal Learning for Sensing Systems

In-depth exploration of multimodal learning across audio, video, time series, and language/text data, focusing on foundational concepts, advanced techniques, and practical applications for distributed sensing systems.

Contact