The pragmatic test for perception is not benchmark accuracy but operational reliability: does the system still work when inputs are degraded, geometry is extreme, or language is noisy and ambiguous? This direction studies inference across vision, language, and signals as they actually arrive in the field. Recent work: license plate recoverability under extreme angles, critical information extraction from distress communications, and implicit entity recognition in narratives.
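One way the recoverability question can be sketched as a toy stress test (all names here are hypothetical, not the project's actual code): degrade an observation progressively, then check whether a recovery routine can still match it against a known registry. Here a plate string loses characters to masking, and recovery is nearest-neighbor matching by edit distance.

```python
import random


def degrade(text: str, drop_rate: float, seed: int = 0) -> str:
    """Simulate a degraded observation by masking characters with '?'."""
    rng = random.Random(seed)
    return "".join("?" if rng.random() < drop_rate else c for c in text)


def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via the standard two-row dynamic program."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[-1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]


def recover(observed: str, registry: list[str]) -> str:
    """Recover the most plausible plate: nearest registry entry by edit distance."""
    return min(registry, key=lambda plate: edit_distance(observed, plate))


registry = ["7ABC123", "9QRS456", "8XYZ999"]
noisy = degrade("7ABC123", drop_rate=0.3)
print(noisy, "->", recover(noisy, registry))
```

The operational-reliability question is then quantitative: at what `drop_rate` does recovery stop working, and how gracefully does it fail?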
The pragmatic test for generative AI is not output quality in the abstract but fitness to a stated purpose: does the molecule meet its detonation target, does the image match the steering preference, does the code satisfy its contract? This direction studies generation where the specification is a hard constraint, not a soft preference. Active projects: domain-gated latent diffusion for energetic materials, interactive diffusion steering, and speculative code generation.
The pragmatic test for multi-agent AI is not protocol elegance but what the collective actually achieves: can agents coordinate without communication, evaluate without ground truth, and reason without shared state? This direction studies coordination and reasoning under the strictest informational constraints. Active projects: zero-knowledge swarm coordination, self-evaluating model ensembles, and a collaborative reasoning framework.
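Evaluation without ground truth can be sketched in a few lines (hypothetical names, assuming agreement as the proxy signal): an ensemble's members each answer independently, and the collective scores itself by the fraction of members backing the majority answer, with no reference label involved.

```python
from collections import Counter


def self_evaluate(answers: list[str], threshold: float = 0.5):
    """Score an ensemble's output by agreement alone.

    Returns the majority answer and the fraction of members that produced it;
    below the agreement threshold, the ensemble abstains (returns None).
    """
    counts = Counter(answers)
    answer, votes = counts.most_common(1)[0]
    agreement = votes / len(answers)
    return (answer, agreement) if agreement >= threshold else (None, agreement)


# Hypothetical outputs from five ensemble members on the same query.
answers = ["Paris", "Paris", "Lyon", "Paris", "Paris"]
verdict, score = self_evaluate(answers)
```

The abstention branch is the interesting part: when agreement is weak, the ensemble reports that it cannot evaluate itself rather than committing to a low-confidence answer.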