Jon Perreira

Senior Director of AI Research and Red Team Engineering & Architecture

Jon Perreira serves as Senior Director of AI Research and Red Team Engineering & Architecture, specializing in adversarial testing, safety evaluation, and analysis of AI systems. His career trajectory reflects a deliberate evolution from experimental science to applied AI safety leadership.

Perreira’s foundational training spans biotechnology, computer science, and engineering. He conducted undergraduate research in neurobiology, with an emphasis on the evolutionary mechanisms of animal behavior, and advanced research in bacterial and viral host–pathogen ecology, using molecular methods to assess real-time gene expression data and to analyze the resulting infectious disease pathology in animal models.

Following his formal experimental research, Perreira served as a lead expert in STEM-based model training, complex inference analysis, and prompt engineering, applying scientific rigor to post-training evaluation and alignment across a diverse range of AI systems and models.

Today, Perreira applies this multidisciplinary expertise to lead comprehensive red teaming and AI system evaluation initiatives. He designs threat models and conducts detailed adversarial analyses to surface potential system vulnerabilities, and he develops adversarial testing protocols that pair architectural interaction models with rigorous statistical analysis and data modeling to evaluate model behavior under challenging conditions. His work focuses on systematically testing adversarial resilience, identifying critical failure modes, and implementing scalable safety evaluation frameworks designed for high-stakes AI deployment environments.

See What Jon Perreira Has Written

Beyond Traditional Testing: Advanced Methodologies for Evaluating Modern AI Systems

As AI systems continue to demonstrate ever more complex behaviors and autonomous capabilities, our evaluation methodologies must adapt to match these emergent properties if we are to safely govern these systems without hindering their potential.