I'm a research engineer focused on interpretability and ML infrastructure. Currently finishing my MSCS at Georgia Tech (ML & Computing Systems) while doing alignment research with SPAR and MARS. Previously a Senior SWE & Team Lead for 7 years. My work spans mechanistic interpretability, GPU systems, and building tooling that makes safety research possible at scale.
- Seattle
- tylercrosse.com
- @tyler_crosse
Pinned
- mars-attacks: Dynamic Attack Selection in Agentic AI Control Evaluations (Python)
- LLM_morality (fork of liza-tennant/LLM_morality): Building off of Liza Tennant's Moral Alignment for LLM Agents (Python)
- goal-drift-gym: Harness for measuring and analyzing goal drift in multi-step decision-making agents (Python)
- goal-drift-evals (fork of RaunoArike/goal-drift-evals): Building on Rauno Arike's goal-drift evaluations (Python)