email: {lastname}(dot){firstname}001(at)gmail(dot)com
i'm a cs and math double major at uva (class of 2027). i really love solving puzzles of all kinds. i work across a variety of fields, from typical full-stack work at Forge (where i'm director of engineering) to replicating mech interp papers for my research role at uva.
i've published at emnlp 2024 and aaai 2026, and my current research is calibration work on introspective awareness in large language models.
i used to be an avid ctf player, and you can find some of my writeups in the archive on my blog and in my github repository graveyard. i still think ctfs are really fun, but other things are more interesting to me these days.
i'll be interning at google in sunnyvale this summer, so if you're in the area, feel free to send me an email and we can grab a coffee.
- sparse malicious finetuning - how small amounts of malicious supervised fine-tuning (SFT) can flip safety-aligned LMs from refusal to compliance. this result essentially got published in a much more robust form by anthropic while we were working on it, but it was still a fun project.
- spmspmmul - the code and a writeup of how we wrote a really fast sparse-matrix multiplication kernel. a tiny illustrative sketch of the idea follows this list.
- zero-shot-realignment - we replicate the results from here and here on gemma-4-e4b-it for the first time.
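
for context on the spmspmmul bullet: judging from the name, the kernel computes a sparse-times-sparse product. here's a tiny, unoptimized python sketch of what that computation is (a gustavson-style row-by-row CSR product). the function name and CSR layout here are just my own illustration, not the repo's actual code or interface -- the real kernel is all about making this fast.

```python
# minimal sketch of a CSR x CSR sparse matrix product (gustavson's algorithm).
# illustrative only; the names here are made up for this example.

def spmspm(a_indptr, a_indices, a_data, b_indptr, b_indices, b_data):
    """multiply two CSR matrices, returning the result in CSR form."""
    c_indptr, c_indices, c_data = [0], [], []
    accum = {}  # sparse accumulator for one output row
    for row in range(len(a_indptr) - 1):
        accum.clear()
        # for each nonzero a[row, k], scatter a[row, k] * b[k, :] into the accumulator
        for idx in range(a_indptr[row], a_indptr[row + 1]):
            k, a_val = a_indices[idx], a_data[idx]
            for jdx in range(b_indptr[k], b_indptr[k + 1]):
                j, b_val = b_indices[jdx], b_data[jdx]
                accum[j] = accum.get(j, 0.0) + a_val * b_val
        for j in sorted(accum):
            c_indices.append(j)
            c_data.append(accum[j])
        c_indptr.append(len(c_indices))
    return c_indptr, c_indices, c_data

if __name__ == "__main__":
    # 2x2 identity times [[3, 4], [0, 5]], both in CSR form
    i_ptr, i_idx, i_dat = [0, 1, 2], [0, 1], [1.0, 1.0]
    b_ptr, b_idx, b_dat = [0, 2, 3], [0, 1, 1], [3.0, 4.0, 5.0]
    print(spmspm(i_ptr, i_idx, i_dat, b_ptr, b_idx, b_dat))
    # -> ([0, 2, 3], [0, 1, 1], [3.0, 4.0, 5.0])
```

the slow parts of this naive version (the python-level loops and the per-row accumulator) are exactly what a fast kernel replaces with tight, cache-aware native code.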


