I'm a researcher and engineer whose work spans machine learning, high-performance computing, and full-stack development.
My research focuses on generative modeling — I contributed to OmniFlow, a multi-modal rectified flow model accepted to CVPR 2025. I'm also interested in LLM evaluation, probabilistic inference, and parallel computing.
When I'm not in research mode, I build software products and explore ideas at the intersection of AI and developer tooling.
Any-to-any multi-modal generative model supporting text, image, and audio. Accepted to CVPR 2025.
Benchmark for AI-assisted academic peer review, using LLM-as-a-judge evaluation on ICML OpenReview data.
Full-stack platform for indie makers. Go/Echo backend, React frontend, PostgreSQL, deployed on GKE.
Multi-model pipeline for automated analysis and critique of research papers.
Deep dive into global memory access patterns and how coalescing affects throughput on modern NVIDIA GPUs.
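The arithmetic behind coalescing can be sketched in a few lines. This is an illustrative model only, not the post's code: it assumes 4-byte elements, 32-thread warps, and 128-byte aligned memory transactions, and counts how many distinct segments one warp's loads touch for a given stride.

```python
def warp_transactions(stride_elems, elem_bytes=4, warp_size=32, seg_bytes=128):
    """Count distinct 128-byte segments touched by one warp's loads.

    Simplified model (assumption, not vendor-documented behavior):
    thread t loads element t * stride_elems, and the hardware services
    the warp in 128-byte aligned segments.
    """
    segments = {(t * stride_elems * elem_bytes) // seg_bytes
                for t in range(warp_size)}
    return len(segments)

# Coalesced: unit stride -> 32 floats fit in one 128-byte segment.
print(warp_transactions(1))   # 1
# Strided: stride 32 -> every thread lands in its own segment.
print(warp_transactions(32))  # 32
```

The gap between those two numbers (1 vs. 32 transactions for the same amount of useful data) is why access pattern, not just arithmetic intensity, often dominates kernel throughput.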
Using language models to evaluate language models introduces systematic biases that most papers ignore.
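One such bias is easy to probe: position bias, where the judge favors whichever answer is presented first. A minimal sketch of the check, where `judge` is a hypothetical stand-in for any pairwise LLM judge:

```python
def position_flip_rate(judge, pairs):
    """Fraction of pairs where the judge's verdict flips when the two
    candidate answers are shown in swapped order.

    `judge(a, b)` is a hypothetical callable returning "A" or "B" for
    whichever presented answer it prefers. A consistent judge picks the
    same underlying answer in both orderings, i.e. "A" then "B".
    """
    flips = 0
    for a, b in pairs:
        first = judge(a, b)   # a presented as option "A"
        second = judge(b, a)  # order swapped: b is now option "A"
        # Same label across both orderings means the judge tracked the
        # slot, not the answer -- that is a position-biased flip.
        if first == second:
            flips += 1
    return flips / len(pairs)

# Toy judge that always prefers whichever answer is shown first.
biased = lambda a, b: "A"
print(position_flip_rate(biased, [("x", "y"), ("p", "q")]))  # 1.0
```

Reporting a flip rate like this alongside headline accuracy is a cheap sanity check that many LLM-as-a-judge evaluations skip.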
Walkthrough of the rectified flow framework and why it produces straighter ODE trajectories than DDPM.
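The core objects of that framework fit in three lines (standard rectified flow notation, not taken from this post). Given endpoint samples $x_0 \sim \pi_0$ and $x_1 \sim \pi_1$, the linear interpolation

```latex
x_t = (1-t)\,x_0 + t\,x_1, \qquad \frac{dx_t}{dt} = x_1 - x_0, \qquad t \in [0,1]
\]
\[
\min_\theta \; \mathbb{E}_{t,\,x_0,\,x_1}\,\bigl\| v_\theta(x_t, t) - (x_1 - x_0) \bigr\|^2
```

has a constant velocity along each path, and the model $v_\theta$ is regressed onto it. Sampling then integrates the ODE $dx/dt = v_\theta(x, t)$; because the target paths are straight lines, a few Euler steps can suffice, whereas DDPM's trajectories are curved and need many more steps.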