I worked full-time at Apple, completed 28 credits at ETH Zürich (~a full semester), went to the gym 4× a week, and organized the Zurich OpenAI Robotics Hack (raising ~$30K in commitments in one week). This required working 12–14 hours every single day for 3 months.
Working on robotics in Oct 2025, I was puzzled that VLAs use a VLM backbone.
Thus, I started working on world models after my exams. Iterating on robots is very slow
(requires real-world deployment), so I use CS:GO as my environment.
I'm scraping game state (.dem) from FaceIt Pro matches,
rendering gameplay, and training action-conditioned video diffusion /
inverse dynamics / V-JEPA-style models.
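As a toy illustration of the action-conditioned setup (my framing of the idea, not the actual training code): predict frame t+1 from frame t plus the action taken. The sketch below uses synthetic linear dynamics and a least-squares fit in place of a diffusion model; all dimensions and names are assumptions.

```python
import numpy as np

# Toy action-conditioned next-frame predictor (assumed setup, not the real
# pipeline): the model sees (frame_t, action_t) and must predict frame_{t+1}.
rng = np.random.default_rng(0)
D, A, N = 16, 4, 512                    # frame dim, action dim, samples

W_true = rng.normal(size=(D + A, D)) * 0.1   # synthetic ground-truth dynamics
frames = rng.normal(size=(N, D))
actions = rng.normal(size=(N, A))

X = np.concatenate([frames, actions], axis=1)  # condition on frame AND action
next_frames = X @ W_true                       # synthetic "environment"

# Least-squares fit stands in for training; a real world model would be a
# video diffusion / JEPA-style network trained on rendered gameplay.
W_hat, *_ = np.linalg.lstsq(X, next_frames, rcond=None)
pred = X @ W_hat
mse = float(np.mean((pred - next_frames) ** 2))
print(f"train MSE: {mse:.2e}")
```

The point of the exercise: without the action channel, the predictor cannot disambiguate futures; with it, even this linear model recovers the dynamics exactly.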
Updates on my X.
Binh and I were the first team in the world to deploy TWIST2 full-body teleop: we fine-tuned GR00T N1.5 on full-body data and ran low-latency inference on a Unitree G1 (which we got through a LinkedIn post). Learnt how far behind open source is, how bad teleop is, how poorly VLAs generalize, and how bad deployment testing is.
Coded up & trained on an 8×A100 cluster following Karpathy's tutorial during exam revisions. Much more interesting than exam prep.
Built an HTML editor for PMs to localize software update information. Integrated LLMs to automate the localization workflow. During n&w, completed 28 ETH credits, organized the Zurich Robotics Hack with OpenAI, and went to the gym 4×/week.
Shipped a metric ranking the network risk of every AWS network device. Deployed 3 services each ingesting 1M requests/day; designed for high scalability using DynamoDB, Lambda, SQS/SNS, batching.
At 16, built their iOS app in 2 months. Login/auth, streaming biometrics; helped them raise 100K+ CHF in grants.
Implemented a distributed SpGEMM algorithm in C++ & MPI using Allgather(v), Alltoall(v). Devised a bloom-filter protocol to minimize column-sharing communication — 33% speed-up, beating Eigen in some cases.
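The bloom-filter idea in miniature (an illustrative sketch, not the course code): each rank advertises a compact filter over the column indices it actually needs, so peers can skip sending rows that definitely won't be used. Sizes and hashing choices below are assumptions.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: no false negatives, rare false positives.
    In the SpGEMM protocol, a rank would broadcast this instead of its
    full column-index set to cut communication volume."""

    def __init__(self, m: int = 1024, k: int = 3):
        self.m, self.k = m, k
        self.bits = bytearray(m)

    def _hashes(self, item):
        # k independent hashes via salted blake2b (one assumed choice).
        for i in range(self.k):
            h = hashlib.blake2b(f"{i}:{item}".encode(), digest_size=8)
            yield int.from_bytes(h.digest(), "big") % self.m

    def add(self, item):
        for h in self._hashes(item):
            self.bits[h] = 1

    def might_contain(self, item) -> bool:
        return all(self.bits[h] for h in self._hashes(item))

# A rank registers the columns it needs...
needed_cols = BloomFilter()
for col in (3, 17, 42):
    needed_cols.add(col)

# ...and a peer tests before sending. Added columns always pass;
# unrelated columns almost always fail, so their rows are never shipped.
print(needed_cols.might_contain(17))
print(needed_cols.might_contain(999))
```

A false positive only costs one unnecessary row transfer; a false negative would break correctness, which is why a Bloom filter (no false negatives) fits this protocol.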
Kernel-caching, AVX2 SIMD vectorization, fused multiply-adds, multi-accumulator designs — ~5.5× speed-up over baseline for the linear layer; also a kernel-free approach to reduce memory requirements.
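The multi-accumulator trick, sketched in Python for clarity (the real kernel was C with AVX2 intrinsics; this only shows the idea): split a reduction into several independent partial sums so consecutive fused multiply-adds don't form one long dependency chain and can fill the FMA pipeline.

```python
def dot_multi_acc(a, b, lanes: int = 4) -> float:
    """Dot product with `lanes` independent accumulators.
    In the C kernel each accumulator is an AVX2 register updated by an
    FMA; independence lets the CPU overlap their multi-cycle latencies."""
    accs = [0.0] * lanes
    for i in range(len(a)):
        accs[i % lanes] += a[i] * b[i]   # round-robin over accumulators
    return sum(accs)                     # final horizontal reduction

a = [float(i) for i in range(8)]
b = [2.0] * 8
print(dot_multi_acc(a, b))  # 2*(0+1+...+7) = 56.0
```

In Python this changes nothing performance-wise; on hardware, going from one accumulator to four is often what unlocks the headline speed-up once the loads are already vectorized.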
Organized Europe's most talent-dense robotics hack — researchers (Lucas Beyer, Marc Pollefeys, Marco Hutter), frontier companies (OpenAI, Mimic, Loki, Prime Intellect) and exceptional builders (ex-NASA, ex-NVIDIA, ex-DeepMind). Followed up with a simulation hackathon co-hosted with NVIDIA, Flexion Robotics, and Jua.ai.
Podcast interviewing exceptional ETH Zürich & EPFL students to normalize ambition — ~1.8K subscribers. Closed a $2.5K partnership from Founderful Campus. Automated blog-writing with LLMs, then stopped posting to focus on building.
Organized the first editions of AI Tinkerers in Zurich (~400 participants from DeepMind, Nvidia, Extropic, NASA, Isomorphic Labs). Stopped when the vibe drifted toward networking vs. building.
1 of 10 grantees selected in 2025.
1 of 30 exceptional technical builders selected to spend summer in a Finnish hacker hotel. Worked on real-time human diffusion models.
Project backlog — projects I'd like to build eventually.
Collect 3D data using two iPhones with genlock; fit a Gaussian Avatar (SMPL-X + Gaussians on vertices); train a diffusion transformer conditioned on audio for face + body motion. Found digital avatars a bit dystopian, so I didn't proceed — but the tech is very cool.
Scale hierarchical 3DGS with a retrieval-optimized "3DGS database" enabling out-of-core streaming (world-scale 3DGS, multi-client serving). DJI & others beat me to it.