The Story

The flight path was anything but linear.

Pilot → CPA → Data Scientist → AI Research.

The Beginning

The Cockpit


I got my pilot's license before most people got their first apartment. Private, commercial, instrument-rated — the whole progression, done young and done fast. There's something about flying that rewires how you think. You learn to read instruments under pressure, navigate complex systems in real time, and make decisions when the margin for error is literally zero.

The cockpit taught me that confidence isn't about knowing everything — it's about knowing what matters right now and acting on it. That lesson followed me into everything I've done since.

“Turns out the job description for flying — navigate complex systems, read instruments under pressure, make decisions with zero margin for error — is also the job description for building AI.”

Flight Training

Before AI, There Was the Cockpit


~200 hours in the cockpit taught me how to think under pressure: cross-country flights at 12,000 feet, stall training, and night flying where your instruments matter more than intuition.

That’s where System 1 vs. System 2 became practical, not theoretical — recognize fast when instinct is useful, then deliberately switch to checklist-driven reasoning when the stakes are high.

Different domain, same operating system I use in AI today: trust instrumentation, run procedure, and design for edge cases before they become failures.

2011 — 2015

Purdue / Krannert


Purdue's Krannert School of Management taught me to break business problems into testable pieces and make decisions with data, not vibes. Case competitions were training — solve fast, defend your answer. Won a Boeing case. That moment made it clear: I didn't want to analyze systems. I wanted to build them.

2015 — 2018

Crowe → EY


I started by auditing financial statements at Crowe. The work was rigorous and high-accountability — and also repetitive enough that I kept asking how much of it could be systematized.

That question pushed me toward automation and into EY as a data engineer, where I built data pipelines that reduced manual work. This was before "data science" was part of my title, but the pattern was already there: instrument the process, and automate wherever you can measure quality.

2018 — 2021

Uber — ML at Global Scale


I joined Uber at one of the most intense moments in the company's history. The London license was threatened. Regulatory pressure was coming from every direction. The company was fighting for its survival on multiple fronts simultaneously, and the machine learning systems had to work — not in theory, not in a notebook, but in production at global scale.

That's where I learned what it really means to ship models that matter. Not models that look good in a presentation — models that have to make real decisions for millions of people, every day. The gap between "ML that works on your laptop" and "ML that works at Uber scale" is enormous, and closing it changed how I think about everything.

“The gap between ML that works on your laptop and ML that works at Uber scale is enormous. Closing it changed how I think about everything.”

2019 — 2021

Master’s in Data Science @ UC Berkeley


I did a Master's in Data Science at UC Berkeley while working full-time, focused on deep learning and scaling data systems. One project was text detoxification — building a model that could rewrite toxic text without losing its meaning.


Applied Data Science Capstone


In Berkeley’s AI-focused Master’s in Data Science program, I strengthened the full stack: machine learning foundations, experimentation design, and production-grade model evaluation.

For the capstone, I built and presented an end-to-end applied AI project that went from problem framing to model iteration to decision-ready outputs under real-world constraints.

What I learned here still drives my approach: connect research to product outcomes, make quality measurable, and design systems teams can actually ship and trust.

Meta — Messenger

Data Scientist @ Messenger


At Messenger, I shipped ML products to billions of users. The throughline was high-quality 0→1 launches that combined technical depth, product judgment, and cross-functional execution.

By 2024, I was leading MetaAI’s largest Messenger update — from Tab to Search integration and beyond.

Selected highlights

2022: Launched calling on the Facebook App, growing to 11M+ monthly active calling users.

2023: Launched Messenger in Virtual Reality for Meta Quest.

2023: Launched AI Stickers and /imagine, bringing generative creativity into everyday conversations.

2023: Launched MetaAI and AI Characters, the first LLM-powered experiences in Messenger.

2024: Launched MetaAI’s largest feature bundle update in Messenger, including MetaAI Tab and MetaAI in Search.

Meta — Superintelligence Labs

Research Data Scientist @ Meta Superintelligence Labs

Technical Guru award

At Meta Superintelligence Labs, I moved to the bleeding edge: foundation-model quality, data curation, and evaluation systems — same cockpit discipline, just at model scale.

Selected highlights

2024: Supported the launch of Llama 3 through RLHF and SFT data evaluation; defined and measured annotation quality, analyzed correlations with reward-model scores, and contributed to the official publication. Read the paper.

2024: Established the QA and data quality framework for Llama 4 post-training across Coding, Reasoning, Multilingual, Multimodal, and Voice domains, and operationalized scalable validation with annotation vendors.

2024: Enhanced Llama reward-model performance for Coding by redefining annotation standards and conducting deep analyses of vendor data pipelines.

2025: Led Llama 4 post-training data decontamination, improving spend efficiency and model reliability by identifying and removing test-set-similar data using SONAR embeddings.

2025: Built the data foundation for the next iteration of MovieGen, Meta’s video generation model, leveraging ViCLIP and the Perception Encoder to define and operationalize video data quality. Partnered across research and product teams to steer data curation toward balanced, high-quality pre-training datasets. Read the paper.

Now

Softmax

Most AI writing is either too academic or too shallow. I started Softmax for people who build real things. Strategy, systems, what actually works — written by someone who's in the trenches. Every week.

“AI strategy for people who build real things. No hype, no fluff — just what matters.”

The Newsletter

AI, piloted.

Softmax: AI strategy for people who build real things. No hype, no fluff — just what matters. Every week.

Subscribe free →
