Hi! I'm Preston, a third-year undergrad at UC Berkeley. I am also a researcher at Berkeley Artificial Intelligence Research advised by Sergey Levine. Previously, I've been advised by Dan Klein, Trevor Darrell, and Alexandre Bayen.
I'm interested in building intelligent systems that can efficiently learn general skills.

News
- Sep 2025: New paper on compute-optimal scaling for value-based RL is out!
- May 2025: I'm an Accel Scholar!
- May 2025: New paper on scaling laws for value-based RL is accepted to ICML!
Research
Compute-Optimal Scaling for Value-Based Deep RL Sep 2025
Preston Fu, Oleh Rybkin, Zhiyuan Zhou, Michal Nauman, Pieter Abbeel, Sergey Levine, Aviral Kumar
We analyze the interplay of model size, UTD ratio, and batch size. For small models, reducing the training error by increasing the batch size worsens generalization; we trace this effect to poor-quality TD targets. Leveraging this analysis, we fit the best batch size and derive and empirically verify the budget-optimal data-compute tradeoff.

Value-Based Deep RL Scales Predictably May 2025
Oleh Rybkin, Michal Nauman, Preston Fu, Charlie Snell, Pieter Abbeel, Sergey Levine, Aviral Kumar
International Conference on Machine Learning (ICML), 2025
ICLR Robot Learning Workshop, 2025 (oral presentation)
We build empirical models of the data-compute Pareto frontier, the optimal resource allocation across data and compute, and hyperparameter dependencies for value-based RL. From small-scale runs, we extrapolate to larger data and compute budgets and predict the resulting performance.

Reasoning and Tools for Human-Level Forecasting Aug 2024
Elvis Hsieh*, Preston Fu*, Jonathan Chen*
NeurIPS Math-AI Workshop, 2024
EMNLP Future Event Detection Workshop, 2024 (oral presentation)
We propose a framework of reasoning-and-acting (ReAct) agents for competitive forecasting platforms that retrieve information and run numerical simulations. Our system achieves performance on par with human competitors.
