I am a PhD student in the Machine Learning Department at Carnegie Mellon University supervised by Zico Kolter. Previously, I did my master’s at the University of Alberta, where I worked with Marc Lanctot on reinforcement learning in games and with Martha White on model-based reinforcement learning. Before that, I did my undergraduate at Swarthmore College, where I played lacrosse, studied mathematics and physics, and worked with Bryce Wiedenbeck on empirical game theory.
The best way to reach me is at
Solving Common-Payoff Games with Approximate Policy Iteration
Samuel Sokota,* Edward Lockhart,* Finbarr Timbers, Elnaz Davoodi, Ryan D’Orazio, Neil Burch, Martin Schmid, Michael Bowling, Marc Lanctot
[Paper] [Thesis] [Code] [Tiny Hanabi]
Procedure for computing joint policies that combines deep dynamic programming with a common-knowledge approach.
Selective Dyna-style Planning Under Limited Model Capacity
Zaheer Abbas, Samuel Sokota, Erin J. Talvitie, Martha White
Investigates the effects of non-realizability on uncertainty quantification in model-based reinforcement learning.
Simultaneous Prediction Intervals for Patient-Specific Survival Curves
Samuel Sokota,* Ryan D’Orazio,* Khurram Javed, Humza Haider, Russell Greiner
Heuristic methods for estimating simultaneous prediction intervals from samples.
Learning Deviation Payoffs in Simulation-Based Games
Samuel Sokota, Caleb Ho, Bryce Wiedenbeck
Procedure for estimating approximate (epsilon-)Nash equilibria in large role-symmetric simulation-based games.