Samuel Sokota

I am a PhD student in the Machine Learning Department at Carnegie Mellon University, supervised by Zico Kolter. Previously, I did my master's at the University of Alberta, where I worked with Marc Lanctot on reinforcement learning in games and with Martha White on model-based reinforcement learning. Before that, I did my undergraduate degree at Swarthmore College, where I played lacrosse, studied mathematics and physics, and worked with Bryce Wiedenbeck on empirical game theory.

The best way to reach me is at ssokota(at)andrew(dot)cmu(dot)edu.

Publications

Solving Common-Payoff Games with Approximate Policy Iteration
Samuel Sokota,* Edward Lockhart,* Finbarr Timbers, Elnaz Davoodi, Ryan D’Orazio, Neil Burch, Martin Schmid, Michael Bowling, Marc Lanctot
AAAI 2021
[Paper] [Thesis] [Code] [Tiny Hanabi]
A procedure for computing joint policies that combines deep dynamic programming with a common-knowledge approach.

Selective Dyna-style Planning Under Limited Model Capacity
Zaheer Abbas, Samuel Sokota, Erin J. Talvitie, Martha White
ICML 2020
[Paper]
Investigates the effects of non-realizability on uncertainty quantification in model-based reinforcement learning.

Simultaneous Prediction Intervals for Patient-Specific Survival Curves
Samuel Sokota,* Ryan D’Orazio,* Khurram Javed, Humza Haider, Russell Greiner
IJCAI 2019
[Paper] [Code]
Heuristic methods for estimating simultaneous prediction intervals from samples.

Learning Deviation Payoffs in Simulation-Based Games
Samuel Sokota, Caleb Ho, Bryce Wiedenbeck
AAAI 2019
[Paper]
A procedure for estimating epsilon-Nash equilibria in large role-symmetric simulation-based games.