ICML 2016 reinforcement-learning-related papers
- Guided Cost Learning: Deep Inverse Optimal Control via Policy Optimization
- Doubly Robust Off-policy Value Evaluation for Reinforcement Learning
- Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning
- Learning Simple Algorithms from Examples
- Stability of Controllers for Gaussian Process Forward Models
- Smooth Imitation Learning for Online Sequence Prediction
- On the Analysis of Complex Backup Strategies in Monte Carlo Tree Search
- Benchmarking Deep Reinforcement Learning for Continuous Control
- Cumulative Prospect Theory Meets Reinforcement Learning: Prediction and Control
- Why Most Decisions Are Easy in Tetris—And Perhaps in Other Sequential Decision Problems, As Well
- Opponent Modeling in Deep Reinforcement Learning
- Softened Approximate Policy Iteration for Markov Games
- Graying the black box: Understanding DQNs
- Asynchronous Methods for Deep Reinforcement Learning
- Dueling Network Architectures for Deep Reinforcement Learning
- Differentially Private Policy Evaluation
- Data-Efficient Off-Policy Policy Evaluation for Reinforcement Learning
- Hierarchical Decision Making In Electricity Grid Management
- Generalization and Exploration via Randomized Value Functions
- Model-Free Imitation Learning with Policy Optimization
- Control of Memory, Active Perception, and Action in Minecraft
- Continuous Deep Q-Learning with Model-based Acceleration
- Near Optimal Behavior via Approximate State Abstraction
- Model-Free Trajectory Optimization for Reinforcement Learning