CS6700 - Reinforcement learning

Course Data

  • The Reinforcement Learning problem: evaluative feedback, non-associative learning, rewards and returns, Markov Decision Processes, value functions, optimality and approximation.
  • Dynamic programming: value iteration, policy iteration, asynchronous DP, generalized policy iteration.
  • Monte Carlo methods: policy evaluation, rollouts, on-policy and off-policy learning, importance sampling.
  • Temporal Difference learning: TD prediction, optimality of TD(0), SARSA, Q-learning, R-learning, games and afterstates.
  • Eligibility traces: n-step TD prediction, TD(lambda), forward and backward views, Q(lambda), SARSA(lambda), replacing traces and accumulating traces.
  • Function approximation: value prediction, gradient-descent methods, linear function approximation, ANN-based function approximation, lazy learning, instability issues.
  • Policy gradient methods: non-associative learning, the REINFORCE algorithm, exact gradient methods, estimating gradients, approximate policy gradient algorithms, actor-critic methods.
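Of the algorithms named above, Q-learning is among the simplest to illustrate. The following is a minimal, self-contained sketch of tabular Q-learning with epsilon-greedy exploration on a hypothetical 6-state chain MDP; the environment, state/action layout, and all hyperparameters are illustrative assumptions, not course material.

```python
# Illustrative tabular Q-learning sketch on a hypothetical 1-D chain MDP.
# States 0..5; actions move left or right; reaching state 5 gives reward +1
# and ends the episode. All values here are assumptions for demonstration.
import random

N_STATES = 6          # states 0 .. 5; state 5 is terminal
ACTIONS = [-1, +1]    # move left or move right
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

Q = [[0.0 for _ in ACTIONS] for _ in range(N_STATES)]

def step(state, action_idx):
    """Deterministic transition; returns (next_state, reward, done)."""
    nxt = min(max(state + ACTIONS[action_idx], 0), N_STATES - 1)
    done = (nxt == N_STATES - 1)
    return nxt, (1.0 if done else 0.0), done

for episode in range(500):
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection
        if random.random() < EPSILON:
            a = random.randrange(len(ACTIONS))
        else:
            a = max(range(len(ACTIONS)), key=lambda i: Q[s][i])
        s2, r, done = step(s, a)
        # Q-learning update: bootstrap from the greedy value of the next state
        target = r + (0.0 if done else GAMMA * max(Q[s2]))
        Q[s][a] += ALPHA * (target - Q[s][a])
        s = s2

# Greedy state values should increase monotonically toward the goal state.
print([round(max(q), 3) for q in Q])
```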

Note: The pre-requisite was updated to MA2040 from the Jul 2018 offering onwards.

Pre-Requisites

  • MA2040 (from the Jul 2018 offering onwards)
  • None (earlier offerings)

Parameters

Credits   Type       Date of Introduction
3-1-0-4   Elective   Aug 2007

Previous Instances of the Course

