@inproceedings{bajaj2023taskphasing,
  author = {Vaibhav Bajaj and Guni Sharon and Peter Stone},
  title = {Task Phasing: Automated Curriculum Learning from Demonstrations},
  booktitle = {Proceedings of the 33rd International Conference on Automated Planning and Scheduling (ICAPS 2023)},
  location = {Prague, Czech Republic},
  month = {July},
  year = {2023},
  pages = {},
  abstract = {Applying reinforcement learning (RL) to sparse reward domains is
  notoriously challenging due to insufficient guiding signals. Common RL
  techniques for addressing such domains include (1) learning from
  demonstrations and (2) curriculum learning. While these two approaches have
  been studied in detail, they have rarely been considered together. This paper
  aims to do so by introducing a principled task phasing approach that uses
  demonstrations to automatically generate a curriculum sequence. Using inverse
  RL from (suboptimal) demonstrations we define a simple initial task. Our task
  phasing approach then provides a framework to gradually increase the
  complexity of the task all the way to the target task, while retuning the RL
  agent in each phasing iteration. Two approaches for phasing are considered:
  (1) gradually increasing the proportion of time steps an RL agent is in
  control, and (2) phasing out a guiding informative reward function. We
  present conditions that guarantee the convergence of these approaches to an
  optimal policy. Experimental results on 3 sparse reward domains demonstrate
  that our task phasing approaches outperform state-of-the-art approaches with
  respect to asymptotic performance.},
  wwwnote = {Accompanying code},
}