Machine learning technique where agents learn from demonstrations
Imitation learning is a paradigm in reinforcement learning in which an agent learns to perform a task by supervised learning from expert demonstrations. It is also called learning from demonstration or apprenticeship learning.[1][2][3]
It has been applied to underactuated robotics,[4] self-driving cars,[5][6][7] quadcopter navigation,[8] helicopter aerobatics,[9] and locomotion.[10][11]
Approaches
Expert demonstrations are recordings of an expert performing the desired task, often collected as state-action pairs \((s^*_t, a^*_t)\).
Behavior Cloning
Behavior Cloning (BC) is the most basic form of imitation learning. Essentially, it uses supervised learning to train a policy \(\pi_\theta\) such that, given an observation \(o\), it outputs an action distribution \(\pi_\theta(a \mid o)\) that is approximately the same as the expert's action distribution \(\pi^*(a \mid o)\).[12]
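A minimal sketch of this supervised step, assuming a dataset of expert observation-action pairs with continuous actions; the network architecture, sizes, and names below are illustrative rather than taken from the cited works:

```python
# Minimal behavior-cloning sketch (illustrative; assumes expert_obs and
# expert_actions are float tensors of shape [N, obs_dim] and [N, act_dim]).
import torch
import torch.nn as nn

class Policy(nn.Module):
    """Small feedforward policy mapping observations to actions."""
    def __init__(self, obs_dim, act_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, act_dim),
        )

    def forward(self, obs):
        return self.net(obs)

def behavior_cloning(expert_obs, expert_actions, epochs=100, lr=1e-3):
    """Supervised regression of expert actions from expert observations."""
    policy = Policy(expert_obs.shape[1], expert_actions.shape[1])
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(policy(expert_obs), expert_actions)
        loss.backward()
        opt.step()
    return policy
```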
BC is susceptible to distribution shift. Specifically, if the trained policy differs from the expert policy, it may stray from the expert trajectory into observations that would never have occurred in the expert trajectories.[12]
This was already noted in ALVINN, a project that trained a neural network to drive a van using human demonstrations. The authors observed that, because a human driver never strays far from the path, the network would never be trained on what action to take if it ever found itself far off the path.[5]
DAgger
DAgger (Dataset Aggregation)[13] improves on behavior cloning by iteratively training on an aggregated dataset of expert-labeled observations. In each iteration \(i\), the algorithm first collects data by rolling out the learned policy \(\pi_i\). It then queries the expert for the optimal action on each observation encountered during the rollout. Finally, it aggregates the new data into the dataset \(D\) and trains a new policy \(\pi_{i+1}\) on the aggregated dataset.[12]
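A schematic sketch of this loop, assuming a hypothetical environment with reset()/step() methods, an expert(state) oracle that can be queried on arbitrary states, a learned policy callable as policy(state), and a train_policy(states, actions) supervised-learning routine such as the behavior-cloning sketch above; all names are illustrative:

```python
# DAgger loop sketch (illustrative): roll out the current policy, label the
# visited states with expert actions, aggregate, and retrain.
def dagger(env, expert, train_policy, n_iterations=10, episode_len=200):
    states, actions = [], []
    policy = None  # On the first iteration the expert itself is rolled out.
    for _ in range(n_iterations):
        state = env.reset()
        for _ in range(episode_len):
            # Act with the current learned policy (or the expert initially).
            action = expert(state) if policy is None else policy(state)
            # Label the visited state with the expert's optimal action.
            states.append(state)
            actions.append(expert(state))
            state, _, done, _ = env.step(action)
            if done:
                break
        # Aggregate the new labels into the dataset and retrain.
        policy = train_policy(states, actions)
    return policy
```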
Decision transformer
Architecture diagram of the decision transformer.
The Decision Transformer approach models reinforcement learning as a sequence modelling problem.[14] Similar to behavior cloning, it trains a sequence model, such as a Transformer, that models rollout sequences \((\hat R_1, s_1, a_1, \hat R_2, s_2, a_2, \dots, \hat R_T, s_T, a_T)\), where \(\hat R_t = \sum_{t'=t}^{T} r_{t'}\) is the sum of future rewards in the rollout. During training, the sequence model is trained to predict each action \(a_t\) given the preceding rollout as context:
\[(\hat R_1, s_1, a_1, \dots, \hat R_t, s_t) \mapsto a_t.\]
At inference time, to use the sequence model as an effective controller, it is simply given a very high reward prediction \(\hat R_1\), and it generalizes by predicting an action that would lead to that high reward. This was shown to scale predictably to a Transformer with 1 billion parameters that is superhuman on 41 Atari games.[15]
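A sketch of the inference-time use described above, assuming a trained sequence model callable as model(returns_to_go, states, actions) that returns the next action; the environment interface and the target return value are illustrative:

```python
# Decision Transformer control-by-conditioning sketch (illustrative).
def decision_transformer_rollout(model, env, target_return, episode_len=1000):
    returns_to_go = [target_return]  # Condition on a high desired return.
    states = [env.reset()]
    actions = []
    total_reward = 0.0
    for _ in range(episode_len):
        # Predict the next action from the return-conditioned trajectory.
        action = model(returns_to_go, states, actions)
        state, reward, done, _ = env.step(action)
        total_reward += reward
        # Decrease the return-to-go by the reward actually received.
        returns_to_go.append(returns_to_go[-1] - reward)
        states.append(state)
        actions.append(action)
        if done:
            break
    return total_reward
```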
Other approaches
Inverse Reinforcement Learning (IRL) learns a reward function that explains the expert's behavior and then uses reinforcement learning to find a policy that maximizes this reward.[18]
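As a sketch of the underlying idea (the notation here is illustrative and not taken from the cited sources), apprenticeship-style IRL first seeks a reward under which the expert outperforms other policies, then maximizes that reward:

```latex
% Sketch of the IRL/apprenticeship-learning idea (illustrative notation):
% find a reward R under which the expert policy \pi_E achieves at least the
% expected discounted return of any other policy, then optimize against R.
\[
  \text{find } R \ \text{s.t.}\quad
  \mathbb{E}_{\pi_E}\!\Big[\textstyle\sum_{t} \gamma^{t} R(s_t, a_t)\Big]
  \;\ge\;
  \mathbb{E}_{\pi}\!\Big[\textstyle\sum_{t} \gamma^{t} R(s_t, a_t)\Big]
  \quad \text{for all policies } \pi,
\]
\[
  \text{then}\quad
  \pi^{\star} \;=\; \arg\max_{\pi}\;
  \mathbb{E}_{\pi}\!\Big[\textstyle\sum_{t} \gamma^{t} R(s_t, a_t)\Big].
\]
```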
Generative Adversarial Imitation Learning (GAIL) uses generative adversarial networks (GANs) to match the distribution of agent behavior to the distribution of expert demonstrations.[19] It extends a previous approach using game theory.[20][16]
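As a sketch, the adversarial objective can be written as a saddle-point problem in which a discriminator \(D\) tries to tell agent state-action pairs from expert ones while the policy \(\pi\) tries to make them indistinguishable; \(\lambda\) below weights a causal-entropy regularizer, and the exact form in the cited paper may differ in details:

```latex
% GAIL-style saddle-point objective (sketch): D is a discriminator over
% state-action pairs, \pi_E the expert policy, H a causal-entropy regularizer.
\[
  \min_{\pi} \ \max_{D} \;
  \mathbb{E}_{\pi}\big[\log D(s, a)\big]
  \;+\;
  \mathbb{E}_{\pi_E}\big[\log\big(1 - D(s, a)\big)\big]
  \;-\;
  \lambda\, H(\pi)
\]
```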
References
^ Russell, Stuart J.; Norvig, Peter (2021). "22.6 Apprenticeship and Inverse Reinforcement Learning". Artificial intelligence: a modern approach. Pearson series in artificial intelligence (Fourth ed.). Hoboken: Pearson. ISBN 978-0-13-461099-3.
^ Sutton, Richard S.; Barto, Andrew G. (2018). Reinforcement learning: an introduction. Adaptive computation and machine learning series (Second ed.). Cambridge, Massachusetts: The MIT Press. p. 470. ISBN 978-0-262-03924-6.