Featuring Offline Reinforcement Learning! Part 1
3 main points
✔️ Offline RL learns policies using only previously collected data
✔️ Offline RL is expected to have a variety of applications in healthcare, robotics, and other fields
✔️ The main problem in offline RL is distribution shift
Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems
written by Sergey Levine, Aviral Kumar, George Tucker, Justin Fu
(Submitted on 4 May 2020)
Subjects: Machine Learning (cs.LG), Artificial Intelligence (cs.AI), Machine Learning (stat.ML)
In recent years, attention has been shifting from online methods, where data is collected at the same time as learning, toward reinforcement learning that uses only previously collected data, and a great deal of research has been devoted to this area, known as offline reinforcement learning (offline RL). Offline RL has the advantage that it can learn from large datasets collected in the past, avoiding the often very time-consuming process of acting in the environment and collecting data during training. It is gaining attention because of its potential to be effective in a variety of fields, including healthcare, education, and robotics.
However, due to various problems, offline RL has not yet realized this potential, and much research has been carried out to address them. In this series on offline RL, we will discuss what these problems are, what research has been done so far, and the future prospects for offline RL. In this first article, we explain in detail what offline RL is, where it can be applied, and why it is so difficult.
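To make the setting concrete, the sketch below shows offline RL in its simplest form: value iteration-style Q-learning run over a fixed batch of logged transitions, with no further environment interaction. The tiny two-state MDP and the logged dataset here are illustrative assumptions of ours, not an example from the paper.

```python
import numpy as np

n_states, n_actions, gamma = 2, 2, 0.9

# A fixed batch of logged transitions (s, a, r, s') collected beforehand
# by some behavior policy -- the agent never queries the environment again.
dataset = [
    (0, 0, 0.0, 1), (0, 1, 1.0, 0),
    (1, 0, 0.0, 0), (1, 1, 2.0, 1),
]

# Repeatedly sweep over the fixed batch, applying the standard
# Q-learning update to each logged transition.
Q = np.zeros((n_states, n_actions))
alpha = 0.5
for _ in range(500):
    for s, a, r, s_next in dataset:
        target = r + gamma * Q[s_next].max()
        Q[s, a] += alpha * (target - Q[s, a])

# A greedy policy recovered purely from the offline data.
greedy_policy = Q.argmax(axis=1)
print(greedy_policy)
```

Because every state-action pair appears in this toy dataset, learning works well; the difficulty the article turns to next arises precisely when the learned policy visits state-action pairs the logged data does not cover (distribution shift).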