
Can Transformer Be Applied To Reinforcement Learning?


3 main points
✔️ The Transformer is applied to reinforcement learning
✔️ GTrXL, a modified Transformer, is proposed to stabilize the learning process
✔️ It achieves performance and robustness exceeding that of LSTM

Stabilizing Transformers for Reinforcement Learning
written by Emilio Parisotto, H. Francis Song, Jack W. Rae, Razvan Pascanu, Caglar Gulcehre, Siddhant M. Jayakumar, Max Jaderberg, Raphael Lopez Kaufman, Aidan Clark, Seb Noury, Matthew M. Botvinick, Nicolas Heess, Raia Hadsell
(Submitted on 13 Oct 2019)

Comments: Accepted to ICML 2020
Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI); Machine Learning (stat.ML)
Paper 
Official Code

Introduction

The Transformer, proposed in "Attention Is All You Need," has been very successful in a variety of domains. It has a particularly large presence in natural language processing, where the performance and pace of progress of pre-trained models such as BERT, and most recently GPT-3, have been astonishing. This success is not limited to natural language processing, either: the Transformer has also demonstrated its power in image processing, for example with DETR for object detection and Image GPT for unsupervised representation learning. So to how many more areas can the Transformer be applied? Just how versatile is it?
In this article, we introduce a paper that successfully applied the Transformer to reinforcement learning and drew out its capabilities.
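
To preview the key idea: GTrXL makes two changes to the standard Transformer block. Layer normalization is moved from the output to the input of each submodule, and the residual connections are replaced with GRU-style gating layers whose bias is initialized so that each block starts out close to an identity map, which is what stabilizes early training. The sketch below is a minimal PyTorch illustration of one such block, not the authors' implementation; in particular, plain multi-head self-attention stands in for the Transformer-XL relative attention with memory used in the paper, and the class names (GRUGate, GTrXLBlock) are our own.

```python
import torch
import torch.nn as nn

class GRUGate(nn.Module):
    """GRU-style gate that replaces the residual connection.
    x is the block's input stream, y is the submodule output."""
    def __init__(self, d_model: int, bias_init: float = 2.0):
        super().__init__()
        self.w_r = nn.Linear(d_model, d_model, bias=False)
        self.u_r = nn.Linear(d_model, d_model, bias=False)
        self.w_z = nn.Linear(d_model, d_model, bias=False)
        self.u_z = nn.Linear(d_model, d_model, bias=False)
        self.w_g = nn.Linear(d_model, d_model, bias=False)
        self.u_g = nn.Linear(d_model, d_model, bias=False)
        # A positive bias pushes z toward 0 at initialization, so the
        # gate output starts near the identity (g ~ x): the stabilizing trick.
        self.bias = nn.Parameter(torch.full((d_model,), bias_init))

    def forward(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        r = torch.sigmoid(self.w_r(y) + self.u_r(x))              # reset gate
        z = torch.sigmoid(self.w_z(y) + self.u_z(x) - self.bias)  # update gate
        h = torch.tanh(self.w_g(y) + self.u_g(r * x))             # candidate
        return (1.0 - z) * x + z * h

class GTrXLBlock(nn.Module):
    """One GTrXL-style layer: LayerNorm moved to the submodule inputs,
    GRU gates in place of the two residual connections."""
    def __init__(self, d_model: int, n_heads: int, d_ff: int):
        super().__init__()
        self.ln1 = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.gate1 = GRUGate(d_model)
        self.ln2 = nn.LayerNorm(d_model)
        self.ff = nn.Sequential(
            nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model)
        )
        self.gate2 = GRUGate(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.ln1(x)                        # pre-LN before attention
        a, _ = self.attn(h, h, h)
        x = self.gate1(x, torch.relu(a))       # gated instead of x + a
        f = self.ff(self.ln2(x))               # pre-LN before feedforward
        return self.gate2(x, torch.relu(f))    # gated instead of x + f

# A batch of 4 trajectories, 16 timesteps, 64-dimensional embeddings
# flows through the block just like an ordinary Transformer encoder layer.
block = GTrXLBlock(d_model=64, n_heads=4, d_ff=256)
out = block(torch.randn(4, 16, 64))
```

Because the gates start out nearly closed, a freshly initialized stack of these blocks behaves almost like the identity function, which is exactly the kind of conservative starting point that RL training benefits from.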
