How Far Has BERT, Google's Signature Technology for Natural Language Processing, Gone in Research? The Frontiers of BERTology
3 main points
✔️ Introduction to the survey paper on BERT, a central player in natural language processing
✔️ A comprehensive description of research directions and issues related to BERT in two parts
✔️ A look at studies of what knowledge BERT captures
A Primer in BERTology: What we know about how BERT works
written by Anna Rogers, Olga Kovaleva, Anna Rumshisky
(Submitted on 27 Feb 2020)
Comments: Published on arXiv
Subjects: Computation and Language (cs.CL)
BERT, a transformer-based pre-trained language model, has been indispensable to recent advances in natural language processing. At its release in 2018, it achieved the highest accuracy of its time on GLUE, a benchmark for measuring overall natural language understanding, surprising the field by improving on OpenAI GPT's score by 7%. Since then, BERT's general-purpose, high language processing capability has made it a standard baseline and a central component in a wide variety of tasks. In this article, we present the extensive body of research on BERT and look at the challenges it faces.
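To make this general-purpose use concrete, here is a minimal sketch, not taken from the survey itself, of the standard recipe that underlies BERT's reach across tasks: load a pre-trained checkpoint and attach a small task-specific classification head. It assumes the Hugging Face transformers library and the bert-base-uncased checkpoint; the example sentence and the two-label setup are purely illustrative.

# A minimal sketch (not from the survey) of applying pre-trained BERT
# to a GLUE-style classification task.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

# Load the pre-trained checkpoint; a fresh 2-label classification head
# is attached on top of the [CLS] representation.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # e.g. a binary task such as SST-2
)

# Tokenize in BERT's pre-training input format: [CLS] tokens [SEP]
inputs = tokenizer("The movie was surprisingly good.", return_tensors="pt")

# Before fine-tuning, the head's outputs are essentially random;
# fine-tuning on labeled task data is what adapts BERT to the task.
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_labels)
print(logits.softmax(dim=-1))

In practice the whole model, pre-trained encoder plus head, is fine-tuned end-to-end for a few epochs on the downstream dataset; much of the research the survey covers asks what the pre-trained encoder already knows before this step.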