[BERT Paper Review] BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
코딩무민
2022. 3. 25. 14:33
1. Introduction
- Paper released in October 2018
- At the time of release, BERT achieved state-of-the-art (SOTA) results on 11 NLP tasks
- It drew particular attention by surpassing human-level accuracy on SQuAD v1.1
- Paper title
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
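As a quick, concrete reference for the review, here is a minimal sketch of loading a pre-trained BERT and extracting its bidirectional contextual representations. The Hugging Face transformers library and the "bert-base-uncased" checkpoint name are assumptions made for illustration, not something specified in this post or the paper.

import torch
from transformers import BertModel, BertTokenizer

# Load the pre-trained tokenizer and encoder (checkpoint name assumed).
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

# Tokenize a sentence and run it through the encoder.
inputs = tokenizer("BERT pre-trains deep bidirectional representations.",
                   return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Contextual embeddings for every token: [batch, num_tokens, hidden_size (768 for bert-base)].
print(outputs.last_hidden_state.shape)

Each token vector in last_hidden_state is conditioned on both its left and right context, which is the "deep bidirectional" property the paper's title refers to.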
Paper link
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding (arxiv.org)
Explanation link
https://coding-moomin.notion.site/BERT-f351597381d84e369bb429340593504b