
[PromptBERT Paper Review] PromptBERT: Improving BERT Sentence Embeddings with Prompts

코딩무민 2022. 5. 18. 16:14

1. Key Summary

Original BERT

: poor performance on sentence semantic similarity

  • Reason
    • static token embedding biases and ineffective BERT layers
    • NOT the high cosine similarity (anisotropy) of the sentence embeddings
  • Method
    • prompt-based sentence embedding method
      • reduces token embedding biases
      • makes the original BERT layers more effective
    • reformulates the sentence embedding task as a fill-in-the-blanks problem (a code sketch follows this list)
    • 2 prompt representation methods and 3 prompt searching methods
  • Experiments
    • both non-fine-tuned and fine-tuned settings
    • non-fine-tuned PromptBERT > unsupervised ConSERT on STS tasks
    • fine-tuned PromptBERT > SOTA SimCSE in both unsupervised and supervised settings
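The fill-in-the-blanks reformulation can be illustrated with a short sketch: the sentence is wrapped in a template and the hidden state at the [MASK] position is taken as the sentence embedding. This is a minimal sketch assuming the HuggingFace transformers API; the template string follows the one reported in the paper, while the model choice (`bert-base-uncased`) and the `prompt_embedding` helper are illustrative assumptions, not the authors' released code.

```python
# Minimal sketch of a prompt-based sentence embedding in the spirit of PromptBERT:
# wrap the sentence in a template and use the hidden state at [MASK] as the embedding.
# Model name, helper name, and example sentences are illustrative, not from the paper's code.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def prompt_embedding(sentence: str) -> torch.Tensor:
    # Template reported in the paper: This sentence : "[X]" means [MASK] .
    text = f'This sentence : "{sentence}" means {tokenizer.mask_token} .'
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, seq_len, hidden_dim)
    # Locate the [MASK] position and use its hidden vector as the sentence embedding.
    mask_idx = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
    return hidden[0, mask_idx].squeeze(0)

# Usage: cosine similarity between two sentences via their [MASK] representations.
a = prompt_embedding("A man is playing a guitar.")
b = prompt_embedding("Someone is playing an instrument.")
sim = torch.nn.functional.cosine_similarity(a, b, dim=0)
print(f"cosine similarity: {sim.item():.4f}")
```

Taking the [MASK] position instead of averaging static token embeddings is the design choice that, per the summary above, avoids token embedding biases and makes the original BERT layers more effective for sentence representation.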

2. Paper Link

https://arxiv.org/abs/2201.04337

 


 

3. Detailed Review Link

https://coding-moomin.notion.site/PromptBERT-Improving-BERT-Sentence-Embeddings-with-Prompts-fa4a46e231da4cf1b37061c06f4447c0

 


 
