1. Key Summary
Original BERT: poor performance on sentence semantic similarity (a naive pooling baseline is sketched in the first code block below)
- Reason
    - static token embedding biases and the ineffective BERT layers
    - NOT the high cosine similarity (anisotropy) of the sentence embeddings
- Model
    - prompt-based sentence embedding method
        - reduces token embedding biases
        - makes the original BERT layers more effective
    - reformulates the sentence embedding task → a fill-in-the-blanks problem (see the second code block below)
    - 2 prompt representation methods and 3 prompt searching methods
- Experiments
    - both non-fine-tuned and fine-tuned settings
    - non-fine-tuned: PromptBERT > unsupervised ConSERT on STS tasks
    - fine-tuned: PromptBERT > the SOTA SimCSE in both unsupervised and supervised settings
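
For reference, here is a minimal sketch of the kind of naive BERT sentence embedding this diagnosis targets: mean pooling over the last layer, scored with cosine similarity. The model name (bert-base-uncased), the pooling choice, and the example sentences are assumptions for illustration, not the paper's exact setup.

```python
# Naive BERT sentence embeddings: mean-pool the last layer, compare with cosine.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def mean_pooled_embedding(sentence: str) -> torch.Tensor:
    """Average the last-layer token states, ignoring padding positions."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state   # (1, seq_len, hidden)
    mask = inputs["attention_mask"].unsqueeze(-1)    # (1, seq_len, 1)
    return (hidden * mask).sum(1) / mask.sum(1)      # (1, hidden)

a = mean_pooled_embedding("A man is playing a guitar.")
b = mean_pooled_embedding("Someone plays the guitar.")
c = mean_pooled_embedding("The stock market fell sharply today.")
print(torch.cosine_similarity(a, b))  # paraphrase pair
print(torch.cosine_similarity(a, c))  # unrelated pair, often not much lower
```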
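
And a minimal sketch of the fill-in-the-blanks idea: the input sentence is placed into a prompt template and the [MASK] position's last hidden state is read out as the sentence embedding. The template string follows the manual-template shape discussed in the paper, but this code is an illustrative assumption, not the authors' implementation.

```python
# Prompt-based sentence embedding: read the [MASK] hidden state of a filled template.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

# Assumed template; the paper explores several manual and searched templates.
TEMPLATE = 'This sentence : "{}" means [MASK] .'

def prompt_embedding(sentence: str) -> torch.Tensor:
    """Fill the sentence into the template and take the [MASK] token's state."""
    inputs = tokenizer(TEMPLATE.format(sentence), return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state   # (1, seq_len, hidden)
    # Locate the [MASK] token and use its representation as the embedding.
    mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero()[0, 1]
    return hidden[:, mask_pos]                       # (1, hidden)

a = prompt_embedding("A man is playing a guitar.")
b = prompt_embedding("Someone plays the guitar.")
print(torch.cosine_similarity(a, b))
```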
2. Paper Link
https://arxiv.org/abs/2201.04337
3. Paper Explanation Link