
[Prompt Learning] Prompting Contrastive Explanations for Commonsense Reasoning Tasks

코딩무민 2022. 5. 18. 16:10

1. Key Summary

  • PLMs: achieve near-human performance, BUT are weak at providing human-interpretable evidence

    → To address this, the PLM is prompted with explanation prompts that contrast the answer alternatives (see the sketch below)

      e.g. peanuts are usually salty while raisins are sweet
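
A minimal sketch of the idea, assuming a generic fill-mask PLM (roberta-base) and an illustrative contrastive template; the paper's actual prompt templates, models, and scoring procedure differ.

```python
# Sketch: prompt a PLM with a contrastive explanation template that contrasts
# the two answer alternatives. Model choice and template wording are assumptions.
from transformers import pipeline

# Any fill-mask PLM works for illustration.
fill_mask = pipeline("fill-mask", model="roberta-base")

context = "He poured a handful into his mouth and enjoyed the salty taste."
choice_a, choice_b = "peanuts", "raisins"

# Contrastive template: explain the answer by contrasting the alternatives,
# e.g. "Peanuts are ___ while raisins are sweet."
prompt = f"{context} {choice_a.capitalize()} are <mask> while {choice_b} are sweet."

# The PLM fills in the contrastive explanation (ideally something like "salty").
for cand in fill_mask(prompt, top_k=3):
    print(cand["token_str"], round(cand["score"], 3))
```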

 

2. Paper Link

https://arxiv.org/abs/2106.06823

 


 

3. Paper Explanation Link

https://coding-moomin.notion.site/Prompting-Contrastive-Explanations-for-Commonsense-Reasoning-Tasks-9da320bfd83d4caaa0ceb990239d9337

 


 
