
[Dialog Response Selection] Do Response Selection Models Really Know What’s Next? Utterance Manipulation Strategies For Multi-turn Response Selection

코딩무민 2022. 5. 24. 17:18

1. Key Summary

  • Task: select the optimal response to return next, given the utterance history between a user and a system.
  • Pre-trained language models have been showing strong performance across a wide range of NLP tasks.
  • → Prior work applies them to response selection by casting it as a dialog–response binary classification task.
  • This formulation ignores the sequential nature of multi-turn dialog.
  • This paper
    • argues that the response selection objective alone is insufficient and proposes auxiliary strategies such as utterance insertion, deletion, and search (a minimal sketch follows this list)
    • → these strategies help the model maintain dialog coherence
    • collectively: utterance manipulation strategies (UMS)
    • UMS are self-supervised, so they can be easily integrated into existing approaches
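
To make the three strategies concrete, here is a minimal Python sketch of how such self-supervised examples could be built from a raw dialog. The function names and the returned dictionary format are hypothetical illustrations, not the paper's API; the actual model additionally marks candidate positions with special tokens (e.g., [INS], [DEL], [SRCH]) and encodes the manipulated dialog with a pre-trained language model, which this sketch omits.

```python
import random

def make_insertion_example(dialog):
    """Utterance insertion: remove one turn; the model must predict
    the position where it was originally located."""
    idx = random.randrange(len(dialog))
    target = dialog[idx]
    context = dialog[:idx] + dialog[idx + 1:]
    return {"context": context, "target": target, "label": idx}

def make_deletion_example(dialog, corpus):
    """Utterance deletion: insert a random turn drawn from another
    dialog; the model must find the turn that does not belong."""
    intruder = random.choice(random.choice(corpus))
    idx = random.randrange(len(dialog) + 1)
    context = dialog[:idx] + [intruder] + dialog[idx:]
    return {"context": context, "label": idx}

def make_search_example(dialog):
    """Utterance search (one simple variant): shuffle the context and,
    given the last turn as a query, retrieve the turn that originally
    came right before it."""
    *context, query = dialog
    answer = context[-1]
    shuffled = context[:]
    random.shuffle(shuffled)
    return {"context": shuffled, "query": query, "label": shuffled.index(answer)}

# Toy data; each helper yields one labeled auxiliary training example.
dialog = [
    "Hi, my laptop won't turn on.",
    "Have you tried holding the power button for 10 seconds?",
    "Yes, still nothing happens.",
    "Then please check whether the charger LED lights up.",
]
corpus = [["What time does the store close?", "We close at 9 pm."]]

print(make_insertion_example(dialog))
print(make_deletion_example(dialog, corpus))
print(make_search_example(dialog))
```

In training, these auxiliary examples would be optimized jointly with the main dialog–response classification objective, which is what lets the model pick up the sequential structure that the binary formulation alone misses.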

2. Paper Link

https://www.aaai.org/AAAI21Papers/AAAI-6746.WhangT.pdf

3. Paper Explanation Link

https://coding-moomin.notion.site/Do-Response-Selection-Models-Really-Know-What-s-Next-Utterance-Manipulation-Strategies-For-Multi-tu-e4b56aa68c804997a815ebb865cb468a

 
