Finding Proxy for Human Evaluation: Re-evaluating the Evaluation of News Summarization

Date

2022

Publisher

Dhirubhai Ambani Institute of Information and Communication Technology

Abstract

Engaging human annotators to evaluate every summary produced by a summarization system is not feasible, so automatic evaluation metrics act as a proxy for human evaluation. The effectiveness of a metric is determined by how strongly it correlates with human judgments. This thesis compares 40 evaluation metrics against human judgments in terms of correlation and investigates whether contextual similarity-based metrics outperform lexical overlap-based metrics such as the ROUGE score. The comparison shows that contextual similarity-based metrics correlate more strongly with human judgments than lexical overlap-based metrics and can therefore serve as a good proxy for human judgment.
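
For illustration, the sketch below shows one way such a metric-vs-human correlation study can be set up: an automatic metric (here ROUGE-L, via the rouge-score package) is computed for each candidate summary against its reference, and the resulting scores are correlated with human ratings using Pearson's r and Kendall's tau from SciPy. The summaries, ratings, and package choices are illustrative assumptions, not the thesis's actual data or metric set.

```python
# Minimal sketch of a summary-level metric-vs-human correlation study.
# The references, candidates, and human ratings below are hypothetical
# placeholders; the thesis's real data and its 40 metrics are not reproduced.
# Assumes the `rouge-score` and `scipy` packages are installed.

from rouge_score import rouge_scorer
from scipy.stats import pearsonr, kendalltau

# Hypothetical (reference, candidate, human rating) triples.
references = [
    "The council approved the new city budget on Monday.",
    "Heavy rain caused flooding across the region overnight.",
    "The team won the championship after a late goal.",
    "Scientists reported a new vaccine candidate for the virus.",
]
candidates = [
    "The new city budget was approved by the council on Monday.",
    "Flooding hit the region overnight after heavy rain.",
    "A late goal gave the team the championship.",
    "The weather will be sunny tomorrow.",  # deliberately poor summary
]
human_ratings = [4.5, 4.0, 4.0, 1.0]  # e.g., averaged annotator scores

# Score each candidate with a lexical overlap metric (ROUGE-L F1).
scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
rouge_f1 = [
    scorer.score(ref, cand)["rougeL"].fmeasure
    for ref, cand in zip(references, candidates)
]

# Summary-level correlation between the metric scores and human judgments.
pearson_r, _ = pearsonr(rouge_f1, human_ratings)
kendall_tau, _ = kendalltau(rouge_f1, human_ratings)
print(f"Pearson r: {pearson_r:.3f}, Kendall tau: {kendall_tau:.3f}")
```

In the same setup, a contextual similarity-based metric such as BERTScore would replace the ROUGE-L scores, and the two resulting correlation values could then be compared directly.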

Citation

Ranpara, Tarang J. (2022). Finding Proxy For Human Evaluation Re-evaluating the evaluation of news summarization. Dhirubhai Ambani Institute of Information and Communication Technology. xi, 51 p. (Acc. # T01042).
