INSTRUCTSCORE: Explainable Text Generation Evaluation with Fine-grained Feedback

UC Santa Barbara, Google Research, Carnegie Mellon University
EMNLP 2023 Main Conference

Abstract

Automatically evaluating the quality of language generation is critical. Although recent learned metrics correlate highly with human judgment, they cannot explain their verdicts or associate their scores with specific defects in the generated text. To address this limitation, we present InstructScore, an explainable evaluation metric for text generation. By harnessing both explicit human instructions and the implicit knowledge of GPT-4, we fine-tune a LLaMA-based text evaluation metric that produces both a score for the generated text and a human-readable diagnostic report. We evaluate InstructScore on a variety of generation tasks, including translation, captioning, data-to-text, and commonsense generation. Experiments show that our 7B model surpasses all other unsupervised metrics, including those based on the 175B GPT-3 and on GPT-4. Surprisingly, InstructScore, even without direct supervision from human-rated data, achieves performance on par with state-of-the-art metrics such as COMET22, which were fine-tuned on human ratings.
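To make the workflow concrete, the sketch below shows how a LLaMA-based evaluator in the spirit of InstructScore could be queried for a diagnostic report and how that report might be reduced to a scalar score. The checkpoint path, prompt template, report format, and the MQM-style severity weights (-5 per major error, -1 per minor error) are assumptions made for this sketch, not the paper's exact implementation.

```python
# A minimal sketch of querying a LLaMA-based evaluator like InstructScore.
# The checkpoint path, prompt template, report format, and severity weights
# are illustrative assumptions, not the paper's implementation.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "path/to/instructscore-7b"  # hypothetical local checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

def diagnose(source: str, candidate: str) -> str:
    """Generate a human-readable diagnostic report for a candidate output."""
    prompt = (
        "Evaluate the generated output against its source.\n"
        f"Source: {source}\nOutput: {candidate}\n"
        "List each error with its type, location, severity (major/minor), "
        "and an explanation."
    )
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=512)
    # Drop the echoed prompt tokens; keep only the generated report.
    report_ids = output_ids[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(report_ids, skip_special_tokens=True)

def score(report: str) -> int:
    """Reduce a diagnostic report to a scalar by penalizing each identified
    error, MQM-style (assumed weights: -5 per major, -1 per minor)."""
    text = report.lower()
    return (-5 * text.count("severity: major")
            - 1 * text.count("severity: minor"))
```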

Failure Modes

Local failure modes:

| Field | Failure Mode | Description |
| --- | --- | --- |
| Error Type | Inconsistency with explanation | M1: The error type description is not consistent with the explanation |
| Error Location | Inconsistency with explanation | M2: The error location is not consistent with the explanation |
| Error Location | Error location hallucination | M3: The error location does not appear in the output text |
| Major/Minor | Major/minor disagreement | M5: The major/minor labels do not correspond to the correct severity levels |
| Explanation | Error location hallucination | M4: The error location cited in the explanation cannot be found in the output text |
| Explanation | Explanation failure | M6: The explanation is wrong, although an error does exist at the specified location |

Global failure modes:

| Field | Failure Mode | Description |
| --- | --- | --- |
| All 4 fields | False negative error | G1: The error described in the explanation is not actually an error |
| All 4 fields | Repetition | G2: One error is mentioned more than once among the explanations |
| All 4 fields | Phrase misalignment | G3: The incorrect phrase and the correct phrase are not properly aligned |
| All 4 fields | Mention of multiple errors | G4: One error span mentions multiple errors |

Common failure modes of the explanation output of the first-step Exp-Generator (LLaMA fine-tuned on synthetic data, without refinement). Local failure modes are field-specific: each affects only its own field. Global failure modes can affect all four fields, i.e., error type, error location, major/minor, and explanation. Observing these failure modes in the first-step Exp-Generator is the main motivation for performing refinement with automatic feedback.
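To make a few of these failure modes concrete, the sketch below implements shallow automatic checks for M3, M5, and G2 over a hypothetical parsed report structure. The ErrorEntry fields mirror the four report fields, but the parsed form itself is an assumption for illustration; in the paper, the refinement signal comes from automatic feedback on the generated reports rather than hand-written rules like these.

```python
# Shallow automatic checks for a few failure modes from the table above.
# The parsed report structure (ErrorEntry) is a hypothetical form whose
# fields mirror the four report fields; it is not the paper's exact format.
from dataclasses import dataclass

@dataclass
class ErrorEntry:
    error_type: str
    location: str      # span in the output text that the error points at
    severity: str      # "major" or "minor"
    explanation: str

def check_report(output_text: str, entries: list[ErrorEntry]) -> list[str]:
    """Return the failure modes detected in a parsed diagnostic report."""
    failures = []
    seen = set()
    for e in entries:
        # M3: a cited error location must actually appear in the output text.
        if e.location not in output_text:
            failures.append(f"M3: location '{e.location}' not in output")
        # G2: the same error should not be reported more than once.
        key = (e.error_type, e.location)
        if key in seen:
            failures.append(f"G2: duplicate report for '{e.location}'")
        seen.add(key)
        # Necessary condition for M5: the label must be a valid severity
        # level (whether it is the *correct* level needs further judgment).
        if e.severity not in ("major", "minor"):
            failures.append(f"M5: invalid severity '{e.severity}'")
    return failures
```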

BibTeX

@inproceedings{xu2023instructscore,
  title={InstructScore: Towards explainable text generation evaluation with automatic feedback},
  author={Xu, Wenda and Wang, Danqing and Pan, Liangming and Song, Zhenqiao and Freitag, Markus and Wang, William Yang and Li, Lei},
  booktitle={Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
  year={2023}
}