ALOHa: A New Measure for Hallucination in Captioning Models

University of California, Berkeley
NAACL 2024

ALOHa teaser.

(Top) The prior state-of-the-art object hallucination metric, CHAIR, is limited to MS COCO objects and fails to detect the hallucinations in this image caption, while ALOHa (ours, bottom) correctly assigns low similarity scores to the hallucinations "baseball player" and "bat". ALOHa does not penalize the caption for "catcher", "umpire", or "bass drum", as the caption indicates uncertainty about their presence.

Abstract

Despite recent advances in multimodal pre-training for visual description, state-of-the-art models still produce captions containing errors, such as hallucinating objects that are not present in a scene. The existing prominent metric for object hallucination, CHAIR, is limited to a fixed set of MS COCO objects and synonyms. In this work, we propose a modernized open-vocabulary metric, ALOHa, which leverages large language models (LLMs) to measure object hallucinations. Specifically, we use an LLM to extract groundable objects from a candidate caption, measure their semantic similarity to reference objects from captions and/or object detections, and use Hungarian matching to produce a final hallucination score. We show that ALOHa correctly identifies 13.6% more hallucinated objects than CHAIR on HAT, a new gold-standard subset of MS COCO Captions annotated for hallucinations, and 30.8% more on nocaps, where objects extend beyond MS COCO categories.
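The scoring pipeline described above (LLM-based object extraction, semantic similarity, Hungarian matching) can be sketched in a few lines. The snippet below is a minimal illustration under stated assumptions, not the official ALOHa implementation: the embedding model (`all-MiniLM-L6-v2`), the `hallucination_scores` helper, and the use of the minimum matched similarity as the caption-level score are choices made for the example, and the candidate and reference object phrases are assumed to have already been extracted.

```python
# Minimal sketch of an ALOHa-style matching step (illustrative, not the official code).
# Assumes object phrases were already extracted from the candidate caption by an LLM,
# and reference objects were gathered from reference captions and/or detections.
import numpy as np
from scipy.optimize import linear_sum_assignment
from sentence_transformers import SentenceTransformer

# Assumed embedding model; the paper's choice of sentence encoder may differ.
model = SentenceTransformer("all-MiniLM-L6-v2")

def hallucination_scores(candidate_objects, reference_objects):
    """Match each candidate object to a reference object via Hungarian matching
    on cosine similarity; a low matched similarity suggests a hallucination."""
    cand_emb = model.encode(candidate_objects, normalize_embeddings=True)
    ref_emb = model.encode(reference_objects, normalize_embeddings=True)
    sim = cand_emb @ ref_emb.T  # cosine similarities, shape (candidates, references)

    # Hungarian matching maximizes total similarity (minimize negated similarity).
    rows, cols = linear_sum_assignment(-sim)
    per_object = {candidate_objects[r]: float(sim[r, c]) for r, c in zip(rows, cols)}
    # Note: with more candidates than references, unmatched candidates are omitted here;
    # a fuller implementation would handle them explicitly.

    # Assumed aggregation: report the minimum matched similarity as the caption score.
    return per_object, min(per_object.values())

if __name__ == "__main__":
    per_obj, caption_score = hallucination_scores(
        ["baseball player", "bat", "drummer"],
        ["drummer", "drum", "stage"],
    )
    print(per_obj, caption_score)
```

Because linear assignment is one-to-one, each reference object can be consumed by at most one candidate object, so multiple hallucinated mentions cannot all claim credit from a single reference.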