The Role of Data Curation in Image Captioning

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Documents

  • Fulltext: Final published version, 7.71 MB, PDF document

Image captioning models are typically trained by treating all samples equally, neglecting to account for mismatched or otherwise difficult data points. In contrast, recent work has shown the effectiveness of training models by scheduling the data using curriculum learning strategies. This paper contributes to this direction by actively curating difficult samples in datasets without increasing the total number of samples. We explore the effect of three data curation methods within the training process: complete removal of a sample, caption replacement, and image replacement via a text-to-image generation model. Experiments on the Flickr30K and COCO datasets with the BLIP and BEiT-3 models demonstrate that these curation methods yield improved image captioning models.
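The three curation actions described in the abstract can be sketched as a single filtering pass over scored training samples. This is a minimal illustration, not the paper's implementation: the `Sample` dataclass, the `difficulty` score, and the `regenerate` callback (standing in for a captioning or text-to-image model) are all hypothetical names introduced here.

```python
from dataclasses import dataclass, replace
from typing import Callable, List

@dataclass(frozen=True)
class Sample:
    """Hypothetical training sample: an image reference, its caption,
    and a precomputed difficulty/mismatch score (higher = harder)."""
    image_id: str
    caption: str
    difficulty: float

def curate(samples: List[Sample],
           threshold: float,
           action: str,
           regenerate: Callable[[Sample], str]) -> List[Sample]:
    """Apply one curation action to every sample above `threshold`.

    action: 'remove'           -> drop the sample entirely
            'replace_caption'  -> swap in a regenerated caption
            'replace_image'    -> swap in a regenerated image reference
    `regenerate` is a stand-in for whatever model produces the
    replacement text or image; easy samples pass through unchanged.
    """
    curated = []
    for s in samples:
        if s.difficulty <= threshold:
            curated.append(s)          # easy sample: keep as-is
        elif action == "remove":
            continue                   # difficult sample: drop it
        elif action == "replace_caption":
            curated.append(replace(s, caption=regenerate(s)))
        elif action == "replace_image":
            curated.append(replace(s, image_id=regenerate(s)))
        else:
            raise ValueError(f"unknown action: {action}")
    return curated
```

Note that only `'remove'` shrinks the dataset; the two replacement actions keep the sample count fixed, consistent with the abstract's point that curation does not increase the total number of samples.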

Original language: English
Title of host publication: EACL 2024 - 18th Conference of the European Chapter of the Association for Computational Linguistics, Proceedings of the Conference
Editors: Yvette Graham, Matthew Purver
Number of pages: 15
Publisher: Association for Computational Linguistics (ACL)
Publication date: 2024
Pages: 1074-1088
ISBN (Electronic): 9798891760882
Publication status: Published - 2024
Event: 18th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2024 - St. Julian's, Malta
Duration: 17 Mar 2024 – 22 Mar 2024

Conference

Conference: 18th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2024
Country: Malta
City: St. Julian's
Period: 17/03/2024 – 22/03/2024
Sponsors: Adobe, Babelscape, Bloomberg Engineering, Megagon Labs, Snowflake

Bibliographical note

Publisher Copyright:
© 2024 Association for Computational Linguistics.
