Great Service! Fine-grained Parsing of Implicit Arguments

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Standard

Great Service! Fine-grained Parsing of Implicit Arguments. / Cui, Ruixiang; Hershcovich, Daniel.

Proceedings of the 17th International Conference on Parsing Technologies and the IWPT 2021 Shared Task on Parsing into Enhanced Universal Dependencies (IWPT 2021). Association for Computational Linguistics, 2021. p. 65-77.

Harvard

Cui, R & Hershcovich, D 2021, Great Service! Fine-grained Parsing of Implicit Arguments. in Proceedings of the 17th International Conference on Parsing Technologies and the IWPT 2021 Shared Task on Parsing into Enhanced Universal Dependencies (IWPT 2021). Association for Computational Linguistics, pp. 65-77, 17th International Conference on Parsing Technologies and the IWPT 2021 Shared Task on Parsing into Enhanced Universal Dependencies (IWPT 2021), Online, 04/08/2021. https://doi.org/10.18653/v1/2021.iwpt-1.7

APA

Cui, R., & Hershcovich, D. (2021). Great Service! Fine-grained Parsing of Implicit Arguments. In Proceedings of the 17th International Conference on Parsing Technologies and the IWPT 2021 Shared Task on Parsing into Enhanced Universal Dependencies (IWPT 2021) (pp. 65-77). Association for Computational Linguistics. https://doi.org/10.18653/v1/2021.iwpt-1.7

Vancouver

Cui R, Hershcovich D. Great Service! Fine-grained Parsing of Implicit Arguments. In Proceedings of the 17th International Conference on Parsing Technologies and the IWPT 2021 Shared Task on Parsing into Enhanced Universal Dependencies (IWPT 2021). Association for Computational Linguistics. 2021. p. 65-77. https://doi.org/10.18653/v1/2021.iwpt-1.7

Author

Cui, Ruixiang ; Hershcovich, Daniel. / Great Service! Fine-grained Parsing of Implicit Arguments. Proceedings of the 17th International Conference on Parsing Technologies and the IWPT 2021 Shared Task on Parsing into Enhanced Universal Dependencies (IWPT 2021). Association for Computational Linguistics, 2021. pp. 65-77

Bibtex

@inproceedings{e375ab00931a446c9195f6c2214b89e9,
title = "Great Service! Fine-grained Parsing of Implicit Arguments",
abstract = "Broad-coverage meaning representations in NLP mostly focus on explicitly expressed content. More importantly, the scarcity of datasets annotating diverse implicit roles limits empirical studies into their linguistic nuances. For example, in the web review “Great service!”, the provider and consumer are implicit arguments of different types. We examine an annotated corpus of fine-grained implicit arguments (Cui and Hershcovich, 2020) by carefully re-annotating it, resolving several inconsistencies. Subsequently, we present the first transition-based neural parser that can handle implicit arguments dynamically, and experiment with two different transition systems on the improved dataset. We find that certain types of implicit arguments are more difficult to parse than others and that the simpler system is more accurate in recovering implicit arguments, despite having a lower overall parsing score, attesting current reasoning limitations of NLP models. This work will facilitate a better understanding of implicit and underspecified language, by incorporating it holistically into meaning representations.",
author = "Ruixiang Cui and Daniel Hershcovich",
year = "2021",
doi = "10.18653/v1/2021.iwpt-1.7",
language = "English",
pages = "65--77",
booktitle = "Proceedings of the 17th International Conference on Parsing Technologies and the IWPT 2021 Shared Task on Parsing into Enhanced Universal Dependencies (IWPT 2021)",
publisher = "Association for Computational Linguistics",
note = "17th International Conference on Parsing Technologies and the IWPT 2021 Shared Task on Parsing into Enhanced Universal Dependencies (IWPT 2021) ; Conference date: 04-08-2021 Through 04-08-2021",
}

RIS

TY - GEN

T1 - Great Service! Fine-grained Parsing of Implicit Arguments

AU - Cui, Ruixiang

AU - Hershcovich, Daniel

PY - 2021

Y1 - 2021

N2 - Broad-coverage meaning representations in NLP mostly focus on explicitly expressed content. More importantly, the scarcity of datasets annotating diverse implicit roles limits empirical studies into their linguistic nuances. For example, in the web review “Great service!”, the provider and consumer are implicit arguments of different types. We examine an annotated corpus of fine-grained implicit arguments (Cui and Hershcovich, 2020) by carefully re-annotating it, resolving several inconsistencies. Subsequently, we present the first transition-based neural parser that can handle implicit arguments dynamically, and experiment with two different transition systems on the improved dataset. We find that certain types of implicit arguments are more difficult to parse than others and that the simpler system is more accurate in recovering implicit arguments, despite having a lower overall parsing score, attesting current reasoning limitations of NLP models. This work will facilitate a better understanding of implicit and underspecified language, by incorporating it holistically into meaning representations.

AB - Broad-coverage meaning representations in NLP mostly focus on explicitly expressed content. More importantly, the scarcity of datasets annotating diverse implicit roles limits empirical studies into their linguistic nuances. For example, in the web review “Great service!”, the provider and consumer are implicit arguments of different types. We examine an annotated corpus of fine-grained implicit arguments (Cui and Hershcovich, 2020) by carefully re-annotating it, resolving several inconsistencies. Subsequently, we present the first transition-based neural parser that can handle implicit arguments dynamically, and experiment with two different transition systems on the improved dataset. We find that certain types of implicit arguments are more difficult to parse than others and that the simpler system is more accurate in recovering implicit arguments, despite having a lower overall parsing score, attesting current reasoning limitations of NLP models. This work will facilitate a better understanding of implicit and underspecified language, by incorporating it holistically into meaning representations.

U2 - 10.18653/v1/2021.iwpt-1.7

DO - 10.18653/v1/2021.iwpt-1.7

M3 - Article in proceedings

SP - 65

EP - 77

BT - Proceedings of the 17th International Conference on Parsing Technologies and the IWPT 2021 Shared Task on Parsing into Enhanced Universal Dependencies (IWPT 2021)

PB - Association for Computational Linguistics

T2 - 17th International Conference on Parsing Technologies and the IWPT 2021 Shared Task on Parsing into Enhanced Universal Dependencies (IWPT 2021)

Y2 - 4 August 2021 through 4 August 2021

ER -