Supervised Contrastive Learning for Interpretable Long-Form Document Matching

dc.contributor.author: Jha, Akshita
dc.contributor.author: Rakesh, Vineeth
dc.contributor.author: Chandrashekar, Jaideep
dc.contributor.author: Samavedhi, Adithya
dc.contributor.author: Reddy, Chandan
dc.date.accessioned: 2022-10-03T16:34:49Z
dc.date.available: 2022-10-03T16:34:49Z
dc.date.issued: 2022
dc.date.updated: 2022-09-30T20:25:59Z
dc.description.abstract: Recent advancements in deep learning techniques have transformed the area of semantic text matching. However, most state-of-the-art models are designed to operate with short documents such as tweets, user reviews, comments, etc. These models have fundamental limitations when applied to long-form documents such as scientific papers, legal documents, and patents. When handling such long documents, there are three primary challenges: (i) the presence of different contexts for the same word throughout the document, (ii) small sections of contextually similar text between two documents, but dissimilar text in the remaining parts (this defies the basic understanding of "similarity"), and (iii) the coarse nature of a single global similarity measure which fails to capture the heterogeneity of the document content. In this paper, we describe CoLDE (Contrastive Long Document Encoder), a transformer-based framework that addresses these challenges and allows for interpretable comparisons of long documents. CoLDE uses unique positional embeddings and a multi-headed chunkwise attention layer in conjunction with a supervised contrastive learning framework to capture similarity at three different levels: (i) high-level similarity scores between a pair of documents, (ii) similarity scores between different sections within and across documents, and (iii) similarity scores between different chunks in the same document and across other documents. These fine-grained similarity scores aid in better interpretability. We evaluate CoLDE on three long-document datasets, namely ACL Anthology publications, Wikipedia articles, and USPTO patents. Besides outperforming the state-of-the-art methods on the document matching task, CoLDE is also robust to changes in document length and text perturbations and provides interpretable results. The code for the proposed model is publicly available at https://github.com/InterDigitalInc/CoLDE.
dc.description.version: Accepted version
dc.format.mimetype: application/pdf
dc.identifier.doi: https://doi.org/10.1145/3542822
dc.identifier.uri: http://hdl.handle.net/10919/112054
dc.language.iso: en
dc.publisher: ACM
dc.rights: In Copyright
dc.rights.holder: The author(s)
dc.rights.uri: http://rightsstatements.org/vocab/InC/1.0/
dc.title: Supervised Contrastive Learning for Interpretable Long-Form Document Matching
dc.type: Article - Refereed
dc.type.dcmitype: Text
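
The abstract above refers to a "supervised contrastive learning framework." As a minimal, illustrative sketch (not the authors' released implementation, which is linked in the abstract), the snippet below shows the standard supervised contrastive (SupCon) objective applied to generic pooled document embeddings; the function name, tensor shapes, and toy labels are assumptions for illustration, and the chunk- and section-level similarity heads that CoLDE adds on top are not reproduced here.

    # Sketch of a supervised contrastive (SupCon) loss over pooled document embeddings.
    # Assumed names and shapes; not the CoLDE codebase.
    import torch
    import torch.nn.functional as F

    def supervised_contrastive_loss(embeddings, labels, temperature=0.07):
        """Pull together embeddings that share a label; push apart the rest.

        embeddings: (batch, dim) pooled document/chunk representations
        labels:     (batch,) integer labels (e.g., same-topic document groups)
        """
        z = F.normalize(embeddings, dim=1)                       # unit-norm vectors
        sim = torch.matmul(z, z.T) / temperature                 # scaled pairwise cosine similarity
        batch = z.size(0)
        logits_mask = ~torch.eye(batch, dtype=torch.bool, device=z.device)  # exclude self-pairs
        pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & logits_mask  # positives: same label, not self
        sim = sim.masked_fill(~logits_mask, -1e9)                # remove self-similarity from the softmax
        log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
        pos_counts = pos_mask.sum(dim=1)
        valid = pos_counts > 0                                   # anchors with at least one positive
        mean_log_prob_pos = (log_prob * pos_mask.float()).sum(dim=1)[valid] / pos_counts[valid]
        return -mean_log_prob_pos.mean()

    # Toy usage: 4 documents from two topics; same-topic documents act as positives.
    emb = torch.randn(4, 128, requires_grad=True)
    labels = torch.tensor([0, 0, 1, 1])
    loss = supervised_contrastive_loss(emb, labels)
    loss.backward()

In a CoLDE-style setup, analogous losses would also be computed from chunk- and section-level representations, which is what yields the fine-grained, interpretable similarity scores described in the abstract.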

Files

Original bundle
Name: 3542822.pdf
Size: 795.01 KB
Format: Adobe Portable Document Format
Description: Accepted version
License bundle
Name: license.txt
Size: 0 B
Format: Item-specific license agreed upon to submission