
Self-supervised Short Text Modeling through Auxiliary Context Generation

dc.contributor.author: Choudhary, Nurendra
dc.contributor.author: Aggarwal, Charu
dc.contributor.author: Subbian, Karthik
dc.contributor.author: Reddy, Chandan K.
dc.date.accessioned: 2022-09-23T20:34:19Z
dc.date.available: 2022-09-23T20:34:19Z
dc.date.issued: 2022-04-12
dc.date.updated: 2022-09-23T20:31:04Z
dc.description.abstract: Short text is ambiguous and often relies predominantly on the domain and context at hand in order to attain semantic relevance. Existing classification models perform poorly on short text due to data sparsity and inadequate context. Auxiliary context, which can often provide sufficient background regarding the domain, is typically available in several application scenarios. While some of the existing works aim to leverage real-world knowledge to enhance short text representations, they fail to place appropriate emphasis on the auxiliary context. Such models do not harness the full potential of the available context in auxiliary sources. To address this challenge, we reformulate short text classification as a dual channel self-supervised learning problem (that leverages auxiliary context) with a generation network and a corresponding prediction model. We propose a self-supervised framework, Pseudo-Auxiliary Context generation network for Short text Modeling (PACS), that comprehensively leverages auxiliary context and is jointly learned with a prediction network in an end-to-end manner. Our PACS model consists of two sub-networks: a Context Generation Network (CGN) that models the auxiliary context's distribution and a Prediction Network (PN) that maps the short text features and auxiliary context distribution to the final class label. Our experimental results on diverse datasets demonstrate that PACS outperforms formidable state-of-the-art baselines. We also demonstrate the performance of our model in cold-start scenarios (where contextual information is non-existent) during prediction. Furthermore, we perform interpretability and ablation studies to analyze various representational features captured by our model and the individual contribution of its modules to the overall performance of PACS, respectively.
dc.description.version: Published version
dc.format.mimetype: application/pdf
dc.identifier.doi: https://doi.org/10.1145/3511712
dc.identifier.issue: 3
dc.identifier.other: 51
dc.identifier.uri: http://hdl.handle.net/10919/111992
dc.identifier.volume: 13
dc.language.iso: en
dc.publisher: ACM
dc.rights: Creative Commons Attribution 4.0 International
dc.rights.holder: The author(s)
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/
dc.title: Self-supervised Short Text Modeling through Auxiliary Context Generation
dc.title.serial: ACM Transactions on Intelligent Systems and Technology
dc.type: Article - Refereed
dc.type.dcmitype: Text
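
Note: The abstract above describes a dual-channel design in which a Context Generation Network (CGN) produces a pseudo auxiliary-context representation from short-text features and a Prediction Network (PN) combines the two to produce a class label, with both networks trained jointly. The following is a minimal illustrative sketch of that structure in PyTorch; the text encoder is omitted, and the layer sizes, MSE context loss, and joint training setup are assumptions made for illustration, not details taken from the published paper.

# Minimal sketch of a dual-channel short-text classifier in the spirit of PACS.
# All architectural details (layer sizes, context loss, dimensions) are
# illustrative assumptions, not taken from the published model.
import torch
import torch.nn as nn

class ContextGenerationNetwork(nn.Module):
    """Generates a pseudo auxiliary-context vector from short-text features."""
    def __init__(self, text_dim: int, context_dim: int):
        super().__init__()
        self.generator = nn.Sequential(
            nn.Linear(text_dim, context_dim),
            nn.ReLU(),
            nn.Linear(context_dim, context_dim),
        )

    def forward(self, text_features: torch.Tensor) -> torch.Tensor:
        return self.generator(text_features)

class PredictionNetwork(nn.Module):
    """Maps short-text features plus (generated) context to class logits."""
    def __init__(self, text_dim: int, context_dim: int, num_classes: int):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(text_dim + context_dim, 128),
            nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, text_features: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
        return self.classifier(torch.cat([text_features, context], dim=-1))

class DualChannelModel(nn.Module):
    """Joint model: the CGN output feeds the PN; both are trained end to end."""
    def __init__(self, text_dim: int = 256, context_dim: int = 64, num_classes: int = 5):
        super().__init__()
        self.cgn = ContextGenerationNetwork(text_dim, context_dim)
        self.pn = PredictionNetwork(text_dim, context_dim, num_classes)

    def forward(self, text_features, true_context=None):
        generated_context = self.cgn(text_features)
        logits = self.pn(text_features, generated_context)
        # Self-supervised signal (assumed here as MSE): when real auxiliary
        # context is available during training, penalize the gap between the
        # generated and true context representations.
        context_loss = None
        if true_context is not None:
            context_loss = nn.functional.mse_loss(generated_context, true_context)
        return logits, context_loss

# Usage: at prediction time (e.g. cold start, where no real context exists)
# only text features are needed, since the CGN supplies the pseudo context.
model = DualChannelModel()
text_batch = torch.randn(8, 256)   # stand-in for encoded short texts
logits, _ = model(text_batch)
print(logits.shape)                # torch.Size([8, 5])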

Files

Original bundle (1 of 1)
Name: 3511712.pdf
Size: 2.61 MB
Format: Adobe Portable Document Format
Description: Published version

License bundle (1 of 1)
Name: license.txt
Size: 0 B
Description: Item-specific license agreed upon to submission