Browsing by Author "Reddy, Chandan"
Now showing 1 - 7 of 7
- Accepted Tutorials at The Web Conference 2022. Tommasini, Riccardo; Basu Roy, Senjuti; Wang, Xuan; Wang, Hongwei; Ji, Heng; Han, Jiawei; Nakov, Preslav; Da San Martino, Giovanni; Alam, Firoj; Schedl, Markus; Lex, Elisabeth; Bharadwaj, Akash; Cormode, Graham; Dojchinovski, Milan; Forberg, Jan; Frey, Johannes; Bonte, Pieter; Balduini, Marco; Belcao, Matteo; Della Valle, Emanuele; Yu, Junliang; Yin, Hongzhi; Chen, Tong; Liu, Haochen; Wang, Yiqi; Fan, Wenqi; Liu, Xiaorui; Dacon, Jamell; Lye, Lingjuan; Tang, Jiliang; Gionis, Aristides; Neumann, Stefan; Ordozgoiti, Bruno; Razniewski, Simon; Arnaout, Hiba; Ghosh, Shrestha; Suchanek, Fabian; Wu, Lingfei; Chen, Yu; Li, Yunyao; Liu, Bang; Ilievski, Filip; Garijo, Daniel; Chalupsky, Hans; Szekely, Pedro; Kanellos, Ilias; Sacharidis, Dimitris; Vergoulis, Thanasis; Choudhary, Nurendra; Rao, Nikhil; Subbian, Karthik; Sengamedu, Srinivasan; Reddy, Chandan; Victor, Friedhelm; Haslhofer, Bernhard; Katsogiannis-Meimarakis, George; Koutrika, Georgia; Jin, Shengmin; Koutra, Danai; Zafarani, Reza; Tsvetkov, Yulia; Balachandran, Vidhisha; Kumar, Sachin; Zhao, Xiangyu; Chen, Bo; Guo, Huifeng; Wang, Yejing; Tang, Ruiming; Zhang, Yang; Wang, Wenjie; Wu, Peng; Feng, Fuli; He, Xiangnan (ACM, 2022-04-25). This paper summarizes the content of the 20 tutorials given at The Web Conference 2022: 85% of these tutorials are lecture-style, and 15% are hands-on.
- GraphZoo: A Development Toolkit for Graph Neural Networks with Hyperbolic Geometries. Vyas, Anoushka; Choudhary, Nurendra; Khatir, Mehrdad; Reddy, Chandan (ACM, 2022-04-25). Hyperbolic spaces have recently gained prominence for representation learning in graph processing tasks such as link prediction and node classification. Several Euclidean graph models have been adapted to work in hyperbolic space, and the variants have shown a significant increase in performance. However, research and development in graph modeling currently involve several tedious, unstandardized tasks, including data processing, parameter configuration, and optimization tricks, compounded by the unavailability of public codebases. With the proliferation of new tasks such as knowledge graph reasoning and generation, the community needs a unified framework that eases the development and analysis of both Euclidean and hyperbolic graph networks, especially for new researchers in the field. To this end, we present a novel framework, GraphZoo, that makes learning, designing, and applying graph processing pipelines/models systematic through abstraction over the redundant components. The framework contains a versatile library that supports several hyperbolic manifolds and an easy-to-use modular framework for graph processing tasks, enabling researchers to (i) reproduce evaluation pipelines of state-of-the-art approaches, (ii) design new hyperbolic or Euclidean graph networks and compare them against state-of-the-art approaches on standard benchmarks, (iii) add custom datasets for evaluation, and (iv) add new tasks and evaluation criteria.
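The hyperbolic manifolds mentioned above underpin the whole toolkit. As a minimal illustration (not GraphZoo's actual API), the geodesic distance on the Poincare ball, one of the standard hyperbolic models such libraries support, can be computed directly from its closed form:

```python
import math

def poincare_distance(u, v):
    """Geodesic distance between two points inside the unit Poincare ball.

    d(u, v) = arccosh(1 + 2*||u - v||^2 / ((1 - ||u||^2) * (1 - ||v||^2)))
    Both points must have Euclidean norm strictly less than 1.
    """
    def sq_norm(x):
        return sum(t * t for t in x)
    diff = [a - b for a, b in zip(u, v)]
    return math.acosh(1 + 2 * sq_norm(diff) / ((1 - sq_norm(u)) * (1 - sq_norm(v))))
```

Distances grow without bound as points approach the boundary of the ball, which is what makes this geometry a natural fit for tree-like graph structure.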
- An Interpretable Ensemble of Graph and Language Models for Improving Search Relevance in E-Commerce. Choudhary, Nurendra; Huang, Edward W.; Subbian, Karthik; Reddy, Chandan (ACM, 2024-05-13). The problem of search relevance in the E-commerce domain is a challenging one since it involves understanding the intent of a user's short nuanced query and matching it with the appropriate products in the catalog. This problem has traditionally been addressed using language models (LMs) and graph neural networks (GNNs) to capture semantic and inter-product behavior signals, respectively. However, the rapid development of new architectures has created a gap between research and the practical adoption of these techniques. Evaluating the generalizability of these models for deployment requires extensive experimentation on complex, real-world datasets, which can be non-trivial and expensive. Furthermore, such models often operate on latent space representations that are incomprehensible to humans, making it difficult to evaluate and compare the effectiveness of different models. This lack of interpretability hinders the development and adoption of new techniques in the field. To bridge this gap, we propose Plug and Play Graph LAnguage Model (PP-GLAM), an explainable ensemble of plug-and-play models. Our approach uses a modular framework with uniform data processing pipelines. It employs additive explanation metrics to independently decide whether to include (i) language model candidates, (ii) GNN model candidates, and (iii) inter-product behavioral signals. For the task of search relevance, we show that PP-GLAM outperforms several state-of-the-art baselines as well as a proprietary model on real-world multilingual, multi-regional e-commerce datasets. To promote better model comprehensibility and adoption, we also provide an analysis of the explainability and computational complexity of our model. We also release a public codebase and provide a deployment strategy for practical implementation.
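The include/exclude decision described in the abstract can be sketched as greedy forward selection, where each candidate model is kept only if its additive (marginal) contribution improves held-out ensemble accuracy. This toy version, with hypothetical names and score-averaging as the ensembling rule, is an illustration of the idea, not PP-GLAM's implementation:

```python
def ensemble_accuracy(models, examples):
    """Accuracy of the score-averaged ensemble on (features, label) pairs."""
    correct = 0
    for x, y in examples:
        avg = sum(m(x) for m in models) / len(models)
        correct += int((avg >= 0.5) == y)
    return correct / len(examples)

def select_by_marginal_gain(candidates, examples, tol=0.0):
    """Greedy forward selection: keep a candidate model only if adding it
    improves held-out ensemble accuracy by more than `tol`."""
    chosen, best = [], 0.0
    for name, model in candidates:
        score = ensemble_accuracy([m for _, m in chosen] + [model], examples)
        if score > best + tol:
            chosen.append((name, model))
            best = score
    return [name for name, _ in chosen], best
```

Because each candidate's contribution is measured in isolation against the current ensemble, the resulting inclusion decisions are directly explainable as accuracy deltas.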
- Joint Biomedical Event Extraction and Entity Linking via Iterative Collaborative Training. Li, Xiaochu (Virginia Tech, 2023-05). Biomedical entity linking and event extraction are two crucial tasks to support text understanding and retrieval in the biomedical domain. These two tasks intrinsically benefit each other: entity linking disambiguates biomedical concepts by referring to external knowledge bases, and that domain knowledge provides additional clues for understanding and extracting biological processes, while event extraction identifies the key trigger and entities involved in each biological process, capturing structural context that in turn helps disambiguate the biomedical entities. However, previous research typically solves these two tasks separately or in a pipeline, leading to error propagation. Moreover, solving the two tasks jointly is even more challenging, as no existing dataset contains annotations for both. To address these challenges, we propose joint biomedical entity linking and event extraction by regarding the event structures and entity references in knowledge bases as latent variables and updating the two task-specific models in an iterative training framework: (1) predicting the missing variables for each partially annotated dataset based on the current two task-specific models, and (2) updating the parameters of each model on the corresponding pseudo-completed dataset. Experimental results on two benchmark datasets, Genia 2011 for event extraction and BC4GO for entity linking, show that our joint framework significantly improves the model for each individual task and outperforms the strong baselines for both tasks. We will make the code and model checkpoints publicly available once the paper is accepted.
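The two-step iterative scheme above (pseudo-label the missing annotations, then retrain each model on the completed data) can be sketched with toy label-memorizing "models" standing in for the real event-extraction and entity-linking networks; everything here, including the labels, is illustrative:

```python
from collections import Counter, defaultdict

def train_memorizer(pairs):
    """Toy 'model': maps each input to its most frequent label, falling
    back to the overall majority label for unseen inputs."""
    by_input, overall = defaultdict(Counter), Counter()
    for x, y in pairs:
        by_input[x][y] += 1
        overall[y] += 1
    default = overall.most_common(1)[0][0]
    return lambda x: by_input[x].most_common(1)[0][0] if x in by_input else default

def iterative_co_training(data_a, data_b, rounds=2):
    """data_a carries gold labels for task 1 only, data_b for task 2 only.
    Each round, each model pseudo-labels the other dataset's inputs (the
    latent variables), then both models retrain on gold + pseudo labels."""
    model1 = train_memorizer(data_a)
    model2 = train_memorizer(data_b)
    for _ in range(rounds):
        pseudo_b = [(x, model1(x)) for x, _ in data_b]  # fill task-1 labels on B
        pseudo_a = [(x, model2(x)) for x, _ in data_a]  # fill task-2 labels on A
        model1 = train_memorizer(data_a + pseudo_b)
        model2 = train_memorizer(data_b + pseudo_a)
    return model1, model2
```

The point of the skeleton is the data flow: each model's predictions become the other dataset's missing annotations, so both models end up trained on the union of the two partially annotated corpora.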
- Self-Supervised Transformer for Sparse and Irregularly Sampled Multivariate Clinical Time-Series. Tipirneni, Sindhu; Reddy, Chandan (ACM, 2022-07-30). Multivariate time-series data are frequently observed in critical care settings and are typically characterized by sparsity (missing information) and irregular time intervals. Existing approaches for learning representations in this domain handle these challenges by either aggregation or imputation of values, which in turn suppresses the fine-grained information and adds undesirable noise/overhead into the machine learning model. To tackle this problem, we propose a Self-supervised Transformer for Time-Series (STraTS) model which overcomes these pitfalls by treating time-series as a set of observation triplets instead of using the standard dense matrix representation. It employs a novel Continuous Value Embedding technique to encode continuous time and variable values without the need for discretization. It is composed of a Transformer component with multi-head attention layers which enable it to learn contextual triplet embeddings while avoiding the problems of recurrence and vanishing gradients that occur in recurrent architectures. In addition, to tackle the problem of limited availability of labeled data (which is typically observed in many healthcare applications), STraTS utilizes self-supervision by leveraging unlabeled data to learn better representations, using time-series forecasting as an auxiliary proxy task. Experiments on real-world multivariate clinical time-series benchmark datasets demonstrate that STraTS has better prediction performance than state-of-the-art methods for mortality prediction, especially when labeled data is limited. Finally, we also present an interpretable version of STraTS which can identify important measurements in the time-series data. Our data preprocessing and model implementation codes are available at https://github.com/sindhura97/STraTS.
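The observation-triplet representation is easy to picture: rather than padding a sparse series into a dense time-by-variable matrix full of missing cells, only the measurements that were actually taken are kept, each as a (time, variable, value) triple. A minimal sketch (not the STraTS preprocessing code):

```python
def to_triplets(series):
    """Convert a sparse, irregularly sampled multivariate series into
    (time, variable, value) observation triplets, skipping missing entries.

    `series` maps an observation time to a {variable: value} dict; a value
    of None marks a measurement that was not taken at that time.
    """
    return sorted(
        (t, var, val)
        for t, obs in series.items()
        for var, val in obs.items()
        if val is not None
    )
```

Because only observed values survive, no imputation noise is introduced, and the irregular observation times are preserved exactly for the downstream embedding.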
- StructCoder: Structure-Aware Transformer for Code Generation. Tipirneni, Sindhu; Zhu, Ming; Reddy, Chandan (ACM, 2024). There has been a recent surge of interest in automating software engineering tasks using deep learning. This paper addresses the problem of code generation, where the goal is to generate target code given source code in a different language or a natural language description. Most state-of-the-art deep learning models for code generation use training strategies primarily designed for natural language. However, understanding and generating code requires a more rigorous comprehension of the code syntax and semantics. With this motivation, we develop an encoder-decoder Transformer model where both the encoder and decoder are explicitly trained to recognize the syntax and data flow in the source and target codes, respectively. We not only make the encoder structure-aware by leveraging the source code's syntax tree and data flow graph, but we also support the decoder in preserving the syntax and data flow of the target code by introducing two novel auxiliary tasks: AST (Abstract Syntax Tree) paths prediction and data flow prediction. To the best of our knowledge, this is the first work to introduce a structure-aware Transformer decoder that models both syntax and data flow to enhance the quality of generated code. The proposed StructCoder model achieves state-of-the-art performance on code translation and text-to-code generation tasks in the CodeXGLUE benchmark, and improves over baselines of similar size on the APPS code generation benchmark. Our code is publicly available at https://github.com/reddy-lab-code-research/StructCoder/.
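The AST paths that StructCoder's decoder is trained to predict are root-to-leaf sequences of node types in the syntax tree. A toy enumerator using Python's standard `ast` module shows what such paths look like (the real model predicts them as an auxiliary loss rather than enumerating them):

```python
import ast

def ast_paths(code):
    """Enumerate root-to-leaf node-type paths of a Python AST, one per leaf,
    each rendered as a '/'-joined string of node class names."""
    def walk(node, prefix):
        path = prefix + [type(node).__name__]
        children = list(ast.iter_child_nodes(node))
        if not children:
            yield "/".join(path)
        for child in children:
            yield from walk(child, path)
    return list(walk(ast.parse(code), []))
```

For example, `x = 1` yields one path ending at the assignment target's `Store` context and another ending at the literal `Constant`, so every leaf of the tree is pinned down by its full syntactic ancestry.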
- Supervised Contrastive Learning for Interpretable Long-Form Document Matching. Jha, Akshita; Rakesh, Vineeth; Chandrashekar, Jaideep; Samavedhi, Adithya; Reddy, Chandan (ACM, 2022). Recent advancements in deep learning techniques have transformed the area of semantic text matching. However, most state-of-the-art models are designed to operate with short documents such as tweets, user reviews, comments, etc. These models have fundamental limitations when applied to long-form documents such as scientific papers, legal documents, and patents. When handling such long documents, there are three primary challenges: (i) the presence of different contexts for the same word throughout the document, (ii) small sections of contextually similar text between two documents, but dissimilar text in the remaining parts (this defies the basic understanding of "similarity"), and (iii) the coarse nature of a single global similarity measure which fails to capture the heterogeneity of the document content. In this paper, we describe CoLDE (Contrastive Long Document Encoder), a transformer-based framework that addresses these challenges and allows for interpretable comparisons of long documents. CoLDE uses unique positional embeddings and a multi-headed chunkwise attention layer in conjunction with a supervised contrastive learning framework to capture similarity at three different levels: (i) high-level similarity scores between a pair of documents, (ii) similarity scores between different sections within and across documents, and (iii) similarity scores between different chunks in the same document and across other documents. These fine-grained similarity scores aid in better interpretability. We evaluate CoLDE on three long document datasets namely, ACL Anthology publications, Wikipedia articles, and USPTO patents. Besides outperforming the state-of-the-art methods on the document matching task, CoLDE is also robust to changes in document length and text perturbations and provides interpretable results. The code for the proposed model is publicly available at https://github.com/InterDigitalInc/CoLDE.
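The chunk-level scoring idea behind this interpretability can be sketched without any neural encoder: split each document into fixed-size chunks, score every chunk pair, and pool the matrix into a document-level score. This toy version uses bag-of-words cosine in place of CoLDE's learned chunk embeddings:

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def chunk_similarity(doc_a, doc_b, chunk_size=50):
    """Split two documents into fixed-size word chunks and return the
    chunk-to-chunk similarity matrix plus a max-pooled document score."""
    def chunks(doc):
        words = doc.split()
        return [Counter(words[i:i + chunk_size])
                for i in range(0, len(words), chunk_size)] or [Counter()]
    ca, cb = chunks(doc_a), chunks(doc_b)
    matrix = [[cosine(x, y) for y in cb] for x in ca]
    return matrix, max(v for row in matrix for v in row)
```

The matrix is what makes the result interpretable: a high document score can be traced back to exactly which chunk pair produced it, mirroring challenge (ii) above where two long documents match only in small sections.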