Browsing by Author "Lourentzou, Ismini"
Now showing 1 - 20 of 46
- Achieving More with Less: Learning Generalizable Neural Networks With Less Labeled Data and Computational Overheads
  Bu, Jie (Virginia Tech, 2023-03-15)
  Recent advancements in deep learning have demonstrated its incredible ability to learn generalizable patterns and relationships automatically from data in a number of mainstream applications. However, the generalization power of deep learning methods largely comes at the cost of working with very large datasets and using highly compute-intensive models. Many applications cannot afford the costs needed to ensure the generalizability of deep learning models. For instance, obtaining labeled data can be costly in scientific applications, and using large models may not be feasible in resource-constrained environments involving portable devices. This dissertation aims to improve efficiency in machine learning by exploring different ways to learn generalizable neural networks that require less labeled data and fewer computational resources. We demonstrate that using physics supervision in scientific problems can reduce the need for labeled data, thereby improving data efficiency without compromising model generalizability. Additionally, we investigate the potential of transfer learning powered by transformers in scientific applications as a promising direction for further improving data efficiency. On the computational efficiency side, we present two efforts to increase the parameter efficiency of neural networks through novel architectures and structured network pruning.
- Are Particle-Based Methods the Future of Sampling in Joint Energy Models? A Deep Dive into SVGD and SGLD
  Shah, Vedant Rajiv (Virginia Tech, 2024-08-19)
  This thesis investigates the integration of Stein Variational Gradient Descent (SVGD) with Joint Energy Models (JEMs), comparing its performance to Stochastic Gradient Langevin Dynamics (SGLD). We incorporated a generative loss term with an entropy component to enhance diversity and a smoothing factor to mitigate numerical instability issues commonly associated with the energy function in energy-based models. Experiments on the CIFAR-10 dataset demonstrate that SGLD, particularly with Sharpness-Aware Minimization (SAM), outperforms SVGD in classification accuracy. However, SVGD without SAM, despite its lower classification accuracy, exhibits lower calibration error, underscoring its potential for developing the well-calibrated classifiers required in safety-critical applications. Our results emphasize the importance of adaptive tuning of the SVGD smoothing factor ($\alpha$) to balance generative and classification objectives. This thesis highlights the trade-offs between computational cost and performance, with SVGD demanding significant resources. Our findings stress the need for adaptive scaling and robust optimization techniques to enhance the stability and efficacy of JEMs. This thesis lays the groundwork for exploring more efficient and robust sampling techniques within the JEM framework, offering insights into the integration of SVGD with JEMs.
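To make the sampler comparison above concrete, here is a minimal sketch of the SGLD update used to draw samples from an energy-based model. The step size, noise scale, and iteration count are illustrative assumptions, not the thesis's settings; JEM-style implementations typically decouple the noise scale from the step size, as done here.

```python
import torch

def sgld_sample(energy_fn, x_init, n_steps=20, step_size=1.0, noise_scale=0.01):
    """Draw an approximate sample from p(x) proportional to exp(-E(x)) via SGLD."""
    x = x_init.clone().detach()
    for _ in range(n_steps):
        x.requires_grad_(True)
        # Gradient of the scalar total energy with respect to the samples.
        grad = torch.autograd.grad(energy_fn(x).sum(), x)[0]
        # Langevin step: descend the energy, then inject Gaussian noise.
        x = x.detach() - 0.5 * step_size * grad + noise_scale * torch.randn_like(x)
    return x.detach()
```

SVGD, by contrast, updates a whole set of interacting particles with a kernel term that repels them from one another, which is where its extra computational cost arises.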
- Commonsense for Zero-Shot Natural Language Video Localization
  Holla, Meghana (Virginia Tech, 2023-07-07)
  Zero-shot Natural Language-Video Localization (NLVL) has shown promising results in training NLVL models solely with raw video data through dynamic video segment proposal generation and pseudo-query annotations. However, existing pseudo-queries lack grounding in the source video and suffer from a lack of common ground due to their unstructured nature. In this work, we investigate the effectiveness of commonsense reasoning in zero-shot NLVL. Specifically, we present CORONET, a zero-shot NLVL framework that utilizes commonsense information to bridge the gap between videos and generated pseudo-queries through a commonsense enhancement module. Our approach employs Graph Convolutional Networks (GCNs) to encode commonsense information extracted from a knowledge graph, conditioned on the video, and cross-attention mechanisms to enhance the encoded video and pseudo-query vectors prior to localization. Through empirical evaluations on two benchmark datasets, we demonstrate that our model surpasses both zero-shot and weakly supervised baselines. These results underscore the significance of leveraging commonsense reasoning abilities in multimodal understanding tasks.
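The cross-attention enhancement described above can be sketched in a few lines. This is a hedged illustration, not CORONET's actual module: the dimensions, the use of nn.MultiheadAttention, and the residual connection are assumptions for exposition.

```python
import torch.nn as nn

class CommonsenseCrossAttention(nn.Module):
    """Toy sketch: enhance video features by attending over commonsense
    node embeddings (e.g., the output of a GCN over a knowledge graph)."""
    def __init__(self, dim=256, n_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)

    def forward(self, video_feats, commonsense_feats):
        # Queries come from the video; keys/values from commonsense nodes.
        enhanced, _ = self.attn(video_feats, commonsense_feats, commonsense_feats)
        return video_feats + enhanced  # residual keeps the original signal
```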
- Comparative Study of Body Doubling in Extended Reality
  Annavarapu, Swetha (Virginia Tech, 2024-02-29)
  Body doubling is a mechanism that lets individuals work alongside someone on a monotonous task that they might not be able to focus on when working alone. The person they work alongside is called a body double. It can be considered similar to co-working, but it gives individuals the freedom to work on anything they want without feeling obligated to interact with the other person. This research aims to understand whether body doubling helps users and whether mixed reality body doubling can be a useful addition to the existing in-person and video-call-based modes. In this work, we recruited 40 participants for a between-groups user study comparing four modes: no body double, an in-person body double, a video-call-based body double, and a mixed reality body double. Through this study, we analyze whether body doubling is helpful and, if so, which mode participants prefer. The work also presents a few suggestions for future improvements.
- Concept Vectors for Zero-Shot Video Generation
  Dani, Riya Jinesh (Virginia Tech, 2022-06-09)
  Zero-shot video generation involves generating videos of concepts (action classes) that are not seen during training. Even though the research community has explored conditional video generation for long high-resolution videos, zero-shot video generation remains a fairly unexplored and challenging task. Most recent works can generate videos for action-object or motion-content pairs, where both the object (content) and action (motion) are observed separately during training, yet results often lack spatial consistency between foreground and background and cannot generalize to complex scenes with multiple objects or actions. In this work, we propose Concept2Vid, which generates zero-shot videos for classes that are completely unseen during training. In contrast to prior work, our model is not limited to a predefined fixed set of class-level attributes, but rather utilizes semantic information from multiple videos of the same topic to generate samples from novel classes. We evaluate qualitatively and quantitatively on the Kinetics400 and UCF101 datasets, demonstrating the effectiveness of our proposed model.
- Controllable Visual Synthesis
  AlBahar, Badour A. Sh A. (Virginia Tech, 2023-06-08)
  Computer graphics has become an integral part of various industries such as entertainment (e.g., films and content creation), fashion (e.g., virtual try-on), and video games. Computer graphics has evolved tremendously over the years, showing remarkable improvement in image generation, from low-quality, pixelated images with limited detail to highly realistic images with fine details that can often be mistaken for real photographs. However, the traditional pipeline of rendering an image in computer graphics is complex and time-consuming. The whole process of creating the geometry, materials, and textures requires not only time but also significant expertise. In this work, we aim to replace this complex traditional computer graphics pipeline with a simple machine learning model. This model can synthesize realistic images without requiring expertise or significant time and effort. Specifically, we address the problem of controllable image synthesis. We propose several approaches that allow the user to synthesize realistic content and manipulate images to achieve their desired goals with ease and flexibility.
- Data Sharing and Retrieval of Manufacturing Processes
  Seth, Avi (Virginia Tech, 2023-03-28)
  With the Industrial Internet, businesses can pool their resources to acquire large amounts of data that can then be used in machine learning tasks. Despite the potential to speed up training and deployment and improve decision-making through data sharing, rising privacy concerns are slowing the spread of such technologies. As businesses are naturally protective of their data, this poses a barrier to interoperability. While previous research has focused on privacy-preserving methods, existing works typically consider data that is averaged or randomly sampled from all contributors rather than selecting the data best suited for a specific downstream learning task. In response to the dearth of efficient data-sharing methods for diverse machine learning tasks in the Industrial Internet, this work presents an end-to-end working demonstration of a search engine prototype built on PriED, a task-driven data-sharing approach that enhances the performance of supervised learning by judiciously fusing shared and local participant data.
- Data-driven Algorithms for Critical Detection Problems: From Healthcare to Cybersecurity Defenses
  Song, Wenjia (Virginia Tech, 2025-01-16)
  Machine learning and data-driven approaches have been widely applied to critical detection problems, but their performance is often hindered by data-related challenges. This dissertation addresses three key challenges: data imbalance, scarcity of high-quality labels, and excessive data processing requirements, through studies in healthcare and cybersecurity. We study healthcare problems with imbalanced clinical datasets that lead to performance disparities across prediction classes and demographic groups. We systematically evaluate these disparities and propose a Double Prioritized (DP) bias correction method that significantly improves model performance for underrepresented groups and reduces bias. Cyber threats such as ransomware and advanced persistent threats (APTs) have grown in recent years. Existing ransomware defenses often rely on black-box models trained on unverified traces, providing limited interpretability. To address the scarcity of reliably labeled training data, we experimentally profile the runtime behaviors of real-world ransomware samples and identify core patterns, enabling explainable and trustworthy detection. For APT detection, the large size of system audit logs hinders real-time detection. We introduce Madeline, a lightweight system that efficiently processes voluminous logs with compact representations, overcoming real-time detection bottlenecks. These contributions provide deployable and effective solutions, offering insights for future research within and beyond the fields of healthcare and cybersecurity.
- Deep Convolutional Neural Networks for Segmenting Unruptured Intracranial Aneurysms from 3D TOF-MRA Images
  Boonaneksap, Surasith (Virginia Tech, 2022-02-07)
  Despite facing technical issues (e.g., overfitting, vanishing and exploding gradients), deep neural networks have the potential to capture complex patterns in data. Understanding how depth impacts neural network performance is vital to the advancement of novel deep learning architectures. By varying hyperparameters on two sets of architectures with different depths, this thesis examines whether there are any potential benefits to developing deep networks for segmenting intracranial aneurysms from 3D TOF-MRA scans in the ADAM dataset.
- Deep Learning for Code Generation using Snippet Level Parallel Data
  Jain, Aneesh (Virginia Tech, 2023-01-05)
  In the last few years, interest in the application of deep learning methods to software engineering tasks has surged. A variety of approaches, including transformer-based methods, statistical machine translation models, and models inspired by natural language settings, have been proposed and shown to be effective at tasks like code summarization, code synthesis, and code translation. Multiple benchmark datasets have also been released, but all suffer from one limitation or another: some support only a select few programming languages, while others support only certain tasks. These limitations restrict researchers' ability to perform thorough analyses of their proposed methods. In this work, we aim to alleviate some of the limitations faced by researchers who apply deep learning to software engineering tasks. We introduce a large, parallel, multilingual programming language dataset that supports code summarization, code translation, code synthesis, and code search in 7 different languages. We provide benchmark results for current state-of-the-art models on all these tasks, and we also explore some limitations of current evaluation metrics for code-related tasks. We provide a detailed analysis of the compilability of code generated by deep learning models, because compilability is a better measure of the usability of code than scores like BLEU and CodeBLEU. Motivated by our findings about compilability, we also propose a reinforcement learning based method that incorporates code compilability and syntax-level feedback as rewards, and we demonstrate its effectiveness in generating code with fewer syntax errors than baselines. In addition, we develop a web portal that hosts the models we have trained for code translation. The portal allows translation between 42 possible language pairs and also allows users to check the compilability of the generated code. The intent of this website is to give researchers and other audiences a chance to interact with and probe our work in a user-friendly way, without requiring them to write their own code to load and run inference with the models.
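As a concrete, hedged illustration of the compilability-as-reward idea above, the sketch below turns a syntax-only compile into a scalar reward suitable for policy-gradient fine-tuning. The compiler invocation, target language, and reward shaping are illustrative assumptions, not the thesis's exact setup.

```python
import subprocess
import tempfile

def compilability_reward(code: str) -> float:
    """Return 1.0 if the snippet passes a syntax-only C compile,
    else a graded penalty based on the number of reported errors."""
    with tempfile.NamedTemporaryFile(suffix=".c", mode="w", delete=False) as f:
        f.write(code)
        path = f.name
    result = subprocess.run(["gcc", "-fsyntax-only", path],
                            capture_output=True, text=True)
    if result.returncode == 0:
        return 1.0
    n_errors = result.stderr.count("error:")
    return max(-1.0, -0.1 * n_errors)  # more syntax errors, lower reward
```

In an RL fine-tuning loop, such a reward would be combined with a sequence-level objective (e.g., REINFORCE) so that generations that compile are reinforced over ones that do not.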
- Deep Multi-Resolution Operator Networks (DMON): Exploring Novel Data-Driven Strategies for Chaotic Inverse Problems
  Donald, Sam Alexander Knowles (Virginia Tech, 2024-01-11)
  Inverse problems, foundational in applied sciences, involve deducing system inputs from specific output observations. These problems find applications in diverse domains such as aerospace engineering, weather prediction, and oceanography. However, their solution often requires complex numerical simulations and substantial computational resources. Modern machine learning approaches have emerged as an alternative and flexible methodology for solving these problems; however, their generalization power often comes at the cost of working with large descriptive datasets, a requirement that many applications cannot afford. This thesis proposes and explores the novel Deep Multi-resolution Operator Network (DMON), inspired by the recently developed DeepONet architecture. The DMON model is designed to solve inverse problems related to chaotic nonlinear systems with low-resolution data by intelligently utilizing high-resolution data from a similar system. The DMON model and the proposed selection mechanisms are evaluated on two chaotic systems, a double pendulum and turbulent flow around a cylinder, with improvements observed under idealized scenarios whereby high- and low-resolution inputs are manually paired, along with minor improvements when this pairing is conducted through the proposed latent-space comparison selection mechanism.
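For readers unfamiliar with the DeepONet architecture that inspires DMON, here is a minimal sketch: a branch network encodes the input function sampled at fixed sensor locations, a trunk network encodes query coordinates, and their inner product yields the predicted output field. Layer sizes are illustrative, and DMON's multi-resolution machinery is deliberately omitted.

```python
import torch
import torch.nn as nn

class DeepONetSketch(nn.Module):
    """Branch-trunk operator network in the style of DeepONet."""
    def __init__(self, n_sensors=100, coord_dim=2, width=64):
        super().__init__()
        self.branch = nn.Sequential(nn.Linear(n_sensors, width), nn.ReLU(),
                                    nn.Linear(width, width))
        self.trunk = nn.Sequential(nn.Linear(coord_dim, width), nn.ReLU(),
                                   nn.Linear(width, width))

    def forward(self, u_sensors, coords):
        # u_sensors: (batch, n_sensors) samples of the input function.
        # coords: (batch, n_points, coord_dim) query locations.
        b = self.branch(u_sensors)               # (batch, width)
        t = self.trunk(coords)                   # (batch, n_points, width)
        return torch.einsum("bw,bpw->bp", b, t)  # predicted field values
```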
- Deidentification of Face Videos in Naturalistic Driving Scenarios
  Thapa, Surendrabikram (Virginia Tech, 2023-09-05)
  The sharing of data has become integral to advancing scientific research, but it introduces challenges related to safeguarding personally identifiable information (PII). This thesis addresses the specific problem of sharing drivers' face videos for transportation research while ensuring privacy protection. To tackle this issue, we leverage recent advancements in generative adversarial networks (GANs) and demonstrate their effectiveness in deidentifying individuals by swapping their faces with those of others. Extensive experimentation is conducted using a large-scale dataset from ORNL, enabling the quantification of errors associated with head movements, mouth movements, eye movements, and other human factors cues. Additionally, qualitative analysis using metrics such as PERCLOS (Percentage of Eye Closure) and human evaluators provides valuable insights into the quality and fidelity of the deidentified videos. To enhance privacy preservation, we propose the utilization of synthetic faces as substitutes for real faces. Moreover, we introduce practical guidelines, including the establishment of thresholds and spot checking, to incorporate human-in-the-loop validation, thereby improving the accuracy and reliability of the deidentification process. Finally, this thesis presents mitigation strategies to effectively handle reidentification risks. By considering the potential exploitation of soft biometric identifiers or non-biometric cues, we highlight the importance of implementing comprehensive measures such as robust data user licenses and privacy protection protocols.
- Developing a Computational Pipeline for Detecting Multi-Functional Antibiotic Resistance Genes in Metagenomics Data
  Dang, Ngoc Khoi (Virginia Tech, 2022-06-09)
  Antibiotic resistance is currently a global threat spanning clinical, environmental, and geopolitical research domains. The environment is increasingly recognized as a key node in the spread of antibiotic resistance genes (ARGs), which confer antibiotic resistance to bacteria. Detecting ARGs in the environment is the first step in monitoring and controlling antibiotic resistance. In recent years, next-generation sequencing of environmental samples (metagenomic sequencing) has become a prolific tool for surveillance. Metagenomic data are the nucleic acid sequences of environmental samples, and they have been used over the years to detect and analyze ARGs. An intriguing instance of ARGs is the multi-functional ARG, where one ARG encodes two or more different antibiotic resistance functions. Multi-functional ARGs provide resistance to two or more antibiotics and thus should have an evolutionary advantage over ARGs that confer resistance to a single antibiotic. However, no tool is readily available to detect these multi-functional ARGs in metagenomic data. In this study, we develop a computational pipeline to detect multi-functional ARGs in metagenomic data. The pipeline takes raw metagenomic data as input and generates a list of potential multi-functional ARGs. A plot for each potential multi-functional ARG is also created, showing the location of the multi-functionalities in the sequence and the sequencing coverage level. We collected samples from three different sources: influent samples of a wastewater treatment plant, hospital wastewater samples, and reclaimed water samples, ran the pipeline, and identified 19, 57, and 8 potentially bi-functional ARGs in each source, respectively. Manual inspection of the results identified the three most likely bi-functional ARGs. Interestingly, one bi-functional ARG, encoding both aminoglycoside and tetracycline resistance, appeared in all three datasets, indicating its prevalence in different environments. As the amount of antibiotics in the environment keeps increasing, multi-functional ARGs may become more and more common. The pipeline will be a useful computational tool for the initial screening and identification of multi-functional ARGs in metagenomic data.
- Digital Phenotyping and Genomic Prediction Using Machine and Deep Learning in Animals and Plants
  Bi, Ye (Virginia Tech, 2024-10-03)
  This dissertation investigates the utility of deep learning and machine learning approaches for livestock management and quantitative genetic modeling of rice grain size under climate change. Monitoring the live body weight of animals is crucial to support farm management decisions due to its direct relationship with animal growth, nutritional status, and health. However, conventional manual weighing methods are time-consuming and can cause potential stress to animals. While there is a growing trend towards the use of three-dimensional cameras coupled with computer vision techniques to predict animal body weight, their validation with deep learning models as well as large-scale data collected in commercial environments is still limited. Therefore, the first two research chapters show how deep learning-based computer vision systems can enable accurate live body weight prediction for dairy cattle and pigs. These studies also address the challenges of managing large, complex phenotypic data and highlight the potential of deep learning models to automate data processing and improve prediction accuracy in an industry-scale commercial setting. The dissertation then shifts the focus to crop resilience, particularly in rice, where the asymmetric increase in average nighttime temperatures relative to average daytime temperatures due to climate change is reducing grain yield and quality. Through the use of deep learning and machine learning models, the last two chapters explore how metabolic data can be used in quantitative genetic modeling in rice under environmental stress conditions such as high night temperatures. These studies showed that the integration of metabolites and genomics improved the prediction of rice grain size-related traits, and certain metabolites were identified as potential candidates for improving multi-trait genomic prediction. Further research showed that metabolic accumulation was low to moderately heritable, and genomic prediction accuracies were consistent with expected genomic heritability estimates. Genomic correlations between control and high night temperature conditions indicated genotype-by-environment interactions in metabolic accumulation, and the effectiveness of genomic prediction models for metabolic accumulation varied across metabolites. Joint analysis of multiple metabolites improved the accuracy of genomic prediction by exploiting correlations between metabolite accumulation. Overall, this dissertation highlights the potential of integrating digital technologies and multi-omic data to advance data analytics in agriculture, with applications in livestock management and quantitative genetic modeling of rice.
- Evaluating Trust in AI-Assisted Bridge Inspection through VR
  Pathak, Jignasu Yagnesh (Virginia Tech, 2024-01-29)
  The integration of Artificial Intelligence (AI) in collaborative tasks has gained momentum, with particular implications for critical infrastructure maintenance. This study examines the assurance goals of AI (security, explainability, and trustworthiness) within Virtual Reality (VR) environments for bridge maintenance. Adopting a within-subjects design, this research leverages VR environments to simulate real-world bridge maintenance scenarios and gauge user interactions with AI tools. With the industry transitioning from paper-based to digital bridge maintenance, this investigation underscores the imperative roles of security and trust in adopting AI-assisted methodologies. Recent advancements in AI assurance within critical infrastructure highlight its monumental role in ensuring safe, explainable, and trustworthy AI-driven solutions.
- Explainable Neural Claim Verification Using Rationalization
  Gurrapu, Sai Charan (Virginia Tech, 2022-06-15)
  The dependence on Natural Language Processing (NLP) systems has grown significantly in the last decade. Recent advances in deep learning have enabled language models to generate text at the same level of quality as human-written text. If this growth continues, it can potentially lead to increased misinformation, which is a significant challenge. Although claim verification techniques exist, they lack proper explainability. Numerical scores such as attention weights and LIME values, and visualization techniques such as saliency heat maps, are insufficient because they require specialized knowledge; black-box NLP systems remain inaccessible and challenging for the non-expert to understand. We propose a novel approach called ExClaim for explainable claim verification using NLP rationalization. We demonstrate that our approach can not only predict a verdict for a claim but also justify and rationalize its output as a natural language explanation (NLE). We extensively evaluate the system using statistical and Explainable AI (XAI) metrics to ensure the outcomes are valid, verified, and trustworthy, helping reinforce human-AI trust. We propose a new subfield in XAI called Rational AI (RAI) to improve research progress on rationalization and NLE-based explainability techniques. Ensuring that claim verification systems are assured and explainable is a step towards trustworthy AI systems and ultimately helps mitigate misinformation.
- Few-Shot and Zero-Shot Learning for Information Extraction
  Gong, Jiaying (Virginia Tech, 2024-05-31)
  Information extraction aims to automatically extract structured information from unstructured texts. Supervised information extraction requires large quantities of labeled training data, which are time-consuming and labor-intensive to obtain. This dissertation focuses on information extraction, especially relation extraction and attribute-value extraction in e-commerce, with few labeled (few-shot learning) or even no labeled (zero-shot learning) training data. We explore multi-source auxiliary information and novel learning techniques to integrate semantic auxiliary information with the input text to improve few-shot and zero-shot learning. For zero-shot and few-shot relation extraction, the first method explores existing data statistics and leverages auxiliary information, including labels, synonyms of labels, keywords, and hypernyms of named entities, to enable zero-shot learning for unlabeled data. We build an automatic hypernym extraction framework to help acquire hypernyms of different entities directly from the web. The second method explores the relations between seen classes and new classes. We propose a prompt-based model with semantic knowledge augmentation to recognize new relation triplets under the zero-shot setting. In this method, we transform the problem of zero-shot learning into supervised learning with the generated augmented data for new relations. We design the prompts for training using auxiliary information based on an external knowledge graph to integrate semantic knowledge learned from seen relations. The third work utilizes auxiliary information from images to enhance few-shot learning. We propose a multi-modal few-shot relation extraction model that leverages both textual and visual semantic information to learn a multi-modal representation jointly. To supplement the missing context in text, this work integrates both local features (object-level) and global features (pixel-level) from different modalities through image-guided attention, object-guided attention, and hybrid feature attention to solve the problem of sparsity and noise. We then explore few-shot and zero-shot aspect (attribute-value) extraction in the e-commerce application field. The first work studies multi-label few-shot learning by leveraging the auxiliary information of anchor (label) and category descriptions based on prototypical networks, where hybrid attention helps alleviate ambiguity and capture more informative semantics by calculating both label-relevant and query-related weights. A dynamic threshold is learned by integrating semantic information from the support and query sets to achieve multi-label inference. The second work explores multi-label zero-shot learning via semi-inductive link prediction in a heterogeneous hypergraph. The heterogeneous hypergraph is built with higher-order relations (generated from the auxiliary information of user behavior data and product inventory data) to capture the complex and interconnected relations between users and products.
- FixEval: Execution-based Evaluation of Program Fixes for Competitive Programming Problems
  Haque, Md Mahim Anjum (Virginia Tech, 2023-11-14)
  In the software life cycle, source code repositories serve as vast storage areas for program code, ensuring its maintenance and version control throughout the development process. It is not uncommon for these repositories to house programs with hidden errors, which only manifest under specific input conditions, causing the program to deviate from its intended functionality. The growing intricacy of software design has amplified the time and resources required to pinpoint and rectify these issues. These errors, often unintended by developers, can be challenging to identify and correct. While techniques exist to auto-correct faulty code, the expansive realm of potential solutions for a single bug means there is a scarcity of tools and datasets for effective evaluation of the corrected code. This study presents FIXEVAL, a benchmark that includes flawed code entries from competitive coding challenges and their corresponding corrections. FIXEVAL offers an extensive test suite that not only gauges the accuracy of fixes generated by models but also allows for the assessment of a program's functional correctness. This suite further sheds light on time limits, memory limits, and acceptance based on specific outcomes. We use cutting-edge language models trained on coding languages as our reference point and compare them using match-based (essentially token similarity) and execution-based (focusing on functional assessment) criteria. Our research indicates that while match-based criteria may not truly represent the functional precision of fixes generated by models, execution-based approaches offer a comprehensive evaluation tailored to the solution. Consequently, we posit that FIXEVAL paves the way for practical automated error correction and assessment of code generated by models. The dataset and models for all of our experiments are made publicly available at https://github.com/mahimanzum/FixEval.
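The heart of execution-based evaluation is simple: run the candidate fix on hidden test cases under time limits and score it by outcome rather than token overlap. The harness below is a hedged sketch of that idea; the interpreter invocation, limits, and scoring are illustrative assumptions, not FIXEVAL's exact judge.

```python
import subprocess

def execution_based_score(fix_path, test_cases, timeout_s=2.0):
    """Fraction of (stdin, expected stdout) test cases a candidate
    program fix passes when actually executed."""
    passed = 0
    for stdin_text, expected in test_cases:
        try:
            result = subprocess.run(["python", fix_path], input=stdin_text,
                                    capture_output=True, text=True,
                                    timeout=timeout_s)
        except subprocess.TimeoutExpired:
            continue  # time-limit exceeded counts as a failed test
        if result.returncode == 0 and result.stdout.strip() == expected.strip():
            passed += 1
    return passed / len(test_cases)
```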
- Geometric Deep Learning for Healthcare Applications
  Karwande, Gaurang Ajit (Virginia Tech, 2023-06-06)
  This thesis explores the application of Graph Neural Networks (GNNs), a subset of geometric deep learning methods, to medical image analysis and causal structure learning. Tracking the progression of pathologies in chest radiography poses several challenges in anatomical motion estimation and image registration, as this task requires spatially aligning the sequential X-rays and modelling temporal dynamics in change detection. The first part of this thesis proposes a novel approach for change detection in sequential Chest X-ray (CXR) scans using GNNs. The proposed model, CheXRelNet, utilizes local and global information in CXRs by incorporating intra-image and inter-image anatomical information, and shows improved downstream performance for predicting the change direction for a pair of CXRs. The second part of the thesis focuses on using GNNs for causal structure learning. The proposed method introduces the concept of intervention on graphs and attempts to relate belief propagation in Bayesian Networks (BNs) to message passing in GNNs. Specifically, it leverages the downstream prediction accuracy of a GNN-based model to infer the correctness of Directed Acyclic Graph (DAG) structures given observational data. Our experimental results do not reveal any correlation between the downstream prediction accuracy of GNNs and structural correctness, and hence indicate the risks of directly relating message passing in GNNs to belief propagation in BNs. Overall, this thesis demonstrates the potential of GNNs in medical image analysis and highlights the challenges and limitations of applying GNNs to causal structure learning.
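Because the thesis hinges on the analogy between message passing in GNNs and belief propagation in BNs, a minimal message-passing layer is worth spelling out. The neighbor aggregation and layer sizes below are illustrative assumptions, not the exact layers used in CheXRelNet or the causal-structure experiments.

```python
import torch
import torch.nn as nn

class MessagePassingLayer(nn.Module):
    """One round of GNN message passing: aggregate transformed neighbor
    features, then update each node's state from itself and its messages."""
    def __init__(self, dim=64):
        super().__init__()
        self.msg = nn.Linear(dim, dim)
        self.update = nn.Linear(2 * dim, dim)

    def forward(self, x, adj):
        # x: (n_nodes, dim) node features; adj: row-normalized adjacency.
        messages = adj @ self.msg(x)  # each node averages its neighbors
        return torch.relu(self.update(torch.cat([x, messages], dim=-1)))
```

In belief propagation, by contrast, the "messages" are probability distributions constrained by the BN's conditional probability tables, which is one reason the two mechanisms need not behave alike.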
- Hierarchical Bayesian Dataset Selection
  Zhou, Xiaona (Virginia Tech, 2024-05)
  Despite the profound impact of deep learning across various domains, supervised model training critically depends on access to large, high-quality datasets, which are often challenging to identify. To address this, we introduce Hierarchical Bayesian Dataset Selection (HBDS), the first dataset selection algorithm that utilizes hierarchical Bayesian modeling, designed for collaborative data-sharing ecosystems. The proposed method efficiently decomposes the contributions of dataset groups and individual datasets to local model performance using Bayesian updates with small data samples. Our experiments on two benchmark datasets demonstrate that HBDS not only offers a computationally lightweight solution but also enhances interpretability compared to existing data selection methods by revealing deep insights into dataset interrelationships through learned posterior distributions. HBDS outperforms traditional non-hierarchical methods by correctly identifying all relevant datasets, achieving optimal accuracy with fewer computational steps, even when initial model accuracy is low. Specifically, HBDS surpasses its non-hierarchical counterpart by 1.8% on DIGIT-FIVE and 0.7% on DOMAINNET, on average. In settings with limited resources, HBDS achieves 6.9% higher accuracy than its non-hierarchical counterpart. These results confirm HBDS's effectiveness in identifying datasets that improve the accuracy and efficiency of deep learning models when collaborative data utilization is essential.
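To give a feel for the hierarchical Bayesian updates described above, here is a deliberately tiny Beta-Bernoulli stand-in: each dataset's utility gets a posterior that is shrunk toward a shared group-level prior as small probe samples arrive. This toy model is an assumption for exposition, not HBDS's actual formulation.

```python
def hierarchical_beta_update(group_prior, dataset_counts):
    """Posterior mean utility per dataset under a shared group prior.

    group_prior: (alpha, beta) pseudo-counts shared by all group members.
    dataset_counts: {name: (successes, failures)} from small probe samples.
    """
    a0, b0 = group_prior
    posteriors = {}
    for name, (successes, failures) in dataset_counts.items():
        a, b = a0 + successes, b0 + failures
        posteriors[name] = a / (a + b)  # posterior mean of Beta(a, b)
    return posteriors

# Two datasets in one group, each probed with ten labeled examples.
print(hierarchical_beta_update((2.0, 2.0), {"ds1": (8, 2), "ds2": (3, 7)}))
# {'ds1': 0.714..., 'ds2': 0.357...}
```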