Browsing by Author "Wu, Lingfei"
Now showing 1 - 3 of 3
- Accepted Tutorials at The Web Conference 2022
  Tommasini, Riccardo; Basu Roy, Senjuti; Wang, Xuan; Wang, Hongwei; Ji, Heng; Han, Jiawei; Nakov, Preslav; Da San Martino, Giovanni; Alam, Firoj; Schedl, Markus; Lex, Elisabeth; Bharadwaj, Akash; Cormode, Graham; Dojchinovski, Milan; Forberg, Jan; Frey, Johannes; Bonte, Pieter; Balduini, Marco; Belcao, Matteo; Della Valle, Emanuele; Yu, Junliang; Yin, Hongzhi; Chen, Tong; Liu, Haochen; Wang, Yiqi; Fan, Wenqi; Liu, Xiaorui; Dacon, Jamell; Lyu, Lingjuan; Tang, Jiliang; Gionis, Aristides; Neumann, Stefan; Ordozgoiti, Bruno; Razniewski, Simon; Arnaout, Hiba; Ghosh, Shrestha; Suchanek, Fabian; Wu, Lingfei; Chen, Yu; Li, Yunyao; Liu, Bang; Ilievski, Filip; Garijo, Daniel; Chalupsky, Hans; Szekely, Pedro; Kanellos, Ilias; Sacharidis, Dimitris; Vergoulis, Thanasis; Choudhary, Nurendra; Rao, Nikhil; Subbian, Karthik; Sengamedu, Srinivasan; Reddy, Chandan; Victor, Friedhelm; Haslhofer, Bernhard; Katsogiannis-Meimarakis, George; Koutrika, Georgia; Jin, Shengmin; Koutra, Danai; Zafarani, Reza; Tsvetkov, Yulia; Balachandran, Vidhisha; Kumar, Sachin; Zhao, Xiangyu; Chen, Bo; Guo, Huifeng; Wang, Yejing; Tang, Ruiming; Zhang, Yang; Wang, Wenjie; Wu, Peng; Feng, Fuli; He, Xiangnan (ACM, 2022-04-25)
  This paper summarizes the content of the 20 tutorials given at The Web Conference 2022: 85% of these tutorials are lecture style, and 15% are hands-on.
- Bilevel Optimization in the Deep Learning Era: Methods and Applications
  Zhang, Lei (Virginia Tech, 2024-01-05)
  Neural networks, coupled with their associated optimization algorithms, have demonstrated remarkable efficacy and versatility across a wide array of tasks, including image recognition, speech recognition, object detection, and sentiment analysis. Their strength lies in the capability to autonomously learn intricate representations that map input data to output labels. Nevertheless, not all tasks fit neatly within an end-to-end learning paradigm; the complexity and diversity of real-world challenges call for specialized architectures and optimization strategies tailored to the intricacies of specific tasks.
  Bilevel optimization is a distinctive form of optimization in which one problem is embedded, or nested, within another, and it remains highly relevant in the deep learning era. A notable instance is hyperparameter optimization: while network weights are trained automatically through backpropagation, hyperparameters such as the learning rate and the number of layers must be set in advance and cannot be optimized through the chain rule employed in backpropagation. Bilevel optimization addresses the task of tuning these hyperparameters to improve overall model performance, and deep learning remains fertile ground for further advances of this kind.
  This thesis studies significant bilevel optimization challenges and applies the resulting techniques to real-world tasks. Because bilevel optimization entails two layers of optimization, we explore scenarios where neural networks appear at the upper level, at the inner level, or at both. Specifically, we investigate four tasks: optimizing neural networks towards optimizing neural networks, optimizing attractors towards optimizing neural networks, optimizing graph structures towards optimizing neural network performance, and optimizing architectures towards optimizing neural networks. For each task, we formulate the problem mathematically as a bilevel optimization, introduce more efficient optimization strategies, and evaluate the performance and efficiency of the proposed techniques. The methodologies and insights extend beyond bilevel optimization and apply broadly to deep learning models.
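  The abstract's hyperparameter example can be made concrete with a worked toy problem. The sketch below is a minimal illustration under assumed data and names, not anything from the thesis itself: the inner level solves w*(lam) = argmin_w ||X_tr w - y_tr||^2 + lam ||w||^2 in closed form, and the outer level descends the validation loss F(lam) = ||X_val w*(lam) - y_val||^2 using the hypergradient obtained by implicit differentiation.

  ```python
  # Minimal sketch of bilevel hyperparameter optimization on toy ridge
  # regression (illustrative assumption, not the thesis's method).
  import numpy as np

  rng = np.random.default_rng(0)
  X_tr, y_tr = rng.normal(size=(40, 5)), rng.normal(size=40)
  X_val, y_val = rng.normal(size=(20, 5)), rng.normal(size=20)

  def inner_solve(lam):
      """Closed-form inner solution w*(lam) = (X'X + lam*I)^-1 X'y."""
      A = X_tr.T @ X_tr + lam * np.eye(X_tr.shape[1])
      return np.linalg.solve(A, X_tr.T @ y_tr), A

  lam = 1.0
  for step in range(200):
      w, A = inner_solve(lam)
      # Implicit differentiation: dw*/dlam = -A^-1 w*, since dA/dlam = I.
      dw_dlam = -np.linalg.solve(A, w)
      # Hypergradient of the validation loss w.r.t. lam (chain rule).
      resid = X_val @ w - y_val
      dF_dlam = 2.0 * resid @ (X_val @ dw_dlam)
      # Gradient step on log(lam) so lam stays positive.
      lam = float(np.exp(np.log(lam) - 0.1 * dF_dlam * lam))

  w_final, _ = inner_solve(lam)
  print(f"tuned lam = {lam:.4f}, val loss = {np.sum((X_val @ w_final - y_val)**2):.4f}")
  ```

  Here the inner problem has a closed-form solution, so the hypergradient is exact; when a neural network sits at the inner level, the same pattern is typically approximated by unrolling a few inner gradient steps or by solving the implicit-function linear system iteratively.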
- Bridging the Gap between Spatial and Spectral Domains: A Unified Framework for Graph Neural Networks
  Chen, Zhiqian; Chen, Fanglan; Zhang, Lei; Ji, Taoran; Fu, Kaiqun; Zhao, Liang; Chen, Feng; Wu, Lingfei; Aggarwal, Charu; Lu, Chang-Tien (ACM, 2023-10)
  The performance of deep learning has been widely recognized in recent years. Graph neural networks (GNNs) are designed to handle graph-structured data, which classical deep learning does not manage easily. Since most GNNs were created from distinct theoretical foundations, direct comparisons between them are impossible. Prior research has concentrated primarily on categorizing existing models, with little attention paid to their intrinsic connections. This study establishes a unified framework that integrates GNNs based on spectral graph theory and approximation theory, tightly coupling spatial- and spectral-based GNNs while closely associating the approaches within each respective domain.
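  To make the spatial-spectral connection the abstract alludes to tangible, here is a minimal numerical sketch. It illustrates the standard polynomial-filter identity rather than the paper's actual framework, and all names and data are assumptions: a filter g(L) = sum_k theta_k L^k can be evaluated either in the graph Fourier basis or by pure k-hop neighbor aggregation, and both routes give the same output.

  ```python
  # Minimal sketch of the spectral/spatial bridge: a polynomial filter of the
  # graph Laplacian evaluated two ways (illustrative, not the paper's method).
  import numpy as np

  rng = np.random.default_rng(0)

  # Random undirected graph on n nodes; combinatorial Laplacian L = D - A.
  n = 6
  A = (rng.random((n, n)) < 0.5).astype(float)
  A = np.triu(A, 1); A = A + A.T
  L = np.diag(A.sum(axis=1)) - A

  x = rng.normal(size=n)               # a graph signal (one scalar per node)
  theta = np.array([0.5, -0.3, 0.1])   # polynomial filter coefficients

  # (a) Spectral view: filter the graph Fourier coefficients U^T x.
  lam, U = np.linalg.eigh(L)
  g_lam = sum(t * lam**k for k, t in enumerate(theta))
  y_spectral = U @ (g_lam * (U.T @ x))

  # (b) Spatial view: k-hop aggregation, never touching the eigenbasis.
  y_spatial = np.zeros(n)
  h = x.copy()
  for t in theta:
      y_spatial += t * h
      h = L @ h                        # one more hop of neighbor aggregation

  print(np.allclose(y_spectral, y_spatial))  # True
  ```

  The spatial route never computes an eigendecomposition, which is why polynomial (e.g., Chebyshev-style) approximations are the usual bridge between the spectral and spatial families of GNNs.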