Browsing by Author "Han, Jiawei"
Now showing 1 - 2 of 2
- Accepted Tutorials at The Web Conference 2022
  Tommasini, Riccardo; Basu Roy, Senjuti; Wang, Xuan; Wang, Hongwei; Ji, Heng; Han, Jiawei; Nakov, Preslav; Da San Martino, Giovanni; Alam, Firoj; Schedl, Markus; Lex, Elisabeth; Bharadwaj, Akash; Cormode, Graham; Dojchinovski, Milan; Forberg, Jan; Frey, Johannes; Bonte, Pieter; Balduini, Marco; Belcao, Matteo; Della Valle, Emanuele; Yu, Junliang; Yin, Hongzhi; Chen, Tong; Liu, Haochen; Wang, Yiqi; Fan, Wenqi; Liu, Xiaorui; Dacon, Jamell; Lye, Lingjuan; Tang, Jiliang; Gionis, Aristides; Neumann, Stefan; Ordozgoiti, Bruno; Razniewski, Simon; Arnaout, Hiba; Ghosh, Shrestha; Suchanek, Fabian; Wu, Lingfei; Chen, Yu; Li, Yunyao; Liu, Bang; Ilievski, Filip; Garijo, Daniel; Chalupsky, Hans; Szekely, Pedro; Kanellos, Ilias; Sacharidis, Dimitris; Vergoulis, Thanasis; Choudhary, Nurendra; Rao, Nikhil; Subbian, Karthik; Sengamedu, Srinivasan; Reddy, Chandan; Victor, Friedhelm; Haslhofer, Bernhard; Katsogiannis-Meimarakis, George; Koutrika, Georgia; Jin, Shengmin; Koutra, Danai; Zafarani, Reza; Tsvetkov, Yulia; Balachandran, Vidhisha; Kumar, Sachin; Zhao, Xiangyu; Chen, Bo; Guo, Huifeng; Wang, Yejing; Tang, Ruiming; Zhang, Yang; Wang, Wenjie; Wu, Peng; Feng, Fuli; He, Xiangnan (ACM, 2022-04-25)
  This paper summarizes the content of the 20 tutorials given at The Web Conference 2022: 85% of these tutorials are lecture-style, and 15% are hands-on.
- OntoType: Ontology-Guided and Pre-Trained Language Model Assisted Fine-Grained Entity Typing
  Komarlu, Tanay; Jiang, Minhao; Wang, Xuan; Han, Jiawei (ACM, 2024-08-25)
  Fine-grained entity typing (FET), which assigns entities in text context-sensitive, fine-grained semantic types, is a basic but important task for knowledge extraction from unstructured text. FET has been studied extensively in natural language processing and typically relies on human-annotated corpora for training, which is costly and difficult to scale. Recent studies explore the use of pre-trained language models (PLMs) as a knowledge base to generate rich and context-aware weak supervision for FET. However, a PLM still requires direction and guidance to serve as a knowledge base, as it often generates a mixture of coarse and fine-grained types, or tokens unsuitable for typing. In this study, we posit that an ontology provides a semantics-rich, hierarchical structure that helps select the best results generated by multiple PLM models and head words. Specifically, we propose a novel annotation-free, ontology-guided FET method, OntoType, which follows a type ontological structure, from coarse to fine, ensembles multiple PLM prompting results to generate a set of type candidates, and refines its type resolution under the local context with a natural language inference model. Our experiments on the Ontonotes, FIGER, and NYT datasets, using their associated ontological structures, demonstrate that our method outperforms state-of-the-art zero-shot fine-grained entity typing methods as well as a typical LLM method, ChatGPT. Our error analysis shows that refinement of the existing ontology structures would further improve fine-grained entity typing.
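The coarse-to-fine resolution described in the abstract can be sketched in miniature. This is a hypothetical illustration, not the paper's implementation: the toy ontology, the candidate scores (standing in for aggregated PLM prompting results), and the confidence threshold are all assumed for demonstration.

```python
# Toy type ontology mapping each coarse type to its finer subtypes.
# Assumed structure for illustration only.
ONTOLOGY = {
    "/person": ["/person/artist", "/person/athlete"],
    "/person/artist": ["/person/artist/musician"],
    "/organization": ["/organization/company"],
}

def resolve_type(candidate_scores, root_types, threshold=0.5):
    """Walk the ontology from coarse to fine, at each level keeping the
    highest-scoring subtype whose score clears the threshold; stop (and
    return the current type) when no subtype is confident enough."""
    # Pick the best root-level (coarse) type first.
    best = max(root_types, key=lambda t: candidate_scores.get(t, 0.0))
    if candidate_scores.get(best, 0.0) < threshold:
        return None  # no coarse type is confident enough
    while True:
        children = ONTOLOGY.get(best, [])
        confident = [(candidate_scores.get(c, 0.0), c)
                     for c in children
                     if candidate_scores.get(c, 0.0) >= threshold]
        if not confident:
            return best
        best = max(confident)[1]

# Scores as if ensembled from multiple PLM prompts for one entity mention.
scores = {
    "/person": 0.9, "/organization": 0.2,
    "/person/artist": 0.8, "/person/athlete": 0.3,
    "/person/artist/musician": 0.7,
}
print(resolve_type(scores, ["/person", "/organization"]))
# -> /person/artist/musician
```

In the actual method the per-type scores would come from ensembled PLM prompting, and the final candidate would additionally be verified against the local context with a natural language inference model; this sketch only shows the ontology-guided coarse-to-fine selection step.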