
Focused Crawling

Abstract

Finding information on the WWW is a difficult and challenging task because of its extremely large volume. A search engine can facilitate this task, but it is still difficult to cover all the webpages on the WWW and to provide good results for all types of users and in all contexts. The concept of focused crawling has been developed to overcome these difficulties. There are several approaches to developing a focused crawler. Classification-based approaches use classifiers for relevance estimation. Semantic-based approaches use ontologies for domain or topic representation and for relevance estimation. Link analysis approaches use text and link-structure information for relevance estimation. The main differences between these approaches are: what policy is taken for crawling, how the topic of interest is represented, and how the relevance of webpages visited during crawling is estimated. In this report we present a modular architecture for focused crawling. We separated the design of the main components of focused crawling into modules to facilitate the exchange and integration of different modules. We also present a classification-based focused crawler prototype based on our modular architecture.

Description

The Crisis, Tragedy, and Recovery network (CTRnet, see http://www.ctrnet.net) project makes use of general-purpose crawlers, like Heritrix (see the list of similar packages on p. 12 of 'Lucene in Action'). However, these crawlers are strongly influenced by the quality of the seeds used, as well as by other configuration details that govern the crawl. Focused crawlers typically use extra information, related to the topic of the crawl, to decide which links to follow from any page being examined. Thus, they may be able to reduce noise and increase precision, though this may reduce recall. Focused crawling about events is particularly challenging. This project aims to explore this problem, design and implement a prototype that improves upon existing solutions, and demonstrate its effectiveness with regard to CTRnet efforts.
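The core idea, a frontier prioritized by estimated topic relevance, can be sketched as follows. This is a minimal illustration, not the report's prototype: the toy in-memory "web", the keyword set, and the overlap scorer are all invented for the example, with the scorer standing in for the trained classifier (e.g., Naive Bayes or SVM) that a real focused crawler would use.

```python
import heapq

# Hypothetical topic vocabulary for a crisis/recovery crawl (illustrative only).
TOPIC_TERMS = {"earthquake", "flood", "crisis", "recovery", "disaster",
               "emergency", "evacuation", "relief", "tragedy"}

def relevance(text):
    """Toy relevance score: fraction of a page's words that are topic terms.
    A real focused crawler would use a trained classifier here instead."""
    words = set(text.lower().split())
    return len(words & TOPIC_TERMS) / max(len(words), 1)

# A tiny in-memory 'web': url -> (page text, outgoing links). Purely illustrative.
PAGES = {
    "seed": ("crisis response portal", ["a", "b"]),
    "a": ("earthquake relief and recovery efforts", ["c"]),
    "b": ("baking dessert recipes", ["d"]),
    "c": ("flood evacuation emergency planning", []),
    "d": ("league football scores", []),
}

def focused_crawl(seed, limit=3):
    """Visit up to `limit` pages, always expanding the most relevant
    frontier page next (negated scores make heapq act as a max-heap)."""
    frontier = [(-relevance(PAGES[seed][0]), seed)]
    visited, order = set(), []
    while frontier and len(order) < limit:
        _, url = heapq.heappop(frontier)
        if url in visited:
            continue
        visited.add(url)
        order.append(url)
        _, links = PAGES[url]
        for link in links:
            if link not in visited:
                heapq.heappush(frontier, (-relevance(PAGES[link][0]), link))
    return order
```

With a budget of three pages, the crawler follows the on-topic branch (seed, then "a", then "c") and never spends effort on the off-topic recipe and sports pages, which is exactly the precision-for-recall trade the description mentions.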

Keywords

Focused Crawler, Crawler, Naive Bayes Classifier, Support Vector Machine Classifier
