Show simple item record

dc.contributor.author  Farag, Mohamed Magdy Gharib
dc.contributor.author  Khan, Mohammed Saquib Akmal
dc.contributor.author  Mishra, Gaurav
dc.contributor.author  Ganesh, Prasad Krishnamurthi
dc.date.accessioned  2012-12-12T01:35:11Z
dc.date.available  2012-12-12T01:35:11Z
dc.date.issued  2012-12-11
dc.identifier.uri  http://hdl.handle.net/10919/19085
dc.description  The Crisis, Tragedy, and Recovery network (CTRnet, see external link: http://www.ctrnet.net) project makes use of general-purpose crawlers, like Heritrix (see the list of similar packages on p. 12 of 'Lucene in Action'). However, these crawlers are strongly influenced by the quality of the seeds used, as well as other configuration details that govern the crawl. Focused crawlers typically use extra information, related to the topic of the crawl, to decide which links to follow from any page being examined. Thus, they may be able to reduce noise and increase precision, though this may reduce recall. Focused crawling about events is particularly challenging. This project aims to explore this problem, design and implement a prototype that improves upon existing solutions, and demonstrate its effectiveness with regard to CTRnet efforts.
dc.description.abstract  Finding information on the WWW is a difficult and challenging task because of the extremely large volume of the WWW. Search engines can be used to facilitate this task, but it is still difficult to cover all the webpages on the WWW and also to provide good results for all types of users and in all contexts. The focused crawling concept was developed to overcome these difficulties. There are several approaches to developing a focused crawler. Classification-based approaches use classifiers in relevance estimation. Semantic-based approaches use ontologies for domain or topic representation and in relevance estimation. Link analysis approaches use text and link structure information in relevance estimation. The main differences between these approaches are: what policy is taken for crawling, how to represent the topic of interest, and how to estimate the relevance of webpages visited during crawling. We present in this report a modular architecture for focused crawling. We separated the design of the main components of focused crawling into modules to facilitate the exchange and integration of different modules. We also present a classification-based focused crawler prototype based on our modular architecture.  en_US
dc.language.iso  en_US  en_US
dc.subject  Focused Crawler  en_US
dc.subject  Crawler  en_US
dc.subject  Naive Bayes Classifier  en_US
dc.subject  Support Vector Machine Classifier  en_US
dc.title  Focused Crawling  en_US
dc.type  Technical Report  en_US
dc.type  Working Paper  en_US
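The classification-based approach named in the abstract — a crawler whose link-following policy is driven by a classifier's relevance estimate — can be illustrated with a minimal sketch. This is not the report's actual prototype: the tiny in-memory "web", the topic vocabulary, and the keyword-ratio `relevance` function (standing in for a trained Naive Bayes or SVM classifier) are all invented here for demonstration.

```python
import heapq
import itertools

# Toy in-memory "web": page id -> (text, outgoing links).
# All pages and links are hypothetical; a real crawler fetches live URLs.
PAGES = {
    "seed": ("focused crawling overview", ["a", "b"]),
    "a": ("crawler relevance classifier precision", ["c"]),
    "b": ("cooking recipes", ["d"]),
    "c": ("naive bayes text classification", []),
    "d": ("sports scores", []),
}

# Assumed topic vocabulary; a trained classifier would replace this.
TOPIC = {"crawling", "crawler", "classifier", "classification", "relevance"}

def relevance(text):
    """Stand-in for the classifier: fraction of words in the topic vocabulary."""
    words = text.split()
    return sum(w in TOPIC for w in words) / len(words)

def focused_crawl(seed, limit=10, threshold=0.2):
    """Best-first crawl: the frontier is a priority queue ordered by the
    relevance score of the page that linked to each URL; links found on
    pages scored below the threshold are not followed."""
    counter = itertools.count()                  # tie-breaker for heapq
    frontier = [(-1.0, next(counter), seed)]     # negate score: max-heap
    visited, relevant = set(), []
    while frontier and len(visited) < limit:
        _, _, url = heapq.heappop(frontier)
        if url in visited:
            continue
        visited.add(url)
        text, links = PAGES[url]
        score = relevance(text)
        if score >= threshold:
            relevant.append(url)
            for link in links:                   # follow only relevant pages
                heapq.heappush(frontier, (-score, next(counter), link))
    return relevant
```

Here the off-topic page "b" scores below the threshold, so its link to "d" is never enqueued — the noise-reduction (and possible recall loss) the description mentions.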

