Title: Crawling on the World Wide Web
Authors: Wang, Li; Fox, Edward A.
Date issued: 2002
Date accessioned/available: 2013-06-19
Type: Technical report (TR-02-10)
Handle: http://hdl.handle.net/10919/20052
Full text: http://eprints.cs.vt.edu/archive/00000572/01/LiWangReportAccept.pdf
Format: application/pdf
Language: en
Rights: In Copyright
Subject: Information retrieval

Abstract: As the World Wide Web grows rapidly, people need web search engines to search through it, and the crawler is an important module of such an engine: the quality of a crawler directly affects the search quality of the engine built on it. Given some seed URLs, the crawler should retrieve the web pages at those URLs, parse the HTML files, add newly found URLs to its buffer, and return to the first phase of this cycle. While parsing the HTML files to extract new URLs, the crawler can also retrieve other information from them. This paper describes the design, implementation, and some considerations of a new crawler programmed as a learning exercise and for possible use in experimental studies.
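The abstract's fetch-parse-enqueue cycle can be sketched in a few lines. The following Python sketch is illustrative only, not the report's implementation: the breadth-first frontier, visited set, page limit, and fetch timeout are all assumptions added here.

# Minimal sketch of the crawl cycle described in the abstract:
# fetch a page, parse its HTML for links, buffer the new URLs, repeat.
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen


class LinkParser(HTMLParser):
    """Collects href targets from anchor tags while parsing an HTML page."""

    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    # Resolve relative links against the page's own URL.
                    self.links.append(urljoin(self.base_url, value))


def crawl(seed_urls, max_pages=10):
    """Fetch pages starting from seed_urls, following links breadth-first."""
    frontier = deque(seed_urls)  # the URL buffer from the abstract
    visited = set()
    while frontier and len(visited) < max_pages:
        url = frontier.popleft()
        if url in visited:
            continue
        visited.add(url)
        try:
            with urlopen(url, timeout=10) as response:
                html = response.read().decode("utf-8", errors="replace")
        except OSError:
            continue  # skip unreachable pages and keep crawling
        parser = LinkParser(url)
        parser.feed(html)
        # Add newly discovered URLs to the buffer; the cycle then repeats.
        frontier.extend(link for link in parser.links if link not in visited)
    return visited


if __name__ == "__main__":
    print(crawl(["http://example.com/"], max_pages=3))

A production crawler would add the concerns the report presumably weighs (politeness delays, robots.txt, URL normalization, duplicate detection); the sketch keeps only the core cycle the abstract names.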