Methodologies for crawler based Web surveys



Internet Research, Volume 12 (2): 15 – May 1, 2002

Publisher: Emerald Publishing
Copyright: © 2002 MCB UP Ltd. All rights reserved.
ISSN: 1066-2243
DOI: 10.1108/10662240210422503

Abstract

There have been many attempts to study the content of the Web, through either human or automatic agents. This paper describes five previously used Web survey methodologies, each justifiable in its own right, and presents a simple experiment that demonstrates concrete differences between them. The concept of crawling the Web also bears further inspection, including the scope of the pages to crawl, the method used to access and index each page, and the algorithm for identifying duplicate pages. These issues will be well known to many computer scientists but, with the increasing use of crawlers and search engines in other disciplines, they now require public discussion in the wider research community. The paper concludes that any scientific attempt to crawl the Web must make available the parameters under which it operates, so that researchers can, in principle, replicate experiments or be aware of, and take into account, differences between methodologies. It also introduces a new hybrid random page selection methodology.
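One of the crawler parameters the abstract highlights is the algorithm used to identify duplicate pages. The paper itself does not specify an algorithm here; as a hedged illustration only, a common minimal approach is to fingerprint each fetched page by hashing its whitespace-normalized content and keep only the first page seen per fingerprint. The function names (`normalize`, `page_fingerprint`, `filter_duplicates`) are illustrative, not taken from the paper.

```python
import hashlib


def normalize(html: str) -> str:
    """Collapse whitespace and lowercase so trivially different copies hash alike."""
    return " ".join(html.split()).lower()


def page_fingerprint(html: str) -> str:
    """Return a hex digest of the normalized content, used as a duplicate key."""
    return hashlib.md5(normalize(html).encode("utf-8")).hexdigest()


def filter_duplicates(pages):
    """Yield only the first (url, html) pair seen for each content fingerprint."""
    seen = set()
    for url, html in pages:
        fp = page_fingerprint(html)
        if fp not in seen:
            seen.add(fp)
            yield url, html
```

Note that exact-hash schemes like this miss near-duplicates (e.g. pages differing only in a date stamp), which is precisely why the survey methodology must state which duplicate-detection rule was used.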

Journal

Internet Research (Emerald Publishing)

Published: May 1, 2002

Keywords: Surveys; Indexes
