Abstract:
A Web crawler is a computer program that browses the World Wide Web in a methodical, automated manner. Web crawling is an important method for collecting data on, and keeping up with, the rapidly expanding Internet: a vast number of web pages are added every day, and existing information is constantly changing. This project gives an overview of a Web Crawler Supported Smart Search and the policies involved in it, such as searching, ranking, and indexing.
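
As a rough illustration of the "methodical, automated" traversal the abstract describes, the sketch below (all names hypothetical; a small dictionary stands in for the live Web) performs a breadth-first crawl over a link graph, visiting each page exactly once:

```python
from collections import deque

# Hypothetical stand-in for the live Web: maps each page URL to its outgoing links.
LINK_GRAPH = {
    "a.example": ["b.example", "c.example"],
    "b.example": ["c.example", "d.example"],
    "c.example": [],
    "d.example": ["a.example"],
}

def crawl(seed):
    """Breadth-first crawl from a seed URL; returns pages in visit order."""
    visited = set()
    frontier = deque([seed])  # queue of URLs discovered but not yet fetched
    order = []
    while frontier:
        url = frontier.popleft()
        if url in visited:
            continue  # skip pages already fetched
        visited.add(url)
        order.append(url)
        # Enqueue newly discovered links for later visits.
        frontier.extend(LINK_GRAPH.get(url, []))
    return order
```

A real crawler would fetch and parse live pages (and respect politeness policies such as robots.txt) rather than read from a dictionary, but the queue-plus-visited-set structure is the same.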