Enhancement in Web Crawler using Weighted Page Rank Algorithm based on VOL

Code: 05285152


Author: Sachin Gupta



More information about Enhancement in Web Crawler using Weighted Page Rank Algorithm based on VOL


Description

Master's Thesis from the year 2014 in the subject Computer Science - Miscellaneous, course: M.Tech, language: English, comment: Excellent.

Abstract: As the World Wide Web grows rapidly day by day, the number of web pages worldwide is increasing into the millions and trillions. Search engines came into existence to make searching much easier for users. Web search engines are used to find specific information on the WWW; without them, it would be almost impossible to locate anything on the Web unless we already knew a specific URL. Every search engine maintains a central repository, or database, of HTML documents in indexed form, and whenever a user query arrives, the search is performed within that database of indexed web pages. No search engine's repository can accommodate every page available on the WWW, so it is desirable that only the most relevant and important pages be stored in the database, to increase the efficiency of the search engine.

This database of HTML documents is maintained by special software called a crawler. A crawler is software that traverses the web and downloads web pages. Broad search engines, as well as many more specialized search tools, rely on web crawlers to acquire large collections of pages for indexing and analysis. Since the Web is a distributed, dynamic and rapidly growing information resource, a crawler cannot download all pages; it can crawl only a fraction of the pages on the World Wide Web. A crawler should therefore ensure that the fraction of pages it crawls contains the most relevant and important pages, not just random ones.

In our work, we propose an extended architecture for a search engine's web crawler that crawls only relevant and important pages from the WWW, which leads to reduced server overheads. With the proposed architecture we also optimize the crawled data by removing pages that are rarely or never browsed. The crawler needs a very large amount of database storage for page content; by not storing irrelevant and unimportant pages, and by removing never-accessed pages, we save a great deal of storage space, which eventually speeds up searches (queries) against the database. In our approach, we propose to use the Weighted PageRank based on Visits of Links (VOL) algorithm to sort the search results, which reduces the users' search space by placing the links of the most visited pages at the top of the results list.
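The ranking idea mentioned in the abstract, a PageRank-style score in which the rank a page receives depends on how often the links pointing to it are actually visited, can be illustrated with a short, self-contained sketch. The exact VOL weighting used in the thesis is not reproduced on this page, so the snippet below only shows the general idea: each page's rank is distributed over its out-links in proportion to their recorded visit counts, with a damping factor d as in standard PageRank. The function name, helper structures and example visit counts are illustrative assumptions, not the author's implementation.

    # Illustrative sketch only (not the thesis's exact formulation): a PageRank-style
    # score where the rank passed from page v to page u is weighted by how often users
    # actually followed the link v -> u ("visits of links"), instead of being split
    # uniformly over v's out-links.

    from collections import defaultdict

    def weighted_pagerank_vol(link_visits, d=0.85, iterations=50, tol=1e-8):
        """link_visits: dict mapping (source, target) -> recorded visits of that link.
        Returns a dict mapping page -> VOL-weighted rank score."""
        pages = set()
        out_visits = defaultdict(float)   # total visits over all out-links of a page
        in_links = defaultdict(list)      # target -> [(source, visits), ...]
        for (src, dst), visits in link_visits.items():
            pages.update((src, dst))
            out_visits[src] += visits
            in_links[dst].append((src, visits))

        rank = {p: 1.0 / len(pages) for p in pages}
        for _ in range(iterations):
            new_rank = {}
            for p in pages:
                # the share of v's rank given to p is proportional to visits of link v -> p
                incoming = sum(rank[v] * visits / out_visits[v]
                               for v, visits in in_links[p] if out_visits[v] > 0)
                new_rank[p] = (1.0 - d) + d * incoming
            if max(abs(new_rank[p] - rank[p]) for p in pages) < tol:
                rank = new_rank
                break
            rank = new_rank
        return rank

    # Example: the link A -> B is followed far more often than A -> C, so B ends up
    # ranked above C even though both receive a single in-link from A.
    if __name__ == "__main__":
        visits = {("A", "B"): 90, ("A", "C"): 10, ("B", "A"): 40, ("C", "A"): 5}
        for page, score in sorted(weighted_pagerank_vol(visits).items(),
                                  key=lambda kv: kv[1], reverse=True):
            print(page, round(score, 4))

The same score could be used both to prioritise which pages a crawler fetches and retains, and to order the result list shown to users, which is the role the abstract assigns to the VOL-based ranking.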

Book details

Category: Books in English > Computing & information technology > Information technology: general issues
