Enhancement in Crawling and Searching (Using Extended Weighted Page Rank Algorithm based on VOL)
Ms. Isha Mahajan, Ms. Harjinder Kaur, Dr. Darshan Kumar
Affiliation: Department of Computer Science & Engineering, SSIET, Dinanagar - 143531, Distt. Gurdaspur, Punjab (India)
DOI: 10.22362/ijcert/2017/v4/i6/xxxx [UNDER PROCESS]
As the World Wide Web grows larger day by day, the number of web pages worldwide now runs into the billions. Search engines came into existence to make searching easier for users: they are used to find specific information on the WWW. Without search engines, it would be almost impossible to locate anything on the Web unless we already knew a specific URL. Every search engine maintains a central repository, or database, of HTML documents in indexed form; whenever a user query arrives, searching is performed within that database of indexed web pages. No search engine's repository can hold every page available on the WWW, so it is desirable that only the most relevant and important pages be stored in the database, to increase the efficiency of the search engine.
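The indexed repository described above can be illustrated with a minimal inverted index: each term maps to the set of pages containing it, so answering a query means intersecting small sets rather than scanning every stored page. This is a simplified sketch for illustration; the page contents and IDs below are invented placeholders, not data from the paper.

```python
from collections import defaultdict

# Toy document collection standing in for the crawler's repository.
pages = {
    "p1": "web crawler downloads pages",
    "p2": "search engine ranks web pages",
    "p3": "crawler visits links on pages",
}

# Inverted index: term -> set of page IDs containing that term.
index = defaultdict(set)
for page_id, text in pages.items():
    for term in text.lower().split():
        index[term].add(page_id)

def search(query):
    """Return page IDs containing every query term (AND semantics)."""
    terms = query.lower().split()
    if not terms:
        return set()
    result = set(index.get(terms[0], set()))
    for term in terms[1:]:
        result &= index.get(term, set())
    return result

print(sorted(search("web pages")))  # pages containing both "web" and "pages"
```

Because only the posting lists for the query terms are touched, lookup cost depends on the index entries involved, not on the total number of pages stored, which is why keeping irrelevant pages out of the repository speeds up querying.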
This search engine database is maintained by special software called a "Crawler." A crawler is software that traverses the Web and downloads web pages. Web crawlers are also known as "Web Spiders," "Robots," "Internet Bots," "Agents," "Automatic Indexers," etc. Broad search engines, as well as many more specialized search tools, rely on web crawlers to acquire large collections of pages for indexing and analysis. Since the Web is a distributed, dynamic, and rapidly growing information resource, a crawler cannot download all of its pages; crawling the entire Web is practically impossible, and crawlers retrieve only a fraction of it. A crawler should therefore ensure that the fraction of pages it crawls contains the most relevant and important ones, not just random pages. The crawler is an important module of a search engine, and its quality directly affects the searching quality of the search engine. In our work, we propose to improve crawling so that only relevant and important pages are fetched from the WWW, which will reduce server overheads. With our proposed architecture, we also optimize the crawled data by removing pages that are least used or never browsed. A crawler needs a huge amount of memory or database space for storing page content; by not storing irrelevant and unimportant pages, and by removing pages that are never accessed, we save considerable memory space, which eventually speeds up queries to the database. Finally, we propose to use the Extended Weighted PageRank based on Visits of Links (VOL) algorithm to sort the search results, placing the pages users visit most often, and devote the most time to, at the top of the results list, thereby reducing the search space for the user.
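The ranking idea above, scaling the rank a page passes along each outlink by how often users actually follow that link, can be sketched as a visits-of-links-weighted PageRank iteration. This is a simplified illustration of the VOL weighting principle, not the authors' exact extended formula (which also accounts for factors such as page reading time); the link graph, click counts, and damping factor below are assumptions made for the example.

```python
def vol_page_rank(links, visits, d=0.85, iterations=50):
    """links: {page: [outlinked pages]}; visits: {(src, dst): click count}.

    Iteratively computes rank(p) = (1 - d) + d * sum over inlinks q->p of
    rank(q) * visits(q, p) / total visits on all links leaving q.
    """
    pages = list(links)
    rank = {p: 1.0 for p in pages}
    for _ in range(iterations):
        new_rank = {}
        for p in pages:
            total = 0.0
            for q in pages:
                if p in links[q]:
                    # Share of q's rank passed to p, proportional to how
                    # often users followed the q->p link versus q's others.
                    out_visits = sum(visits.get((q, r), 0) for r in links[q])
                    if out_visits:
                        total += rank[q] * visits.get((q, p), 0) / out_visits
            new_rank[p] = (1 - d) + d * total
        rank = new_rank
    return rank

# Hypothetical three-page graph with user click counts on each link.
links = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
visits = {("A", "B"): 3, ("A", "C"): 1, ("B", "C"): 5, ("C", "A"): 2}
ranks = vol_page_rank(links, visits)
print(max(ranks, key=ranks.get))  # page with the highest VOL-weighted rank
```

Compared with classic PageRank, which splits a page's rank evenly among its outlinks, the visits-based weighting pushes heavily browsed pages toward the top of the results list, which is the behaviour the proposed ranking relies on.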
Isha Mahajan et al., "Enhancement in Crawling and Searching (Using Extended Weighted Page Rank Algorithm based on VOL)", International Journal of Computer Engineering In Research Trends, 4(6):pp:202-230, June 2017.
Keywords: Web Crawler, Extended Weighted Page Rank based on Visits of Links, Weighted Page Rank, Page Rank, Page Rank based on Visits of Links, Search Engine, Crawling, Bot, Information Retrieval Engine, Page Reading Time, User Attention Time, World Wide Web, Inlinks, Outlinks, Web Information Retrieval, Online Search.