Crawling
Crawling is where it all begins – the acquisition of data about a website. It involves scanning the site and building a list of everything on it – the page title, images, the keywords it contains, and any other pages it links to – at a bare minimum. Modern crawlers may also cache a copy of the whole page and record additional information, such as the page layout and where the advertising units and links sit on the page (featured prominently in the article text, or hidden in the footer?).
How exactly is a website crawled? An automated bot – a spider – visits each page, just like you or I would, only very quickly. Even in the earliest days, Google reported that it was reading a few hundred pages a second. If you'd like to learn how to make your own basic web crawler in PHP, have a look at one of the first articles I wrote here – it's well worth having a go at (just don't expect to build the next Google).
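To make the idea concrete, here is a minimal sketch of a single-page "crawl" in PHP: fetch a page, pull out its title, and list the links a full crawler would queue up to visit next. The starting URL is just a placeholder, and this skips everything a real crawler needs (politeness delays, robots.txt, a URL queue, deduplication).

<?php
// Minimal one-page crawl sketch. Placeholder URL; a real crawler
// would queue the extracted links and repeat. Requires allow_url_fopen.
$url = 'https://example.com/';

$html = file_get_contents($url);
if ($html === false) {
    die("Could not fetch $url\n");
}

$doc = new DOMDocument();
libxml_use_internal_errors(true);   // tolerate real-world, imperfect HTML
$doc->loadHTML($html);
libxml_clear_errors();

// Page title
$titles = $doc->getElementsByTagName('title');
$title  = $titles->length ? trim($titles->item(0)->textContent) : '(no title)';
echo "Title: $title\n";

// Outgoing links – the pages a full crawler would visit next
foreach ($doc->getElementsByTagName('a') as $a) {
    $href = $a->getAttribute('href');
    if ($href !== '') {
        echo "Link: $href\n";
    }
}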
Indexing
You’d be forgiven for thinking this is an easy step – indexing is the process of taking all the data gathered by a crawl and placing it in a big database. Imagine trying to make a list of all the books you own, noting each one’s author and number of pages. Going through each book is the crawl; writing the list is the index. But now imagine it’s not just a room full of books, but every library in the world. That’s pretty much a small-scale version of what Google does.
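The standard structure for this is an inverted index: instead of storing "this page contains these words", you store "this word appears on these pages", so lookups at query time are fast. Below is a toy version in PHP – the document contents are made up for illustration, and a real indexer would also strip punctuation, remove stop words, and stem.

<?php
// Toy inverted index: map each word to the documents containing it.
// $documents stands in for pages already fetched by the crawler.
$documents = [
    'page1.html' => 'the quick brown fox',
    'page2.html' => 'the lazy brown dog',
];

$index = [];
foreach ($documents as $id => $text) {
    // Split on whitespace and lowercase; real indexers do far more cleanup.
    foreach (preg_split('/\s+/', strtolower($text)) as $word) {
        $index[$word][$id] = true;   // set semantics: each doc listed once
    }
}

// Look up every document containing "brown"
print_r(array_keys($index['brown'] ?? []));   // page1.html, page2.html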
Ranking & Retrieval
The last step is the one you actually see – you type in a search query, and the search engine attempts to display the most relevant documents matching it. This is the most complicated step, but also the most relevant to you or me, as web developers and users. It is also the area in which search engines differentiate themselves (though there was some evidence that Bing was copying some of Google’s results). Some work with plain keywords, some let you ask a question, and some offer advanced features like keyword proximity or filtering by the age of the content.
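As a crude sketch of retrieval over the toy index above: count how many query terms each document contains and rank by that score. Real engines combine hundreds of signals (link structure, freshness, proximity, and so on); plain term overlap is just the simplest possible starting point.

<?php
// Hand-built copy of the toy index from the previous sketch,
// so this example runs on its own.
$index = [
    'brown' => ['page1.html' => true, 'page2.html' => true],
    'fox'   => ['page1.html' => true],
    'dog'   => ['page2.html' => true],
];

// Score each document by the number of query terms it contains.
function search(array $index, string $query): array {
    $scores = [];
    foreach (preg_split('/\s+/', strtolower(trim($query))) as $term) {
        foreach (array_keys($index[$term] ?? []) as $docId) {
            $scores[$docId] = ($scores[$docId] ?? 0) + 1;
        }
    }
    arsort($scores);   // best match (most terms) first
    return $scores;
}

print_r(search($index, 'brown fox'));  // page1.html => 2, page2.html => 1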