Search engines are specialised databases paired with software programs (called spiders or robots) that discover websites on the web and rank them by relevance to a specific search query.
Search engines constantly crawl the web, using these spiders to index data in their databases. The links between websites are also cross-referenced, creating a spider web of data inside the search engine's database.
As search engines crawl websites, they apply special mathematical formulas called search algorithms, which organise, file and rank information in order of relevance to a particular search query. Each search engine uses a different methodology and will therefore not necessarily return the same results. For a search engine to know where to file a website, it needs to be able to access the site's content.
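To make the idea of a search algorithm concrete, here is a deliberately simplified sketch: a toy "algorithm" that ranks pages purely by how often the query terms appear in them. The page names and contents are invented for illustration; real search engines weigh hundreds of signals, not just term frequency.

```python
# Illustrative only: rank pages by how often the query terms appear in them.
def rank_pages(pages, query):
    """Return page names ordered most-relevant-first by a term-frequency score."""
    terms = query.lower().split()

    def score(text):
        words = text.lower().split()
        return sum(words.count(t) for t in terms)

    # Sort by score, highest first; pages that never mention the query sort last.
    return sorted(pages, key=lambda name: score(pages[name]), reverse=True)

# Hypothetical mini "web" of three pages.
pages = {
    "a.html": "fresh coffee beans and coffee grinders",
    "b.html": "tea varieties and brewing tea",
    "c.html": "coffee history",
}

print(rank_pages(pages, "coffee"))  # a.html first: it mentions "coffee" twice
```

The same query against a different "algorithm" (say, one that also counted incoming links) would order these pages differently, which is exactly why two search engines rarely return identical results.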
How do search engines rank pages?
There are three fundamental processes in delivering search results to you:
- Crawling: Does Google know about your website, and can it find it?
- Indexing: Can Google index your site?
- Serving: Does the website have good, useful content that is relevant to a user's search?
The challenge many websites face is that they are not search-engine friendly. Google, for example, may not be able to find your site's title or description, which means it doesn't know what your site is about. Links could be broken, pages might not be filed correctly, and your navigation may not be optimised for search engine spiders. This is where search engine optimisation plays a role: small modifications to parts of your website can dramatically improve both the user experience people have on your site and its performance in organic search.
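As a small practical illustration of the title-and-description point, the standard library's HTML parser is enough to check whether a page actually exposes a `<title>` and a meta description for spiders to read. The sample HTML below is invented.

```python
# Check an (invented) HTML page for the two basics a crawler looks for first:
# a <title> tag and a <meta name="description"> tag.
from html.parser import HTMLParser

class BasicsCheck(HTMLParser):
    def __init__(self):
        super().__init__()
        self.title = None
        self.description = None
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self._in_title = True
        elif tag == "meta":
            d = dict(attrs)
            if d.get("name", "").lower() == "description":
                self.description = d.get("content")

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title = (self.title or "") + data

html = """<html><head>
<title>Fresh Coffee Beans | Example Shop</title>
<meta name="description" content="Single-origin beans, roasted weekly.">
</head><body>...</body></html>"""

checker = BasicsCheck()
checker.feed(html)
print(checker.title)        # Fresh Coffee Beans | Example Shop
print(checker.description)  # Single-origin beans, roasted weekly.
```

If either attribute comes back as `None`, a spider visiting the page has nothing to tell it what the site is about, which is precisely the "not search-engine friendly" situation described above.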