Search Engine


Image: Search engine market share

A web search engine is designed to search for information on the World Wide Web and FTP servers.

The results are generally presented as a list of pages ranked by relevance to the user's query.

The most popular search engine in the world is Google.

Theme Zoom co-inventor Russell Wright was mentored by search engine retrieval expert Bruce Clay of Bruce Clay Inc. It was Bruce Clay's early work on what were called website silos that inspired the creation of Theme Zoom version 1.0. While Bruce Clay was the first to explain the effectiveness of Website Silo Architecture to the SEO community, Sue Bell of Theme Zoom LLC was the first to automate the process, a very difficult and expensive technical feat.

Of the many high-quality educational courses, tools and principles Bruce Clay has provided to the global SEO community, perhaps the most popular is The Search Engine Relationship Chart (tm). To understand how search data is created, stored, retrieved and distributed among the different search engine companies, we suggest that you view the Search Engine Relationship Chart on the Bruce Clay website.


A Brief History of Search:

Please see the Web Search page of Wikipedia for a deeper analysis of this topic.

During the early development of the web, there was a list of webservers edited by Tim Berners-Lee and hosted on the CERN webserver. One historical snapshot from 1992 remains.[2] As more webservers went online, the central list could not keep up. On the NCSA site, new servers were announced under the title "What's New!"

The very first tool used for searching on the Internet was Archie. The name stands for "archive" without the "v". It was created in 1990 by Alan Emtage, Bill Heelan and J. Peter Deutsch, computer science students at McGill University in Montreal. The program downloaded the directory listings of all the files located on public anonymous FTP (File Transfer Protocol) sites, creating a searchable database of file names; however, Archie did not index the contents of these sites since the amount of data was so limited it could be readily searched manually.


How Web Search Engines Work:

Image: High-level architecture of a standard Web crawler

A search engine operates in the following order:

Web crawling
Indexing
Searching

Web search engines work by storing information about many web pages, which they retrieve from the HTML of the pages themselves.

These pages are retrieved by a Web crawler (sometimes also known as a spider), an automated program that browses the Web by following every link it finds on a site.
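The crawl described above can be sketched as a breadth-first traversal of the link graph. The graph below is hypothetical and held in memory so the sketch stays self-contained; a real crawler would fetch each page over HTTP and parse its links out of the HTML.

```python
from collections import deque

# Hypothetical in-memory link graph: page URL -> list of outgoing links.
links = {
    "http://example.com/": ["http://example.com/a", "http://example.com/b"],
    "http://example.com/a": ["http://example.com/b"],
    "http://example.com/b": [],
}

def crawl(seed):
    """Visit every page reachable from the seed, breadth-first,
    skipping pages that have already been seen."""
    seen, queue = {seed}, deque([seed])
    order = []
    while queue:
        url = queue.popleft()
        order.append(url)          # a real crawler would fetch and index here
        for link in links.get(url, []):
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return order

print(crawl("http://example.com/"))
# ['http://example.com/', 'http://example.com/a', 'http://example.com/b']
```

The `seen` set is what keeps the crawler from revisiting pages that many other pages link to.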

Site owners can exclude pages from crawling by placing directives in a robots.txt file.
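As a sketch, a minimal robots.txt that excludes a hypothetical /private/ directory from all crawlers can be checked with Python's standard urllib.robotparser:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: block every crawler ("*") from /private/.
robots_txt = """\
User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

print(parser.can_fetch("*", "http://example.com/private/page.html"))  # False
print(parser.can_fetch("*", "http://example.com/public/page.html"))   # True
```

A well-behaved crawler calls a check like `can_fetch` before requesting each URL; robots.txt is advisory, so nothing stops a badly behaved crawler from ignoring it.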

The contents of each page are then analyzed to determine how it should be indexed (for example, words are extracted from the titles, headings, or special fields called meta tags). Data about web pages are stored in an index database for use in later queries.
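The indexing step above can be sketched as building an inverted index: a mapping from each word to the set of pages that contain it. The sample pages below are hypothetical.

```python
import re
from collections import defaultdict

# Hypothetical pages: URL -> extracted text (titles, headings, body words).
pages = {
    "http://example.com/a": "Web crawlers retrieve pages for the index",
    "http://example.com/b": "The index maps each word to the pages containing it",
}

# Inverted index: word -> set of URLs containing that word.
index = defaultdict(set)
for url, text in pages.items():
    for word in re.findall(r"[a-z]+", text.lower()):
        index[word].add(url)

print(sorted(index["index"]))   # both pages contain "index"
print(index["crawlers"])        # only the first page contains "crawlers"
```

Looking up a query term is then a single dictionary access, which is what makes later queries fast regardless of how many pages were crawled.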

A query can be as simple as a single word. The purpose of an index is to allow information to be found as quickly as possible. Some search engines, such as Google, store all or part of the source page (referred to as a cache) as well as information about the web pages, whereas others, such as AltaVista, store every word of every page they find. A cached page always holds the text that was actually indexed, so it can be very useful when the live page has since been updated and the search terms no longer appear in it. That mismatch might be considered a mild form of linkrot, and serving the cached copy satisfies the principle of least astonishment, since users expect the search terms to appear on the returned page. Cached pages are also valuable because they may preserve content that is no longer available anywhere else.

When a user enters a query into a search engine (typically by using keywords), the engine examines its index and provides a listing of best-matching web pages according to its criteria, usually with a short summary containing the document's title and sometimes parts of the text. The index is built from the information stored with the data and the method by which the information is indexed. Few public search engines allow documents to be searched precisely by date. Most search engines support the boolean operators AND, OR and NOT to further specify the query. Boolean operators are for literal searches that let the user refine and extend the terms of the search; the engine looks for the words or phrases exactly as entered. Some search engines provide an advanced feature called proximity search, which allows users to define the distance between keywords. There is also concept-based searching, which applies statistical analysis to pages containing the words or phrases you search for. Finally, natural language queries allow the user to type a question in the same form one would ask it of a human; Ask.com is an example of such an engine.
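Because an inverted index maps each term to a set of pages, the boolean operators described above reduce to plain set algebra. The tiny index below is hypothetical.

```python
# Hypothetical inverted index: term -> set of page identifiers.
index = {
    "search": {"page1", "page2", "page3"},
    "engine": {"page2", "page3"},
    "cache":  {"page3"},
}

# "search AND engine": pages containing both terms (set intersection).
and_result = index["search"] & index["engine"]

# "search OR cache": pages containing either term (set union).
or_result = index["search"] | index["cache"]

# "search NOT cache": pages with the first term but not the second (difference).
not_result = index["search"] - index["cache"]

print(and_result)   # {'page2', 'page3'}
print(not_result)   # {'page1', 'page2'}
```

This is why boolean queries are cheap for an engine to evaluate: each operator is one set operation over already-computed posting sets.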

The usefulness of a search engine depends on the relevance of the result set it gives back. While there may be millions of web pages that include a particular word or phrase, some pages may be more relevant, popular, or authoritative than others. Most search engines employ methods to rank the results to provide the "best" results first. How a search engine decides which pages are the best matches, and what order the results should be shown in, varies widely from one engine to another. The methods also change over time as Internet usage changes and new techniques evolve. Two main types of search engine have evolved: one is a system of predefined, hierarchically ordered keywords that humans have programmed extensively; the other generates an "inverted index" by analyzing the texts it locates. The second form relies much more heavily on the computer itself to do the bulk of the work.
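The ranking idea can be sketched with a deliberately simple relevance score, raw term frequency; real engines combine many more signals (link popularity, authority, freshness). The documents and scoring function below are hypothetical.

```python
from collections import Counter

# Hypothetical documents: page identifier -> text.
docs = {
    "page1": "search engines rank pages by relevance",
    "page2": "relevance relevance relevance matters to search",
    "page3": "unrelated text about cooking",
}

def score(doc_text, query_terms):
    """Score a document as the total number of times the
    query terms occur in it (raw term frequency)."""
    counts = Counter(doc_text.split())
    return sum(counts[t] for t in query_terms)

query = ["relevance", "search"]

# Sort pages so the highest-scoring ("best") results come first.
ranked = sorted(docs, key=lambda d: score(docs[d], query), reverse=True)
print(ranked[0])  # page2 scores highest (3 + 1 = 4 matches)
```

Even this toy scorer shows why ranking methods matter: all three pages could be "hits" for some term, but the order in which they are shown determines what the user actually sees.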

Most Web search engines are commercial ventures supported by advertising revenue and, as a result, some allow advertisers to pay to have their listings ranked higher in search results. Search engines that do not accept money for placement in their results instead make money by running search-related ads alongside the regular results. The search engine earns money every time someone clicks on one of these ads.

For more information on this topic, please visit Theme Zoom Secret Labs.


See also Network Empire's Content Curation Course - Cash Micro-Content Curation, Syndication and Publishing at the Speed of Thought.


