Search Engine Algorithm
An algorithm is nothing more than a set of rules a search engine uses to determine the order in which search results are listed. Returning to the example of “web designer,” there are over 5 million pages that contain that phrase. The two words do not necessarily need to appear in that order, or even right next to one another. Of course, it would not be fair to the searcher or the websites if the list were spit out at random. Listing the pages alphabetically would not make much sense either, although that would technically be the simplest form of an algorithm. (One can imagine web designers, vying to appear at the top of the list, coming up with names like AAAA Web Design, AAAAA Web Design, AAAAAA, ad infinitum!) Considering how much information there is on the Internet on virtually any topic, the best outcome for everyone involved is for search engines to return the most relevant sites at the top and the least relevant at the bottom. This is basically what algorithms do.
A search engine algorithm takes the phrase you enter and “tests” all of the pages in its index against a very long (and closely guarded) series of rules that “rank” them according to relevancy. In the case of the search phrase “Web Designer,” the page that appears in the number 1 position is supposedly the most relevant, and the one in the 5 millionth position is supposedly the least relevant.
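To make the idea concrete, here is a deliberately simplified sketch in Python. It is not any real engine’s algorithm; the scoring rule (counting how often the query words appear on a page) and all of the names and sample pages are illustrative assumptions. The point is only the shape of the process: score every page against the query, then sort by score.

```python
def relevance_score(page_text, query):
    """Toy scoring rule (an assumption, not a real engine's):
    count how often each query word appears in the page text."""
    words = page_text.lower().split()
    return sum(words.count(term) for term in query.lower().split())

def rank(pages, query):
    """Return pages ordered from highest score (position 1)
    to lowest score (the last position in the results)."""
    return sorted(pages, key=lambda p: relevance_score(p, query), reverse=True)

# Hypothetical index of three pages:
pages = [
    "Acme Plumbing - pipes and fittings",
    "Jane Doe, freelance web designer and web developer",
    "Web designer portfolio: a web designer in Springfield",
]

ranked = rank(pages, "web designer")
# The portfolio page mentions both query words most often,
# so it lands in the number 1 position; the plumbing page,
# which mentions neither word, lands last.
```

A real algorithm weighs thousands of signals beyond word counts, but the same score-then-sort structure is what turns 5 million matching pages into an ordered list.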
The word “supposedly” is used to highlight a basic fact: relevancy is a human concept, one that is highly subjective. What is most relevant to one person will probably not be the most relevant to another. There are many immeasurable factors that go into it, and it is physically impossible for any computer, however powerful, to know them all. Nonetheless, those who have been writing search engine algorithms over the past several years have learned thousands of little tricks that help search engines make “educated guesses” at which pages might be the most useful. The algorithms are constantly being updated in such a way that, hopefully, the results are becoming increasingly accurate.
There's just one catch. Just as search engines are always learning new tricks, those who want to “beat the system” are learning them as well. Some might remember the days when one would type in a phrase such as “Web Designer” and get a completely unrelated page trying to sell an entirely different service. The explanation could be found by opening the page: at the top was a long list of assorted search phrases, or perhaps just “web designer web designer web designer web designer...” and so on. It is much less likely that search engines will be fooled by this today. Why? Every time someone learns a new way to cheat the search engines, the algorithm writers eventually discover it. (In fact, the better the trick works, the sooner it is found out!) Once they do, they more often than not add code to the algorithm that catches the trick. Far from being ranked high, pages that are “caught” using underhanded tricks are now heavily penalized, or even banned from the index.
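The keyword-stuffing trick described above can be caught with a simple density check. The sketch below is a toy illustration under assumed thresholds (the 30% cutoff and every name here are hypothetical); real detection is far more sophisticated, but the principle is the same: a page where one phrase makes up an implausible share of the text looks like stuffing, not writing.

```python
def keyword_density(page_text, term):
    """Fraction of the page's words that are exactly this term."""
    words = page_text.lower().split()
    if not words:
        return 0.0
    return words.count(term.lower()) / len(words)

def is_stuffed(page_text, term, max_density=0.3):
    """Flag a page when one term exceeds a (hypothetical) 30%
    share of all words -- grounds for a penalty, not a boost."""
    return keyword_density(page_text, term) > max_density

# The stuffed page from the example: "web designer" repeated over and over.
spam = "web designer " * 20
# An ordinary page that happens to mention the term once.
honest = "Jane Doe builds websites as a freelance web designer in Ohio"

# is_stuffed(spam, "designer") flags the repeated page;
# is_stuffed(honest, "designer") does not.
```

Once a check like this exists in the algorithm, the trick stops working entirely: the repetition that once pushed a page to the top now marks it for a penalty.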
While it is true that the algorithms are constantly changing, some basic principles underlie them all. Rather than teaching ways to “fool” the algorithms, Search-Engine-Site.com attempts to explain these larger patterns, which will keep your site ranking high in the long run, not just the immediate future. Most of this has nothing to do with “secrets,” but with the hard work involved in creating a site that truly is relevant and has therefore earned a strong reputation among a large network of informative sites. In other words, we do not teach how to fight against the search engines for short-term gain. Instead, we attempt to explain the genuinely well-intentioned project that underlies search engine algorithms: giving helpful answers to the millions of questions asked by people all over the world, every single day!