A Brief History of SEO

Anyone who used the internet in the 1990s knows how much search engines have evolved over the past two decades. The term “Search Engine Optimization” (or SEO) is believed to have been coined by John Audette of the Multimedia Marketing Group in 1997, and it has been big business ever since. In the early days of the internet, webmasters simply submitted their site’s URL to a search engine, and the engine would send a spider to catalogue the links on the page for indexing. Search engines relied primarily on keyword density and meta tags, allowing webmasters to improve their rankings by “keyword stuffing”, or using the same words over and over again in a page’s text. This, of course, led to websites with poor content skyrocketing to the top of search results.

Search engines had to improve their methodology in order to retain their users. BackRub was one of the first search engines to implement a more sophisticated mathematical approach, one that took into account the number of inbound links to a website; its creators, Larry Page and Sergey Brin, dubbed the resulting score “PageRank”. In 1998, BackRub became Google, which still uses Page and Brin’s PageRank algorithm today.
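
To make the idea concrete, here is a minimal, hypothetical Python sketch of the simplified PageRank calculation found in textbooks (not Google’s actual implementation): every page starts with an equal share of rank and then repeatedly passes its rank along its outbound links, moderated by a damping factor.

```python
# Simplified PageRank sketch: a page's rank is the damped sum of the ranks of
# the pages linking to it, each divided by that page's number of outbound links.
def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links out to."""
    pages = list(links)
    n = len(pages)
    ranks = {page: 1.0 / n for page in pages}

    for _ in range(iterations):
        new_ranks = {}
        for page in pages:
            # Contributions from every page that links to this one.
            incoming = sum(
                ranks[other] / len(links[other])
                for other in pages
                if page in links[other]
            )
            new_ranks[page] = (1 - damping) / n + damping * incoming
        ranks = new_ranks
    return ranks

# Toy example: pages A and C both link to B, and B links back to A,
# so B ends up with the highest rank.
print(pagerank({"A": ["B"], "B": ["A"], "C": ["B"]}))
```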

Not to be outdone, webmasters soon found ways to manipulate PageRank, giving rise to an industry of buying and selling links in order to improve search rankings. The term “link farm” became the pejorative name for websites that did nothing but host links.

By 2005, Google had begun tracking individual users’ search histories and locations to personalize search results. Its spiders also began looking for the “nofollow” HTML attribute, which instructs the engine not to take a link into account when determining PageRank. Originally, nofollow was meant to prevent spamming on blogs that allowed any user to post text, though webmasters soon saw the benefit of tagging all outbound links as nofollow in order to keep their competitors’ PageRank down.
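
As an illustration, the snippet below is a minimal sketch (assuming Python with the BeautifulSoup library, and a hypothetical add_nofollow helper) of how a webmaster might tag every outbound link as nofollow:

```python
# Sketch: add rel="nofollow" to every external link in a page's HTML so that
# search engines ignore those links when computing PageRank.
from bs4 import BeautifulSoup

def add_nofollow(html, own_domain):
    soup = BeautifulSoup(html, "html.parser")
    for link in soup.find_all("a", href=True):
        # Only external links are tagged (a crude substring check, fine for
        # illustration); internal links keep passing PageRank.
        if own_domain not in link["href"]:
            link["rel"] = ["nofollow"]
    return str(soup)

page = '<p>See <a href="https://example.com/partner">our partner</a>.</p>'
print(add_nofollow(page, "mysite.com"))
# -> <p>See <a href="https://example.com/partner" rel="nofollow">our partner</a>.</p>
```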

In 2010, Google completed its Caffeine indexing update, designed to deliver fresher results so that recently published or updated content (such as news articles) could surface more quickly. Over the past two years, Google has also begun to seriously crack down on webmasters who try to manipulate its algorithms, with the Panda update in 2011 and the Penguin update in 2012. These measures penalize websites that duplicate content from other webpages or use other unfair tactics, such as stuffing keywords into hidden text. Today, major search engines also employ human quality control through companies like Leapforce, whose agents rate webpages based on their relevance to given keywords, the quality and timeliness of their content, and their adherence to search engine guidelines.

It seems that every time search engines improve their algorithms, the SEO industry grows even stronger. This is probably a good thing, as websites are now being judged more on the quality of their content than on how well they can “trick” search engines. More and more webmasters are relying on outside help to ensure that their sites rank highly without violating any guidelines that could get them banned altogether.