Friday, April 13, 2007

SEO Case Study

Recently I took on a new client to assist with some search engine optimization. The site was receiving ZERO search engine traffic. For a specialized business, such as an industrial and insurance job recruiting company, search engine traffic can be very beneficial because it is so highly targeted.
After taking a look at the web site, I noticed that although it looked nice and was easy to navigate, it had some fundamental flaws that were preventing it from ranking well in the SERPs (search engine results pages). Please note, I did not design this web site - I am only recoding parts of it so it will rank better for particular search queries.
In this case study, I will outline the SEO flaws of the web site and present the solutions to these problems. On a side note, it makes no sense to me why a web developer would create a web site only to have it be completely ignored by the search engines.
If you have any basic understanding of the benefits of SEO, it is relatively easy to design a well-optimized site from the start by using proper tags and integrating the keywords into the page copy. However, quite often lately I come across existing corporate web sites that are so terribly optimized for search engines that, on top of major design and usability flaws, they are listed in the supplemental index and do not even rank on the first page for a search query of the company name!
Current Status of Search Engine Rankings
As of a few days ago, the web site of the company, which specializes in job recruitment for the industrial and insurance industries, was not even present in Google’s main search listings - it was stuck in the supplemental index.
For those of you who don’t know what the supplemental index is, it is basically Google’s trash can: an index for pages Google has deemed not search-worthy. An SEO’s worst fear is having his or her site land in the supplemental index. Luckily, if you understand the basic principles of SEO, you never have to worry about this.
What are the implications of being listed in the supplemental index? Well, a few days ago, if you typed the company name into Google, the web site was not even listed on the first two pages of results. My goal is to get the site into the top 3 for that query, as well as for several other keyword phrases.
It is no surprise, however, that the site was listed in the supplemental index, because based on its source code, Google probably had no clue what the web site was about. Here are the problems, along with the solutions for better optimizing that web site (or any web site, for that matter) for search engine relevance:

Problems with the Original Page Design


No text-based copy on the home page. All of the page copy was image-based. Unless you write ALT tags for the images, search engines have no way to read text embedded within an image. Besides, even ALT tags do not carry as much weight as actual page copy. (I sketch this fix, together with the next one, in the first example after this list.)
Image-based navigation needed to be converted to text-based navigation. Again, search engines have no way to read image-based text. Not only that, but from a usability perspective, image-based navigation does not scale when someone enlarges the font size in a web browser.
Poor use of title and header tags. These are key HTML elements that search engines use to figure out the relevance of a web site. On this web site, no header tags were used, and the titles contained no keywords. The home page title even contained the word “welcome” - a wasted word as far as the SERPs are concerned. On a side note, I recently wrote an article about why the page title is so important. (The second example after this list shows an optimized title, header, and META setup.)
Non-existent META keywords and META description tags. This is another no-no if you want a page to rank well in the SERPs. Although the META keywords tag is not used by Google, it is still used by some other search engines. Google does, however, take the META description tag into account when it displays the SERPs for a query, so you want to make sure you incorporate your keywords into your META description tag.
No backlinks. A web site needs a good number of backlinks before Google or any other search engine will trust it as a credible source of information. This can be fixed by adding the web site to free and trusted directories such as DMOZ or the Yahoo Directory.
Keywords not present in any page copy. Of course, the first thing to do is figure out which keywords you want the site to rank for, but after that, if those keywords are not present in the page copy, you will have a tough, if not impossible, time ranking for them.
No sitemap. A sitemap helps Google and other search engines figure out which pages to crawl on a site. Every site needs one. I wrote a post a while back about why a sitemap is your blog’s best friend. (A minimal example appears at the end of this list.)
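
To make the first two fixes concrete, here is a rough before-and-after sketch of the markup. The company name, file names, and link labels are placeholders I made up for illustration - they are not the client’s actual ones.

Before - the copy and navigation were locked inside images:

    <img src="home-copy.gif">
    <a href="jobs.html"><img src="nav-jobs.gif"></a>

After - real text that a search engine can read, with an ALT tag on the one image that remains:

    <!-- hypothetical markup: "Acme Recruiting" and the file names are placeholders -->
    <img src="logo.gif" alt="Acme Recruiting - industrial and insurance job recruitment">
    <p>We place qualified candidates in industrial and insurance jobs nationwide.</p>
    <ul>
      <li><a href="jobs.html">Industrial Jobs</a></li>
      <li><a href="insurance.html">Insurance Jobs</a></li>
    </ul>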
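
The title, header, and META fixes look roughly like this when combined. Again, the company name and keyword phrases below are placeholders, not the client’s actual ones:

    <head>
      <!-- hypothetical example: company name and keyword phrases are placeholders -->
      <title>Industrial and Insurance Job Recruiting | Acme Recruiting</title>
      <meta name="description" content="Acme Recruiting specializes in job recruitment for the industrial and insurance industries.">
      <meta name="keywords" content="industrial recruiting, insurance recruiting, insurance jobs">
    </head>
    <body>
      <h1>Industrial and Insurance Job Recruiting</h1>
    </body>

Notice that the keyword phrase shows up in the title, the header tag, and the META description - the three on-page spots I covered above.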
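
Finally, here is what a minimal sitemap looks like in the sitemaps.org XML format, which Google, Yahoo!, and MSN all support. The domain and page names are placeholders:

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <!-- example.com is a placeholder domain; list every page you want crawled -->
      <url>
        <loc>http://www.example.com/</loc>
      </url>
      <url>
        <loc>http://www.example.com/jobs.html</loc>
      </url>
    </urlset>

Save it as sitemap.xml in the site’s root directory and submit it to Google through Webmaster Tools.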
These were just a few of the issues that were causing the site to not rank well in the SERPs. I have already fixed most of those major problems, with the exception of the header tags.
I expect that within the next couple of weeks, as I add the site to more targeted directories and further refine the page copy to contain the desired keywords, the site will receive even more search engine traffic and start to rank very well for those keywords.
I will keep you updated over the next few weeks as the rankings for the company continue to improve.
If you are interested in learning more about SEO, the best place to start is by reading Aaron Wall’s SEO Book. Aaron Wall is basically known as the Michael Jordan of SEO, and his eBook is truly one of the few ebooks worth reading. SEO really is not that difficult, and every web developer should take it into consideration when developing new web sites.
