The Lucky Thirteen - The Critical SEO Checklist

When it comes to SEO, not all of us have the time to become experts; after all, the real "gurus" of SEO, as of most topics, are the people with a whole lot of time on their hands. This list, put together with the everyday webmaster in mind, drives home some absolutely crucial points that you should keep in mind when optimizing your pages for valuable search rankings.

1. Check Search Engine Crawl Error Pages

It's important to monitor search engine crawl error reports to keep on top of how your site and its pages are performing. These reports can help you determine when and where Googlebot or another crawler is having trouble indexing your content, which in turn helps you track down a solution to the problem.


2. Create/update robots.txt and sitemap files

These files are supported by the major search engines and are incredibly useful tools for ensuring that crawlers index your important site content while skipping the sections or files that you deem unimportant or that cause problems in the crawl process. In many cases we've seen the proper use of these files make all the difference between a total crawl failure for a site and a full index of its content pages, which makes them crucial from an SEO standpoint. A minimal example follows.
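As a rough sketch (the disallowed paths and the sitemap URL are placeholders, not recommendations for any particular site), a basic robots.txt might look like this:

    # Applies to all crawlers
    User-agent: *
    # Keep crawlers out of sections that shouldn't be indexed
    Disallow: /cgi-bin/
    Disallow: /tmp/
    # Point crawlers at the sitemap file
    Sitemap: http://www.example.com/sitemap.xml

The sitemap file itself is simply an XML listing of the URLs you want indexed; Google and the other major engines document the format.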

3. Check Googlebot activity reports

These reports allow you to monitor how long it's taking Googlebot to access your pages. This information can be very important if you are worried that you may be on a slow network or experiencing web server problems. If it is taking search engine crawlers a long time to fetch your pages, there may be times when they "time out" and stop trying. Additionally, if the crawlers are unable to pull your pages up quickly, there is a good chance users are experiencing the same lag in load times, and we all know how impatient internet users can be.

4. Check how your site looks to browsers without image and JavaScript support

One of the best ways to determine just what your site looks like to a search engine crawler is to view your pages in a browser with image and JavaScript support disabled. Mozilla's Firefox browser has a plug-in available called the "Web Developer Toolbar" that adds this functionality and a lot more to the popular standards-compliant browser. If, after turning off image and JavaScript support, you aren't able to make sense of your pages at all, it is a good sign that your site is not well-optimized for search. While images and JavaScript can add a lot to the user experience, they should always be viewed as a "luxury" - simply an improvement upon an already-solid textual content base.
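A quick way to stay on the safe side is to make sure anything essential has a plain-text fallback. A hypothetical snippet (the file names and link are invented for illustration):

    <!-- The ALT text is what a crawler, or an image-less browser, will see -->
    <img src="sale-banner.png" alt="Spring sale: 20 percent off all widgets">

    <!-- Script-driven navigation with a plain HTML fallback for non-JavaScript visitors -->
    <script src="fancy-menu.js"></script>
    <noscript>
      <a href="/products/">Browse our product catalog</a>
    </noscript>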

5. Ensure that all navigation is in HTML, not images

One of the most common mistakes in web design is to use images for site navigation. For some companies and webmasters SEO is not a concern, so they can get away with this; for anyone worried about having well-optimized pages, though, image-based navigation should be the first thing to go. Not only does it render your site navigation basically valueless to search engine crawlers, but very similar visual effects can usually be achieved with CSS roll-overs that maintain the aesthetic impact while still providing valuable and relevant link text to search engines. A simple sketch follows.
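A minimal sketch of text-based navigation with a CSS roll-over effect (the class name, colors, and URLs are made up for illustration):

    <ul class="nav">
      <li><a href="/products/">Products</a></li>
      <li><a href="/about/">About Us</a></li>
    </ul>

    <style>
      /* Plain text links that crawlers can read... */
      .nav a { color: #333; text-decoration: none; }
      /* ...with a roll-over effect for human visitors */
      .nav a:hover { color: #fff; background-color: #333; }
    </style>

The links stay ordinary HTML anchors with real anchor text, so nothing is hidden from the crawler.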

6. Check that all images include ALT text

Failing to include descriptive ALT text with your images means missing out on yet another place to optimize your pages. Not only is ALT text important for accessibility for vision-impaired users, but search engines simply can't "take a look" at your images and decipher the content there. They can only see your ALT text, if you've provided it, and any association they make between the image and your relevant content will be based exclusively on this attribute.
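For example (the image and its description are invented):

    <!-- Without ALT text this image contributes nothing a crawler can index -->
    <img src="blue-widget.jpg" alt="Blue widget with stainless steel housing">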

7. Use Flash content sparingly

Several years ago Flash hit the scene and spread like wildfire. It was neat looking, quick to download, and brought interactivity and animation on the web to a new height. However, from an SEO standpoint, Flash files might as well be spacer GIFs - they're empty. Search engines are not able to index the text and content within a Flash file. For this reason, while Flash can do a lot for presentation, from an accessibility and SEO standpoint it should be used very sparingly and only for non-crucial content.
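If you do use Flash for a non-essential flourish, one reasonable habit is to place an HTML alternative inside the embedding element so there is still indexable text for crawlers and non-Flash visitors. A rough sketch with placeholder file names and copy:

    <object type="application/x-shockwave-flash" data="intro.swf" width="400" height="300">
      <!-- Shown to crawlers and to visitors without the Flash plug-in -->
      <p>Our spring collection has arrived - <a href="/collection/">browse the full catalog</a>.</p>
    </object>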

8. Ensure that each page has a unique title and meta description tag

Optimization of title tags is one of the most important on-page SEO points. Many webmasters are apparently unaware of this and either use duplicate tags across multiple pages or do not target search traffic at all within this valuable tag. Run a search on a competitive keyword of your choice on Google, click on the first few links that show up, and see what text appears in the title bar of the window. You should see right away that this is a key place to include target keywords for your pages.
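As an illustration (the page, company, and copy are invented), each page's head section should carry its own title and description:

    <head>
      <!-- A unique, keyword-targeted title for this specific page -->
      <title>Blue Widgets - Acme Widget Co.</title>
      <!-- A unique description that search engines may show in results -->
      <meta name="description" content="Hand-built blue widgets with free shipping and a two-year warranty.">
    </head>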

9. Make sure that important page elements are HTML

The simple fact to keep in mind when optimizing a page is that the crawlers are basically only looking at your source code. Anything you've put together in a Flash movie, an image or any other multimedia component is likely to be invisible to search engines. With that in mind it should be clear that the most important elements of your page, where the heart of your content will lie, should be presented in clean, standards-compliant and optimized HTML source code.

10. Be sure to target keywords in your page content

Some webmasters publish their pages in hopes that they will rank well for competitive keywords within their topic or niche. However, this will simply never happen unless you include your target keywords in the page content. This means creating well-optimized content that mentions these keywords frequently without triggering spam filters. Any way you cut it, you're going to need to do some writing - and if you don't like doing it yourself, it's a good idea to hire a professional copywriter. Simply put: without relevant content that mentions your target keywords, you will not rank well.

11. Don't use frames

There is still some debate as to whether frames are absolutely horrible for SEO or simply not the best choice. Is there really a difference? Either way, you probably don't want to use frames. For one thing, crawlers can have trouble getting through to your content and effectively indexing individual pages. For another, most of the functionality that frames provide is easily duplicated with proper CSS. There are still some uses for a frames-based layout, but it is better to avoid it if at all possible; a quick sketch of a CSS alternative follows.
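For instance, the "pinned navigation" effect that frames are often used for can be approximated with CSS positioning. A rough sketch (the ids, widths, and content are placeholders):

    <div id="sidebar" style="position: fixed; top: 0; left: 0; width: 180px;">
      <a href="/products/">Products</a>
    </div>
    <div id="content" style="margin-left: 200px;">
      <p>The main content lives in the same document, so it is crawled and indexed as one normal page.</p>
    </div>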

12. Make sure that your server is returning a 404 error code for pages that can't be found

We've all seen it: we're browsing around a new or familiar site, clicking links and reading content, when we hit the infamous "404 Page Not Found" screen. While broken links that point to these pages should definitely be avoided, you also don't want to replace this response with a "custom error page" that is served with a normal success status code. Why? It's simple: if your error page comes back looking like any other page, crawlers can spend time following broken links with no way of knowing they are broken. A true 404 response is easily recognizable, and search engine crawlers are programmed to stop following links that return it. If crawlers end up in a section of your site that is down, through an old link that you missed, they might not spend the time to index the rest of your site.
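If your site runs on Apache, one common way to keep a friendly error page while still returning the correct status code is an ErrorDocument directive in .htaccess (the page name here is a placeholder):

    # Serve /not-found.html for missing URLs, with a genuine 404 status code
    ErrorDocument 404 /not-found.html

It's worth requesting a nonsense URL on your own site and checking the response headers to confirm that a 404 code, and not a "200 OK", actually comes back.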

13. Ensure that crawlers will not fall into infinite loops

Many webmasters see fit to use scripting languages such as Perl, PHP, and ASP to add interactive functionality to their web pages. Whether for a calendar system, a forum, or e-commerce functionality for an online store, scripting is used quite frequently on the web. However, what some webmasters don't realize is that unless they use robots.txt files or take other preventative measures, search engine crawlers can fall into what are called "infinite loops" in their pages.

Imagine, if you will, a script that allows a webmaster to add a calendar to one of his pages. Any programmer worth his salt would base this script on calculations - it would auto-generate each page from the previous month and a formula for how the days and dates fall. Depending on its sophistication, that script could plausibly extend infinitely into the past or future. Now think of the way a crawler works: it follows links, indexes what it finds, and follows more links. What's to stop a crawler from clicking "next month" in a calendar script an infinite number of times? Nothing - well, almost nothing. Crawlers are well-built programs that need to run efficiently, so they are designed to recognize when they've run into an "infinite loop" situation like this, and they will simply stop indexing pages at a site that is flagged for this error. A simple preventative measure is shown below.
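Assuming, purely for illustration, that the calendar lives at /calendar.php, a robots.txt rule like the following keeps crawlers out of the loop entirely:

    User-agent: *
    # Keep crawlers away from the auto-generated calendar pages
    Disallow: /calendar.php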
