
Defend Website from Google Duplicate Proxy Exploit

How to Defend Your Website from the Google Duplicate Proxy Exploit?
By Sophie White (c) 2007

There is a current and active way to knock a website out of Google's search engine results. It's simple and effective. This information is already in the public domain, and the more people who know about it, the more likely it is that Google will do something about it. This article will tell you how it works, how a website gets knocked out of the search engine rankings, and most importantly, how to defend your own website from having it happen to you.

To understand this exploit, you must first understand Google's Duplicate Content filter. Simply described: Google doesn't want you to search for "blue widget" and have the top 10 results be copies of the same article on how great blue widgets are. They want to give you ONE copy of the Great Blue Widget article and 9 other, different results, just on the off chance that you've already read that article and the other results are actually what you wanted.


To handle this, every time Google spiders and indexes a page, it checks to see whether it already has a page that is predominantly the same, a duplicate page if you will. Exactly how Google works this out, nobody knows, but it is going to be a combination of some or all of: page text length, page title, headings, keyword densities, checking for exactly copied sentence fragments, etc. As a result of this duplicate content filter, a whole industry has grown up around trying to get round the filter. Just search for "spin article".

Getting back to the story: Google indexes a page and, let's say, it fails its duplicate content check. What does Google do? These days, it dumps that duplicate page in Google's Supplemental Index. What, you didn't know that Google has 2 indexes? Well they do: the main one, and a supplemental one. Two things are important here: Google will always return results from its main index if it can, and it will only go to the supplemental index if it doesn't get enough joy from the main one. What this means is that if your page is in the supplemental index, it's almost certain that you will never show up in the Search Engine Results Pages, unless there is next to no competition for the phrase that was searched for.

This all seems pretty reasonable to me, so what's the problem? Well, there's another little step I haven't mentioned yet. What happens if someone copies your page, let's say the homepage of your business website, and when Google indexes that copy, it correctly determines that it's a duplicate? Google now knows about 2 pages that are duplicates, and it has to decide which to dump in the supplemental index and which to keep in the main one. That's pretty obvious, right? But how does Google know which is the original and which is the copy? They don't. Sure, they have some clever algorithms to work it out, but even if those are 99% accurate, that leaves a lot of problems for the 1% of the time they get it wrong!

And this is the heart of the exploit: if someone copies, say, your website's homepage and manages to convince Google that *their* page is the original, your homepage will get tossed into the supplemental index, never to see the light of day in the Search Engine Results Pages again. In case I'm not being clear enough, that's bad! But wait, it gets worse:

It's fair to say that in the case of a person physically copying your page and hosting it, you can often get them to take it down through the use of copyright lawyers and cease-and-desist letters to ISPs and the like, followed by a quick "Reinclusion Request" to Google. But recently there's a new threat that's a whole lot harder to stop: the use of publicly accessible proxy websites. (If you don't know what a proxy is, it's basically a way of making the web run faster by caching content closer to the people requesting it. In principle, proxies are generally a good thing.)

There are many such web proxies out there, and I won't list any here, but I will describe the process: they send out spiders (much like Google's) that spider your page and take your content, and then they host a copy of your website on their proxy site, nominally so that when their users request your page, they can serve up their local copy quickly rather than having to retrieve it from your server. The big issue is that Google can sometimes decide that the proxy copy of your web page is the original, and yours is not.

Worse again, there's some evidence that people are deliberately and maliciously using proxy servers to cache copies of web pages, then using normal (white and black hat) Search Engine Optimization (SEO) techniques to make those proxy pages rank in the search engines, increasing the likelihood that your legitimate page will be the one dumped by the duplicate content filters. Danger Will Robinson!
Even worse still, some of the proxy spiders actively spoof their origins so that you don't realise they come from a proxy, pretending to be Googlebot, for example, or a Yahoo spider. This is why the major search engines actively publish guidelines on how to identify and validate their own spiders.

Now for the big question: how can you defend against this? There are several possible solutions, depending on your web hosting technology and technical competence:

Option 1 - If you are running Apache and PHP on your server, you can set the webhost up to check for search engine spiders that purport to be from the main search engines, and, using PHP and the .htaccess file, you can block proxies from other sources (a rough sketch follows below). However, this only works for proxies that are playing by the rules and identifying themselves correctly.
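
As a rough sketch of Option 1: the user-agent substrings below are illustrative placeholders, not a real blocklist; you would build your own from your server logs. (The same test can also be expressed as RewriteCond rules in .htaccess.)

<?php
// Turn away self-identified proxy spiders before serving the page.
// The substrings in this list are hypothetical examples only.
$blocked_agents = array('someproxy', 'webcachebot');
$ua = strtolower(isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '');
foreach ($blocked_agents as $fragment) {
    if (strpos($ua, $fragment) !== false) {
        header('HTTP/1.1 403 Forbidden');
        exit;   // refuse to serve the page to a known proxy spider
    }
}
?>
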
Option 2 - If you are using MS Windows and IIS on your server, or if you are on a shared hosting solution that doesn't give you the ability to do anything clever, it's an awful lot harder and you should take the advice of a professional on how to defend yourself from this kind of attack.

Option 3 - This is currently the best solution available, and applies if you are running a PHP or ASP based website: you set ALL pages' robots meta tags to noindex and nofollow, then you implement a PHP or ASP script on each page that checks for valid spiders from the major search engines and, if one is found, resets the robots meta tag to index and follow (see the sketch below). The important distinction here is that it's easier to validate a real spider, and to discount a spider that's trying to spoof you, because the major search engines publish processes and procedures to do this, including IP lookups and the like.
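
A minimal PHP sketch of Option 3. Google's published verification procedure is a reverse DNS lookup followed by a matching forward lookup; checks for the other major engines work the same way against their own published host names.

<?php
// Validate that a visitor claiming to be Googlebot really is one.
function is_valid_googlebot($ip) {
    $host = gethostbyaddr($ip);                       // reverse DNS lookup
    if (!preg_match('/\.google(bot)?\.com$/i', $host)) {
        return false;                                 // not a Google host name
    }
    return gethostbyname($host) === $ip;              // forward lookup must match
}

$ua = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';

// Default: keep any copy of this page out of the index...
$robots = 'noindex,nofollow';

// ...unless the visitor claims to be Googlebot AND validates as one.
if (stripos($ua, 'Googlebot') !== false
        && is_valid_googlebot($_SERVER['REMOTE_ADDR'])) {
    $robots = 'index,follow';
}
?>
<meta name="robots" content="<?php echo $robots; ?>" />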

So, stay aware, stay knowledgeable, and stay protected. And if you see that you've suddenly been dumped from the Search Engine Results Pages, now you might know why, how, and what to do about it.

About The Author:
Sophie White is an Internet Marketing and Website Promotion Consultant at Intrinsic Marketing, an SEO and Pay-Per-Click firm dedicated to supplying better website ROI.

How to Build a Better Website?

How to Build a Better Website Without Building a Website?
By Richard D S Hill (c) 2007

The most important thing to consider, when first thinking about any website, is the user. Like so much marketing, websites are, unfortunately, too often developed 'inside out' (company focused) rather than 'outside in' (customer focused).
All website users have their own reasons and objectives for visiting a site. No matter how targeted, any website has to communicate with a wide range of individual users.


To be successful, therefore, every site has to give each and every user a thorough but simple presentation of the site's content, so that the site achieves your objectives, e.g. registrations, leads, sales.

To do this successfully, you need to give users what they want:


Simple Navigation

Navigation that is clear and consistent.
Probably the worst issue is 'lost visitors' – those who are in a maze and don't know where they are in the site.
The site should always allow users to easily return to the home page, and preferably to get to any page with one click.
Studies have shown that users want to find things fast, and this means that they prefer menus with intuitive ranking, organization and multiple choices over many layers of simplified menus. The menu links should be placed in a consistent position on every page.


Clarity
Users do not appreciate an over-designed site.
A website should be consistent and predictable. For maximum clarity, your site design should be built on a consistent pattern of modular units that all share the same basic layout, graphics etc.

Designing Websites That Meet Their Objectives
Everything above is pretty simple, but how do you ensure that you can achieve it?
The answer is website architecture – an approach to the design and content that brings together not just design and hosting but all aspects of function, design, technical solutions and, most importantly, usability.


The distinction may seem academic, but imagine trying to publish a magazine using just graphic design and printing whilst ignoring content and editing. It just would not work, yet that's what too many people still try to do.

Website Architecture
Defining a website using web architecture requires:
->Site maps
->Flow charts
->Wireframes
->Storyboards
->Templates
->Style guide
->Prototypes

This planning saves you (the client) money. The better the site map, flow chart, wireframes, storyboards, templates, style guide and prototype, the more time and money you save, because together they give the designer who has to do the graphics and the developer who has to do the programming a blueprint.
We are constantly amazed that people who wouldn't think about building a house, car, ship or whatever will still build a website without an architectural plan.

The benefits include:
->Meeting business goals
->Improved usability
->Reducing unnecessary features
->Faster delivery

Site Maps
Many people are familiar with site maps on web sites, which are generally a cluster of links.
An architectural site map is more of a visual model (blueprint) of the pages of a web site.


The representation helps everyone to understand what the site is about and the links required as well as the different page templates that will be needed.

Flow Charts
A flowchart is another pictorial or visual representation to help visualize the content and find flaws in the process from, say, merchandise selection to final payment.
It's a pictorial summary that shows with symbols and words the steps, sequence, and relationship of the various operations involved and how they are linked so that the flow of visitors and information through the site is optimized.

Wireframes
Wireframes take their name from the skeletal wire structures that underlie a sculpture. Without this foundation, there is no support for the fleshing-out that creates the finished piece.
Wireframes are a basic visual guide to suggest the layout and placement of fundamental design elements on any page. A wireframe shows every click-through possibility on your site. It's a "text only" model that allows variations to be developed before any expensive graphic design and programming, and one that also helps to maintain design consistency throughout the site.


Creating wireframes allows everyone on the client and developer side to see the site and whether it's 'right' or needs changes without expensive programming. The goal of a wireframe is to ensure your visitors' needs will be met in the website. If you meet their needs, you will meet your objectives.
To create a wireframe requires dialogue. You and your developers talk, to translate your business successfully into a website. Nobody knows your business better than you, and your developers should listen to ensure the resulting wireframe accurately represents your business. You, however, must answer the questions; questions such as:
->What does a visitor do at this point?
->Where can a visitor go from here?
and ignore questions about what your visitor sees at this point. Sounds easy, but it isn't!


Storyboards
Storyboards were first used by Walt Disney to produce cartoons. A storyboard is a "comic" produced to help everyone visualize the scenes and find potential problems before they occur. When creating a film, a storyboard provides a visual layout of events as they are to be seen through the camera. In the case of a website, it is the layout and sequence in which the user or viewer sees the content or information.
Here, the wireframe provides the outline for your storyboard. Developers and designers don't need to work in a vacuum - the wireframe guides every design, information architecture, navigation, usability and content consideration. Wireframes define "what is there" while storyboards define "how it looks".

Templates and Style Guide
Templates are standard layouts containing the basic details of a page type. They separate the business logic (follow the $) from the presentation logic (graphics etc.) so that there can be maximum flexibility in presentation while disrupting the underlying business infrastructure as little as possible.
Style guides document the design requirements for a site. They define font classes and other design conventions (line spacing, font sizes, underlining, bullet types etc.) to be followed in the Cascading Style Sheets (CSS), providing a library of styles used across the various page types in a web site.

Prototypes
A prototype is a working model that is not yet finished. It demonstrates the major technical, design, and content features of the site.
A prototype does not have the same testing and documentation as the final product, but allows client and developers to make sure, once again, that the final product works in the way that is wanted and meets the business objectives.
Once you have built your virtual site, it's a lot quicker, easier and cheaper to build the real one.


About The Author:
Richard Hill is a director of E-CRM Solutions and has spent many years in senior direct and interactive marketing roles. E-CRM provides EBusiness, ECommerce, EMarketing and ECRM services. http://www.e-crm.co.uk/profile/message170807.html

Best Page Layout & Design for Websites

The Best Page Layout and Design for Content Websites
By Miles Galliford (c) 2007

I was chatting to a veteran print publisher who had been producing magazines for over thirty years.

He shook his head in despair, as he told me that every year he sees new magazines hit the newsstands with the publications' titles placed vertically on the magazine cover.

"Whenever I see this," he said, "I know it has been produced by a new publishing company that does not understand the industry. Anyone with any experience of periodical publishing knows that publications with vertical titles fail, or at least have to change quickly to survive. The market has taught us this lesson hundreds, if not thousands of times, but still people make the same mistake."

This message is just as relevant to website layout as it is to magazine design. The web has been around for long enough that rules and best practices have emerged from years of trial and error by thousands of website owners. You can either go with the flow and be grateful that you can learn from the experience of others, or you can swim against the tide and try to convince the market that you are right and they are wrong.

I would suggest that swimming downstream is far easier and will give you a much greater chance of success.

To understand which layouts work, you only need to look at the industry gorillas. These are the online content publishers who have been around for years, and who have tested just about every layout combination. Good examples are some of the most read websites on the internet, including:

- BBC

- The Financial Times

- The Economist

- The Wall Street Journal

You will quickly start to recognize elements of the page layouts which are common across all these sites. Just as with print newspapers and magazines, these are the layouts that have proven to sit most comfortably with the reader and with the way online users want to consume content.

The key design and layout elements which should remain constant are:

Masthead across the top - the masthead is where the logo goes, and usually the imagery that supports the subject matter of the website.

Left hand column - this should contain all the primary navigation, which should remain constant across the whole website. It should list all the main categories of the website, so users can find their way around from every page.

Right hand column - on the homepage, this should provide navigation to individual pages of the site which you want to highlight. Or it can be used for small applications, such as email newsletter sign-up, scrolling news headlines, links to the forum, etc. This column tends to disappear on the content pages to leave more space for the article and images.

Top menu bar - some sites have most of their navigation in the top menu bar which runs across the page under the masthead (take a look at the Guardian or Forbes as examples). I don't like this for two reasons. First, it restricts the number of menu links that you can have. Secondly, it usually means that the site has Flash-based drop-down menus to accommodate more links. Flash menus are not user friendly. They force your reader to search for links to the content they are looking for. Don't make your users work for their answers. Also, search engines find it harder to index sites with Flash menus.

Bottom menu bar - This strip at the foot of every page tends to contain links to the site's terms and conditions, privacy statement, sitemap, etc.

Central column - this contains the content. On the homepage, this can be a combination of an introduction to the website and teasers to articles. On the content pages, the articles and images sit in the central column.

Search top right on every page - this is the search box used to search the content of the website. This is a less rigid placement than it used to be, but you can't go wrong if you place it top right.


Time and date - usually placed on the right hand side under the masthead. This is optional, but does give readers the impression that the site is up-to-date.

Within this layout there is a great deal of flexibility to add your own personality and style, particularly when you overlay your design on the basic page structure. However, at all times your number one goal should be constant: to make your website simple and intuitive for every reader that visits. To achieve this, learn from the sites that have a lot of experience.

Don't be the person that puts a vertical title on the front cover!


About The Author
SubHub provides an all-in-one solution to enable you to rapidly design, build and run your own content website. Publish for profit on the web. Website: SubHub

Web 2.0: Are We Bowing To A False Messiah?

Web 2.0: Are We Bowing To A False Messiah?
By Barry Densa (c) 2007

Are you absolutely beside yourself - giddy with delight because Web 2.0 has finally arrived to help you sell more, sell faster, make you richer, smarter, sexier, and lower your triglyceride levels?

Whoops, I'm sorry... do you even know what Web 2.0 is?

Web 2.0, in a nutshell, is the latest evolution in the online experience. The World Wide Web is now... ready... here it is: a tad more interactive, technologically speaking.


Where does Web 1.0 end and Web 2.0 begin? Beats the heck out of me! I'm basically your average techno-phobe - the proverbial anti-Geek, if you will.

Nevertheless, Web 2.0 typically refers to an assortment of internet-based communication tools and services - such as social networking sites, wikis, and "new and improved" chat functionalities.

Writing a book review on Amazon.com is apparently considered Web 1.0 technology.

I know all of this is a big whoop for some of you, but for others it's equivalent to the coming of the messiah (for either the first or second time, depending on which operating system you're laboring under).

Is Web 2.0 a Boon or a Bane for Consumers and Countries?

For most marketers, their company's website has been a rather static billboard of sorts. But now, thanks to Web 2.0, a website can provide visitors, prospects, customers and selected victims, with a certain degree of "give and take".

You can talk to them, they can talk to you; you can learn more about them, they can learn more about you; they can "experience" you, you can "experience them" - in short, the level of communication through a computer screen has been enhanced.

Some, though, fear that Web 2.0 will enable online marketers to become even more intrusive and annoying... or liberating. China, Saudi Arabia and other fundamentalist and ideologically illogical regimes could be in for a big-time headache.

Nevertheless, Web 2.0 will eventually give way to Web 2.5, then Web 3.0 and 4.0 and so on, until ultimately, long after we're all dust, a computer screen will become a real - not virtual - portal into whatever exists on either side of the screen.

Actually, there probably won't be a screen anymore; it'll be more like a turnstile. Yes, the veil will have been lifted. And the tag line, "Reach out and touch someone", will have reached its fullest potential.

Here's the Problem...

Nothing has really changed. Web 2.0 will not sell your product or service for you. Web 2.0 will not negate the importance of salesmanship in print, in video, in audio, or any permutation or combination not yet assembled.

All the "old" requirements and admonitions about how to sell, and sell well, are still in full force.

The Top 10 Steps to Sell Your Product - Even When Using Web 2.0

1. You need to identify a qualified market - those who are ravenously hungry for your product or service. Throwing mud on the wall and praying it will stick won't work - never has, never will.

2. You need a high-quality product or service that will satisfy your market's hunger, or fix their pain. No snake-oil scams permitted.

3. You need to know how to grab your market's attention in a stimulating and compelling way, so they know your product or service exists. Waiting for the telephone to ring is not a marketing strategy.

4. You need to prove your product or service's value, unequivocally detailing at length why and how your product is worth the price asked. Nothing is obvious when it comes to selling.

5. You need to make an irresistible offer. Why must your target market buy your product or service - and buy it now? Not to buy and not to believe is everyone's natural first choice.

6. You need to remove all risk - by offering a solid, confidence-building guarantee. "Trust me" is not a guarantee.

7. You need to anticipate all possible objections, and overcome them. And don't think for a moment there won't be any. There will always be objections and concerns - especially for a first-to-market product or service.

8. You need to ask for the order! Bashfulness and timidity have no place in sales. Ask, and only then shall you receive. Forget this, and you can forget the sale.

9. You need to clearly explain what your prospect must do, step by step, in order to buy, subscribe or inquire. Lead them to your order page.

10. Take nothing for granted.

Web 2.0 is a tool - another road to get you to market. It will not replace salesmanship. It can though make online marketing and sales more effective... if you know what it takes to wrap up a sale in the first place.


About The Author
Barry A. Densa is one of America's top freelance direct response copywriters. Visit WritingWithPersonality.com and see how Barry easily and quickly converts prospects into buyers using "salesmanship in print" - and while there, sign up for his highly regarded Free ezine, Marketing Wit & Wisdom!

10 Truths About Obtaining Better Google Rankings

10 Truths About Obtaining Better Google Rankings
By Kevin Gallagher (c) 2007

Introduction
I have read hundreds of articles telling me how to get better rankings in Google. Some of this advice was very good and some was not. Here you will find 10 truths about getting better rankings in Google that I personally have found to be true after years of research. So let's cut through the fat and get to the lean meat of the subject.

1. The Quick Fix
First the bad news: unfortunately, there are no quick fixes for achieving higher rankings in Google. You have to have a lot of patience in the search engine optimization game. It will take months for your efforts to come to fruition. That's why it's important to get things right from the start and plan out your strategy.


2. Keywords
Keywords are the most important part of search engine optimization. You must do your keyword research before starting your website if you can, because this will form the basis of all your search engine optimization.

There is no point going for broad keywords such as "website design" since there is too much competition for those keywords, and you will find it very difficult, if not impossible, to reach the top spot in Google. You are better off using long tail niche keywords. They will have a smaller search volume, but it will be easier to obtain the top position, and people are more likely to find what they are looking for with long tail keywords. For example, if someone needs a website, they may type "web design" into Google and visit a few websites. They may then discover they also need hosting and a domain name and do another search for "website design hosting and domain name services", and this may be your niche keyword or key phrase.

How do you find keywords that people are searching for? A good free tool can be found at SeoBook or, if you want something more professional, you can use Wordtracker, an excellent service for finding niche keywords. You should try to get at least 10 keyword phrases.

Once you have found your keywords, do a search with them on Google. First of all, look at how many results there are. If it's in the millions, then maybe your keywords are not that good and would be too competitive.

If you can find keywords with about 50,000 results, then you could be onto a winner. You should also check out your competition. Click on the top result for your keyword in the SERPs (search engine results pages) and check out their pagerank. This will give you a rough idea of what you need to achieve to get top placement. Also, you should check how many links they have pointing to their website, as this will give you a rough idea of how many links you will need to get to the top position. To do this, type link:www.thedomain.com into the search box and you will get a list of websites that link to that domain. It's a good idea to do this in the Yahoo search engine, though, because it provides a more extensive list of back links; Google will only show you a percentage of their links, usually those from pages of pagerank 3 or higher.

Remember, these are only rough estimates, because every website is different, and fewer but more relevant links will achieve better results.

3. Title Tag
Google sees the title tag as the most important and relevant part of the webpage it retrieves. This is one of the few things you have any control over in Google's search results. The title tag is the underlined header for your result in the SERPs. It also appears at the top of your browser window. Keep it descriptive and readable, but at the same time include your newly found niche keywords. Google will also highlight the keywords in your title that were included in the search query.

4. Description Tag
The description tag is the description of the webpage, which sits under the title tag in the results. Again, use your keywords here, maybe some of the lesser ones you discovered. This is the only other part of the results you have any control over. Google will also highlight the keywords here that match the search query. Again, remember to keep it descriptive and readable.
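
As a hypothetical illustration (the key phrase and company name are invented, building on the long tail example from the keywords section), a head section carrying both tags might look like this:

<?php
// Illustrative values only - substitute your own niche keywords.
$title       = 'Website Design, Hosting and Domain Name Services | Example Co';
$description = 'Affordable website design with hosting and domain name '
             . 'registration for small businesses.';
?>
<head>
  <title><?php echo htmlspecialchars($title); ?></title>
  <meta name="description" content="<?php echo htmlspecialchars($description); ?>" />
</head>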

5. Domain Names
If you can, try to include your main keywords in your domain name. Google will highlight them when they match the search query. This can give your ranking a little boost because it shows that your website is relevant to the search query.

6. Content
Content is very important. If you have ever-changing, fresh, unique content on your website relating to your topic, Google will love you for it and other websites will link to you. In return, this will increase your rankings, but you should really be doing this anyway. A website with no changing content is a dead website. Your content should contain your keywords, but don't spam your content with them. Use them at the start and end of your webpage and sprinkle them in between. Also use them in your header text, and even bold a few, as this shows Google that these words carry more importance.


7. Pagerank
Why are people so obsessed with that little green bar on the Google tool bar? Well I'm here to tell you that you can stop obsessing about it right now.

The thing about the pagerank bar is that it can be at least 3 months out of date, as Google only updates it in roughly a 3-month cycle. Only Google knows your true pagerank, which changes all the time. Google regularly spiders your website and scans for new content and links to show the most relevant content in its results. The toolbar pagerank is therefore pretty inaccurate.

The other thing people get confused about is that it's called pagerank, not siterank. What I have determined is that your website will get assigned a pagerank figure which is then distributed through your indexed pages. For example, if your website gets a figure of 5, then your home page may get a pagerank of 3 and your other pages a 2, or maybe a 1, and so on. If these other pages also have links pointing to them, this will increase their own individual pagerank.



The only advantage of that green bar that I can see is for exchanging links. You can get a rough idea of what a website's ranking is and you can decide whether or not to exchange links.

8. Linking
One-way links are better than two-way links, but one-way links can be harder to obtain. Why should someone put your link on their website; what's in it for them? You can get one-way links by writing articles like this one and submitting them to article websites, social media websites or your own blog, but remember to add an author's bio which includes some links to your website.

Reciprocal links are easier to come by, but in the early stages, when you don't have a good pagerank, they will be more difficult to obtain. Once your pagerank increases, you can be more selective about the pagerank you exchange with.

Don't forget about the guys starting out when your green bar starts to increase. If they have a website with good quality content, then you should consider linking with them. Remember, we all need to start somewhere, and today's pagerank of 1 is tomorrow's pagerank 5. Try to link with relevant websites, because Google likes this and you will receive quality traffic from these websites for years to come.

Also, I have found a great little tool which checks potential link partners to see if they are linking to bad neighbourhoods. A link exchange with a penalized website could also result in a Google penalty for your site. The tool can be found at:

http://www.bad-neighborhood.com/text-link-tool.htm

Editor's Note: The page at the above URL might not be visible in all web browsers but is visible in Internet Explorer.

9. The Open Directory (DMOZ)
You should always submit your website to DMOZ, since it can take an age to get listed there and Google sometimes uses these results in its organic listings. I recently wrote an article discussing this topic, and some people commented that they haven't submitted to DMOZ and their rankings are fine. This may be true, but one thing you should remember is that lots of directory websites use DMOZ results, which in turn will get you more one-way links.

10. Blogs
Blogs are loved by Google because they have lots of text and are constantly getting updated; so start your own blog on your website. Include articles, stories and anything that's related to your website. If you give people something of interest, they will come back for more and link to you.

That's all for now, take care and good luck! And remember, you only get out of something what you put in to it.


About The Author
Kevin Gallagher is the managing director of Umbrella, a custom website design company in the Scottish Borders providing small business website design, website builder software and affordable company SEO services.

The Impact of Social Media on Search Rankings

The Impact of Social Media on Search Rankings
By Susan Esparza (c) 2007

Over the past few years, the Internet has increasingly become a participatory social network where user-generated content is just as important as traditional advertising messages. This means your articles, blog posts, videos, podcasts, and other comments on the Web are now critical sources of information about your company, your products and services. This phenomenon has given consumers a voice and weakened the power formerly held by advertising media. Social media, therefore, becomes increasingly important to a Web site's success and its visibility in search engines.

Not long ago, search engine optimization focused on fine-tuning your on- and off-page Web site elements in order to achieve better rankings in the search engines. While on-page elements remain the fundamental building blocks of your SEO campaign, they are no longer the entirety of the puzzle. With the rise of social media, it is more important than ever to create and optimize many different types of content in order to dominate the SERPs. The increase in user-generated content, and the implementation of Google search personalization and universal search, have helped bring this about.

Search Personalization

In personalized search, individual user search results are reordered based on their previous search behavior and other indicators. Pages can move up or down based on the influence of a user's Google home page content, bookmarks, search history, Web history, etc. While Google is the only search engine currently adjusting rankings using personalization factors, Yahoo and Ask have variations on this theme with MyWeb and MyStuff.

Google's reasons for initiating search personalization are that it delivers more relevant results and can reduce spam. Others have challenged this rationale, stating that user interests are not static and can vary by season, mood or other factors. It's also difficult to know user intent based on click behavior; sometimes when people click on a link they'll immediately realize it wasn't what they wanted and click away. Queries can also be hit and miss, landing users on non-relevant sites which would then be used in creating non-relevant future results for that user.

Because of personalized search, optimization techniques will change, requiring more intense multivariate analyses in the competitor landscape since the leading competitors will vary as the SERPs vary. This will affect analyses of competitor on-page and off-page factors, especially keyword analysis. However, all the basic optimization tactics remain important. Content, in particular, must do a better job of telling search engines what the page is about, and this will result in better rankings for those able to do so.


Universal Search

With the advent of universal search by Google and others, search marketers and site owners will soon find it necessary to optimize their Web sites for a broad range of content types. This means creating content in every media and vertical niche applicable to your brand. Compelling, useful and widely propagated content will create more search visibility and Web site success.

Fresh content will bring repeat visitors and increase the odds that other users and Web site owners will want to share your content with their visitors, creating more backlinks. For most brands, the benefit of encouraging social networking activities is increased search visibility.

Search engine optimization techniques vary depending on the type of content being optimized. We've written before about optimizing content for Google image search, video search, news, maps and blog search. Two other areas you can optimize content for are podcasts and your Google Base data feeds.

Optimizing Podcasts

To create a podcast, you record an audio file and upload it to the Web. Once it is uploaded, users can download this rich media file and listen to it via an iPod or some other media player.

Up until recently, multimedia search engines relied on metadata to determine relevancy of rich media files. However, this was insufficient for finding relevant podcasts because the average podcast is 15 to 20 minutes long and has only 25 to 30 words describing it.

Currently, speech recognition technology is used to determine the relevancy of audio files. Speech recognition and extracting podcast content is essential for indexing content and making it findable by users. One way to do this is to play audio snippets to determine the relevancy of the terms within a podcast.


When optimizing your podcast, ensure your content is easily found by promoting only one feed. Optimize the audio file, and then optimize a landing page for each episode in addition to your category page. Make your subscription information visible on landing pages. Create valid feeds and validate them with a feed validator tool such as FeedValidator.org or the W3C Feed Validator.

Your podcast should have a unique, keyword-rich title tag explaining the subject matter. The landing page should contain a link back to your Web site. The publication date is important: this tag specifies the last time the feed was updated. Include image tags if applicable.
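
As a rough sketch (the episode details and URLs are invented), a single feed entry carrying the elements just discussed could be emitted from PHP like this:

<?php
// Hypothetical episode data; the <title>, <pubDate> and <enclosure>
// elements correspond to the tags discussed above.
echo <<<XML
<item>
  <title>Episode 12: Choosing Niche Keywords</title>
  <link>http://www.example.com/podcast/episode-12/</link>
  <enclosure url="http://www.example.com/audio/episode-12.mp3"
             length="10240000" type="audio/mpeg" />
  <pubDate>Mon, 20 Aug 2007 09:00:00 GMT</pubDate>
</item>
XML;
?>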

Since iTunes does not redistribute, we recommend building a separate feed for iTunes. You can promote with three separate feeds: a media feed, an RSS 2.0 feed and an iTunes feed. Include a transcript or a summary of the podcast on the landing page, depending on the podcast length. If it is brief, only a summary reviewing the main points is necessary.

Optimizing Google Base Data Feeds

Google Base is a database where you can upload all kinds of online and offline content for sale. Your items will include labels and attributes to help describe the content you are uploading, making it searchable for users. Attributes are the words that describe the characteristics of your items. You can enter multiple values separated by commas for any given attribute. Labels are keywords that can be used to classify or describe your item, such as products, services, and even a house for sale.


The items you submit to Google Base will go in the Base directory, and some items, depending on relevance, might also go into the Google SERPs, Froogle or Google Maps. So the quality of your data is important if you want it to be found far and wide.

Use Google Base custom attributes to optimize your feeds. Google Base allows you to specify your own custom attributes, which means you can include additional information about your items. Unlimited custom attributes can be included in your tab-delimited bulk upload file. Detailed descriptions can make your items more relevant, getting them into the Google index and other vertical databases and providing more opportunity for them to be found.
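
As a sketch only (the column names and values here are assumptions for illustration, not Google's documented schema), one row of a tab-delimited bulk upload with a custom attribute might be written like this:

<?php
// Assumed column names; 'mileage' stands in for a custom attribute
// alongside standard ones such as title, description and price.
$columns = array('title', 'description', 'price', 'mileage');
$item    = array('2004 Blue Roadster', 'One careful owner.', '8999', '42000');

$fh = fopen('bulk_upload.txt', 'w');
fwrite($fh, implode("\t", $columns) . "\n");   // header row names each attribute
fwrite($fh, implode("\t", $item) . "\n");      // one item per line
fclose($fh);
?>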

Since many of those uploading their data feeds to Google Base don't know about the custom attributes feature, you can gain a significant advantage: your feeds will be more successful than those of your competitors.

Another way to gain competitive advantage is to completely automate your Google Base data feeds. By automating your feeds, you ensure that the information uploaded to Google Base is up-to-date and accurate.

Automate your Google Base data feed by connecting it directly to your database with a process that pulls the most recent data once a day, submitting a new bulk upload to Google Base on a regular basis. Outsourcing this task takes about one day for setup, and then it becomes automated. One resource for such e-marketing services is Hudson's Horizons.
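
A minimal sketch of that daily pull, assuming a hypothetical products table and the PHP mysql extension current at the time of writing:

<?php
// Schedule with cron, e.g.:  0 4 * * *  php /path/to/refresh_feed.php
mysql_connect('localhost', 'db_user', 'db_pass');   // hypothetical credentials
mysql_select_db('shop');

$result = mysql_query('SELECT title, description, price FROM products');

$fh = fopen('bulk_upload.txt', 'w');
fwrite($fh, "title\tdescription\tprice\n");         // header row
while ($row = mysql_fetch_row($result)) {
    fwrite($fh, implode("\t", $row) . "\n");        // one product per line
}
fclose($fh);
?>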

Though the fundamentals remain the same, search engine optimization is an ever-evolving industry, adapting as the search landscape continues to change. It is now important to create and optimize many different types of content to dominate the SERPs. Optimizing your podcasts and Google Base data feeds will go a long way toward expanding your search visibility.

About The Author
Susan Esparza is the Senior Editor at Bruce Clay. She joined Bruce Clay in November 2004 and has written extensively for clients and internal publications. She also knows where the knives and forks go in a buffet line. The latter makes her invaluable to the Bruce Clay organization.



Improved Search Engine Rank: Google Page Rank Misconceptions

Google Page Rank Misconceptions
By Peter Nisbet (c) 2007


Improved search engine rank is attainable through good search engine optimization, part of which is the maximizing of your Google Page Rank through intelligent linking with other web pages. In this first part of 2 on the subject of Google Page Rank, we will look at the argument for attaining high listings through a linking strategy.

Google Page Rank is a buzz term at the moment, since many believe it to be more important to your search engine listing than search engine optimization. If we ignore for the moment the fact that Page Rank is, in itself, a form of SEO, then there are arguments for and against that belief.

Before we investigate these arguments, let's understand some fundamentals of search engine listings. First, most search engines list web pages, not domains (websites). What that means is that every web page in a domain has to be relevant to a specific search term if it is to be listed.

Secondly, a search engine customer is the person who is using that engine to seek information. It is not an advertiser or the owner of a website. It is the user seeking information. The form of words that is used by that customer is called a 'search term'. This becomes a 'keyword' when applied to a webmaster trying to anticipate the form of words that a user will employ to search for their information.

A search engine works by analyzing the semantic content of a web page and determining the relative importance of the vocabulary used, taking into account the title tags, the heading tags and the first text it detects. It will also check out text related contextually to what it considers to be the main 'keywords' and then rank that page according to how relevant it calculates it to be for the main theme of the page.

It will then examine the number of other web pages that are linked to it, and regard that as a measure of how important, or relevant to the 'keyword', that the page is. The value of the links is regarded as peer approval of the content. All of these factors determine how high that page is listed for search terms that are similar contextually to the content of the page.

Without doubt, there are web pages that are listed high in the search engine indices that contain very little in the way of useful content on the keywords for which they are listed, and have virtually no contextual relevance to any search term. However, a careful investigation of these sites will reveal two things.

The first is that many such web pages are frequently listed highly only for relatively obscure search terms. If a search engine customer uses a common search term to find the information they are seeking, they will very rarely be led to a site that has little content other than links, though it is possible. The second is that they contain large numbers of links out to other web pages, and it can be assumed that they have at least an equal number of web pages linking back.

It is possible to find such web pages for many keywords. An example is on the first page on Google for the keyword 'Data VOIP Solutions'. There is a website there that is comprised only of links. The site itself has little content, but every link leads to either another website that provides useful content, or another internal page full of more links and no content. That is how links can be used to lift a web page high in the SE listings.

Such sites frequently contain only the bare minimum of conventional search engine optimization, but the competition is so low that they gain high listings. You will also find that they contain large numbers of internal pages, every one of which contains the same internal and external links.

It is true, therefore, that it is possible to get a high listing without much content but with a large number of links. However, is that a legitimate argument for those promoting links over content? Could you reasonably apply that strategy to your website? Could a genuine website really contain thousands of links to other internal pages and external pages on other websites, and still maintain its intended purpose?

In the second part of this article, titled 'Search Engine Rank: Google Page Rank Misconceptions', I will explode some myths about Page Rank, and explain how many people are wasting their time with reciprocal links, and perhaps even losing through them. It may be that a linking strategy is not so much an option as a choice between the types of website that you want: one that provides genuine information, or one that makes money regardless of content.

Improved search engine rank might be synonymous with Google Page Rank, but perhaps only if you want to sacrifice the integrity of your website.

Part 2

Improved search engine rank is difficult enough to obtain without you having to trawl through all that has been written about Google Page Rank in order to find the truth. There are many misconceptions about Page Rank, and Part 2 of this article dispels the most common of them, the first being that Yahoo and MSN have their own version.

In fact this is not so. Yahoo had a beta version of a 'Web Rank' visible for a while, ranking complete websites, but it is now offline. MSN has no equivalent as far as I can ascertain. The term 'PageRank' is a trademark of Google, which is why I refer to it as Page Rank and not PageRank. A small difference, but a significant one.

If you are one of those that believe that the more links you can get to your website the better, then you are wrong. When Google started the Page Rank frenzy by putting that little green bar on their toolbar, they didn't realize the consequences of what they were doing. People fought to get as many links to their website as possible, irrespective of the nature of the websites to which they were linking.

That is misconception Number 2. You do not link to websites, you link to web pages; or should I say, you get links back from web pages, not websites. It is, after all, the link back that counts, isn't it? The link away from your site doesn't count. Wrong! Misconception Number 3. The link to your web page counts no more than the link away from your web page. In fact, it could count less. You could lose out in the reciprocal linking stakes if your web page is worth more than the other person's.

Let's dispel that misconception right now. When you receive a link from a web page (not web site) you get a proportion of the Google Page Rank of that web page that depends on the total number of links leaving that page. When you provide a link to another web page, you give away a proportion of your Page Rank that depends on the number of other links leaving your web page.

The Page Rank of the website you get a link from is irrelevant, since that is generally the rank of the home page. You will likely find that all these great links you think you have from PR 7 or 8 websites are from a links page that has a PR of ZERO! So you get zilch for the deal. If you are providing them with a link from a page on your site of even PR 1, then you lose! Most people fail to understand that.

No incoming link can have a negative effect on your PR. It can have a zero effect, but not a negative one. However, if you have an incoming link with zero effect, and an outgoing reciprocal link with a positive effect on the target page, then you will effectively lose PR through the deal. Every web page starts with a PR of 1, and so has that single PR to share amongst the other pages to which it is linked. The more incoming links it has, the higher the PR it has to share out.

If your page has a PR of 4 and has three links leaving it, each gets twice the number of PR votes as it would if 6 links left it. Your page with a PR of 4 has to get a similar number of PR votes incoming as it gives away to retain its PR. In simple terms, if your PR 4 page is getting links from a PR 8 page with 20 links leaving it, you lose out big time! It's simple math.

No page ever gives away all of its PR. There is a factor in Google's calculation that reduces this to below 100% of the total PR of any page. However, that is roughly how it works. You don't get a proportion of the whole website's ranking; you only get part of the ranking of the page on which your link is placed. Since most 'Links Pages' tend to be full of other outgoing links, you won't get much, and will likely get zero.
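
For reference, the formula originally published by Google's founders puts numbers on these proportions. With a damping factor $d$ (usually quoted as 0.85), pages $T_1 \ldots T_n$ linking to page $A$, and $C(T)$ the number of links leaving page $T$:

\[ PR(A) = (1 - d) + d\left(\frac{PR(T_1)}{C(T_1)} + \cdots + \frac{PR(T_n)}{C(T_n)}\right) \]

The $(1 - d)$ term is the factor mentioned above that stops any page passing on 100% of its rank. Read through this article's give-and-take lens, the PR 8 example works out as: the PR 8 page with 20 outgoing links passes roughly $0.85 \times 8/20 \approx 0.34$ to each page it links to, while your PR 4 page with 3 outgoing links passes roughly $0.85 \times 4/3 \approx 1.13$ to each of its targets, so the reciprocal deal returns far less than it sends out.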

That is why automated reciprocal linking software is often a waste of time. If you want to make the best of linking arrangements, then agree with the other webmaster that you will provide each other with a link from equally ranked pages. That way both of you will gain, and neither loses. Some software allows you to make these arrangements.

Another misconception is that only links from external web pages count. In fact, links between your own web pages can be arranged to provide one page with most of the page rank available. Every page has a start PR of 1, so the more pages you have on your site, the more PR you have to play with and distribute to the pages of your choice.

Search engine rank can be improved by intelligent use of links, both external and internal, but Google Page Rank does not have the profound effect on your search engine listing that many have led you to believe. Good onsite SEO usually wins, so keep that in mind when designing your website.


About The Author
Peter normally has his new websites listed on Google, Yahoo and MSN within two days, and consistently gets high search engine listings. His website, Improved Search Engine Rank, offers to show you exactly how he does it, including how Page Rank and SEO can be used together to achieve the highest listings for your keyword.
