Being a full-time online marketer means you have to keep a close watch on how Google is ranking pages on the web, and one very serious concern is the whole issue of duplicate content. More importantly, how does having duplicate content on your own site and on other people’s sites affect your keyword rankings in Google and the other search engines?
Now, recently it seems that Google is much more open about just how it ranks content. I say “seems” because with Google there are years and years of mistrust when it comes to how they treat content and webmasters. Google’s whole “do as I say” attitude leaves a bitter taste in most webmasters’ mouths. So much so, that many have had more than enough of Google’s attitude and ignore what Google and their pundits say altogether.
This is probably very emotionally fulfilling, but is it the right route or attitude to take? Probably not!
Mainly because, regardless of whether you love or hate Google, there’s no denying they are King of online search, and you must play by their rules or leave a lot of serious online revenue on the table. For my major keyword content/pages, even a loss of just a few places in the rankings can mean I lose hundreds of dollars in daily commissions, so anything affecting my rankings obviously gets my immediate attention.
So the whole tricky issue of duplicate content has caused me some concern, and I have made an ongoing mental note to myself to find out everything I can about it. I am mainly worried about my content being ranked lower because the search engines think it is duplicate content and penalize it.
My situation is compounded by the fact that I am heavily into article marketing – the same articles are featured on hundreds, sometimes thousands, of sites across the web. Naturally, I am worried these articles will dilute or lower my rankings rather than accomplish their intended purpose of getting higher rankings.
I try to vary the anchor text/keyword link in the resource boxes of these articles. I don’t use the same keyword phrase over and over again, as I am nearly positive Google has a “keyword use” quota – repeat the same keyword phrase too often and your highly linked content will be lowered around 50 or 60 places, basically taking it out of the search results. Been there, done that!
I even like submitting unique articles to certain popular sites so only that site has the article, thus eliminating the whole duplicate content issue. This also makes for a great SEO strategy, especially for beginning online marketers: your own site will take some time to get to a PR6 or PR7, but you can place your content and links on high-authority PR7 or PR8 sites immediately. This will bring in quality traffic and help your own site get established.
Another way I combat this issue is by using a 301 redirect so that traffic and PageRank flow to the URL I want ranked. You can also use your Google Webmaster Tools account to indicate which version of your site you want ranked or featured: with or without the “www”.
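As a rough sketch, here is how that kind of 301 redirect is often set up on an Apache server in an .htaccess file. The domain example.com is just a placeholder, and this assumes mod_rewrite is available on your host:

```apache
# Hypothetical example: permanently (301) redirect the non-www version
# of a site to the www version, so links and PageRank consolidate on
# one preferred URL.
RewriteEngine On
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]
```

The R=301 flag is what tells search engines the move is permanent, so they transfer the old URL’s ranking signals to the new one rather than treating the two as duplicates.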
The whole reason for doing any of this has to do with PageRank juice – you want to pass along this ranking juice to the appropriate page or content. This can raise your rankings, especially in Google.
Thankfully, there is the relatively new “canonical tag” you can use to tell the search engines this is the page/content you want featured or ranked. Just add this link tag to the content you want ranked or featured, as in the example given below: <link rel="canonical" href="place your preferred link here" />
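To show where the tag actually lives, here is a minimal page skeleton; the URL is purely a placeholder you would swap for your preferred version of the page:

```html
<!-- Hypothetical page: the canonical tag goes inside the <head> -->
<html>
  <head>
    <title>My Article</title>
    <link rel="canonical" href="http://www.example.com/preferred-page/" />
  </head>
  <body>
    <!-- page content -->
  </body>
</html>
```

Any duplicate or near-duplicate versions of the page should carry this same tag pointing at the one URL you want the search engines to rank.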
Anyway, this whole duplicate content issue has many faces and sides, so I like going directly to Google for my information. Experience has shown me that Google doesn’t always give you the full monty, but for the most part, you can follow what they say. Over the last year or so, Google seems to have made a major policy change and is telling webmasters a lot more about how it ranks its index.