A lot of things can happen when you have duplicate content on your website. Common issues include a poor ranking position, a drop in rankings (a lot of people see this as a Google penalty, but that's not always the case) or not getting indexed at all. Here is a list of some causes of duplicate content issues, with possible solutions:
- Default homepage URL
- Page pagination
- Similar Meta tags
- Article syndication
Default Homepage URL
This is one of the more popular duplicate content culprits. When you don't set up a default homepage URL, you might run into this issue. Let's have a look at the ways a homepage URL could possibly appear (using example.com here as a stand-in):
- http://example.com
- http://www.example.com
- http://example.com/index.html
- http://www.example.com/index.html
On a good day, these four URLs should all take you to the same place: your homepage. However, if things aren't set up correctly, all four could be live on the Internet, which means each would be competing against the others. People could link to any of them at random, leading to confused readers and customers. Or worse still, link juice that could all have gone to a single homepage ends up split across four 'different' home pages.
- set up Canonical tags as part of your Meta tags
- set up permanent redirects to your preferred home page
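Both solutions above can be sketched briefly. Assuming an Apache server and http://www.example.com/ as the preferred homepage (both are assumptions for illustration; adjust to your own setup), the permanent redirects might look like this in an .htaccess file:

```apache
RewriteEngine On

# Send the non-www host to the www version (301 = permanent redirect)
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]

# Collapse /index.html into the root URL
RewriteRule ^index\.html$ http://www.example.com/ [R=301,L]
```

And the canonical tag, placed in the `<head>` of the homepage, tells crawlers which version to treat as the original:

```html
<link rel="canonical" href="http://www.example.com/" />
```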
Page Pagination
Pagination is used to continue content on a separate web page. It's normally used when you don't want a page to be too long; after all, most people don't like scrolling that far. The simple solution is to cut the page in two and continue on the next page. However, you really don't want page 2 to be found on the Internet and/or competing with page 1, since they are essentially the same content. This can lead to a duplicate content issue.
- set up restrictions in your robots.txt file (this is where it comes in handy, but use with caution!)
- re-categorise tags and update meta tag content
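As a rough sketch of the robots.txt approach, assuming your paginated pages live at URLs like /blog/page/2/ (the URL pattern is an assumption; check how your CMS structures pagination before copying this):

```
User-agent: *
# Block crawlers from paginated archive pages
# (assumed pattern: /blog/page/2/, /blog/page/3/, ...)
Disallow: /blog/page/
```

Remember the caution above: pages blocked this way can't pass along any link juice they receive, so make sure you really don't want them crawled at all.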
Similar Meta Tags
Some Content Management Systems (CMS) are notorious for putting your blog/company name first, followed by the page title, and for not allowing personalised meta descriptions. Yes, meta descriptions are not as relevant as they used to be (page titles, though, are still very important), but you want to avoid ambiguity and confusion for the crawlers. When they come to your pages and find near-identical page titles and meta descriptions, they might assume it's duplicate content and not index it.
- have unique meta tags and title tags
- if you must use your blog/company name, let it come at the end of the title tag
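Putting both tips together, a page's `<head>` might look something like this (the page title, description and "Example Blog" brand name are all hypothetical placeholders):

```html
<head>
  <!-- Unique, descriptive title first; brand name relegated to the end -->
  <title>How to Spot Duplicate Content Issues | Example Blog</title>
  <!-- A meta description written for this page only, not reused site-wide -->
  <meta name="description"
        content="A walkthrough of common duplicate content causes and possible fixes." />
</head>
```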
Article Syndication
This is one of the reasons why article syndication is not that great for link building any more: it suffers from the duplicate content issue. Crawlers tend to index only the first copy of the content they come across and ignore the duplicates. Some people minimally tweak the content before syndicating it. That can work, but there is only so much tweaking you can do, if you ask me. There is no hard and fast rule to it; sometimes it works and sometimes it doesn't. Trial and error, I guess.
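One related option worth knowing about (not covered above, and worth verifying for your own situation): some publishers ask syndication partners to add a cross-domain canonical tag pointing back at the original article, so crawlers know which copy to credit. The URL below is a hypothetical placeholder:

```html
<!-- Placed in the <head> of the syndicated copy, pointing to the original -->
<link rel="canonical" href="http://www.example.com/original-article/" />
```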
These are some of the duplicate content issues I've come across, along with possible solutions. If you've come across others, please feel free to share; I'd love to hear about them, including any solutions you've found.
Photo by veraecho