I'm currently working on a Single Page Site (SPS).
This SPS consists of multiple sections stacked vertically, each reached by scrolling. Aside from a dynamic news section, all of the page's content is generated server-side.
There's no need for multiple indexable pages; however, I am using History.js to give the various sections their own URLs (a simplified sketch of the wiring follows the list):
example.com/introduction
example.com/about
example.com/news/some-story
example.com/news/another-story
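Roughly, the wiring looks like this. This is a simplified sketch: `onSectionEnter` and `scrollToSection` are illustrative placeholders, not my actual function names.

    // Simplified sketch of the History.js wiring; handler names are placeholders.
    var History = window.History; // History.js's wrapper, not window.history

    // When a section scrolls into view, push its URL without a reload.
    function onSectionEnter(path, title) {
      History.pushState(null, title, path); // e.g. '/about'
    }

    // Hypothetical helper: scroll to the section matching a URL.
    function scrollToSection(url) {
      var id = url.split('/').pop();
      var el = document.getElementById(id);
      if (el) { el.scrollIntoView(); }
    }

    // React to state changes (including back/forward navigation).
    History.Adapter.bind(window, 'statechange', function () {
      var state = History.getState();
      scrollToSection(state.url);
    });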
Everything is functioning smoothly. However, because I need to support IE9, which lacks the HTML5 History API, I perform a client-side redirect to the root whenever the URL isn't already there and rely on History.js's hash fallback (a sketch of the redirect follows the list):
example.com/#/introduction
example.com/#/about
example.com/#/news/some-story
example.com/#/news/another-story
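The redirect itself is little more than a feature test at load. Roughly (the exact test is simplified here):

    // Simplified sketch of the fallback redirect; the feature test is the
    // standard pushState check, which IE9 fails.
    if (!(window.history && window.history.pushState)) {
      var path = window.location.pathname;
      if (path !== '/') {
        // Bounce to the root so History.js can take over with its
        // hash fallback, e.g. '/about' becomes '/#/about'.
        window.location.replace('/#' + path);
      }
    }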
All of this is operating effectively. However, I have some concerns about how Google will crawl and index these pages. Since the content served at each proper URL is identical, except that news URLs also render the initial story server-side, I'm worried that Google may penalize the site for duplicate content.
Is there a way to avoid being penalized by Google without serving unique content for every page (it isn't a necessity)? Would using rel='canonical' suffice?
<link rel="canonical" href="http://www.example.com">
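Since everything is server-rendered anyway, I could emit the canonical per URL: sections that share the same markup pointing at the root, news stories pointing at themselves. A rough, purely illustrative sketch (Express-style; my actual back end differs, and 'index' is a hypothetical template that prints the <link> tag):

    // Illustrative only: shared sections canonicalize to the root,
    // while news stories keep a self-referencing canonical.
    var express = require('express');
    var app = express();

    app.get('*', function (req, res) {
      var isStory = /^\/news\//.test(req.path);
      var canonical = isStory
        ? 'http://www.example.com' + req.path
        : 'http://www.example.com/';
      res.render('index', { canonical: canonical });
    });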
To clarify, my concern is with the proper URLs manipulated via the HTML5 History API through History.js, not the hash-based fallback URLs.