The ultimate goal of any spam detection system is to penalize “spammy” content.
~ Reverse engineering circumvention of spam detection algorithms (linked to below)
Four years ago, I wrote a post, The Google Rank-Modifying Spammers Patent, about a Google patent which told us that Google might be keeping an eye out for someone attempting to manipulate search results by spamming pages, and that Google may delay responding to someone’s manipulative actions to make them think that whatever actions they were taking didn’t have an impact upon search results. That patent focused upon organic search results, and Google’s Head of Web Spam, Matt Cutts, responded to my post with a video in which he insisted that just because Google produced a patent on something doesn’t mean that they are going to use it. His response appears in the video titled, “What’s the latest SEO misconception that you would like to put to rest?”
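The patent itself publishes no code, but the delayed-response idea it describes is easy to sketch. The snippet below is a minimal illustration under made-up assumptions: every function name, threshold, and time window is hypothetical, drawn from nothing in the patent or from Google, and the toy model simply treats a higher score as a better ranking.

```python
import random

# A minimal, hypothetical sketch of the delayed-response idea described in
# the patent: when a page's computed score suddenly improves (possibly from
# manipulation), the score actually used for rankings transitions to the
# new value slowly, and may even dip first, so a spammer can't tell whether
# their actions worked. Every name and parameter here is illustrative,
# not Google's.

def served_score(old_score, new_score, days_since_change,
                 transition_days=30, decoy_days=7):
    """Return the score to use a given number of days after a change."""
    if days_since_change >= transition_days:
        # Transition complete: serve the real new score.
        return new_score
    if days_since_change < decoy_days and new_score > old_score:
        # Early "decoy" period: respond in the opposite direction,
        # hinting that the manipulation hurt rather than helped.
        return old_score - random.uniform(0, 0.1 * old_score)
    # Otherwise drift gradually toward the new score.
    progress = days_since_change / transition_days
    return old_score + progress * (new_score - old_score)

if __name__ == "__main__":
    # A page whose score jumps from 50 to 80 after a burst of new links.
    for day in (0, 3, 10, 20, 35):
        print(f"day {day}: {served_score(50.0, 80.0, day):.1f}")
```

The point of that dip-then-drift shape is the one the patent makes: if the observable response to a manipulative action is delayed or misleading, a spammer running trial-and-error experiments gets noisy feedback and has a much harder time reverse engineering the ranking system.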
I’m not sure how effective the process in that patent was, but there is now a similar patent from Google that focuses upon rankings of local search results. The patent describes this spam detection problem in this way: