Author Ranking in social media is more than just a popularity contest. It can draw on signals such as how frequently an author surfaces content that subsequently becomes popular, the author’s topical authority on different subjects, and measures of popularity and influence.
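None of the sources spell out a formula, but the idea of blending such signals can be sketched. The signal names, weights, and log-damping below are entirely my own assumptions for illustration, not anything taken from a Google patent:

```python
import math
from dataclasses import dataclass

@dataclass
class AuthorSignals:
    early_finds: int   # items surfaced before they became popular
    total_posts: int
    topic_posts: int   # posts on the topic being scored
    followers: int
    reshares: int      # times this author's content was reshared

def authority_score(s: AuthorSignals) -> float:
    """Blend hypothetical signals into one per-topic authority score."""
    if s.total_posts == 0:
        return 0.0
    early_rate = s.early_finds / s.total_posts     # trend-spotting ability
    topical_focus = s.topic_posts / s.total_posts  # topical authority
    # Log-damped popularity, so raw follower counts alone can't dominate.
    popularity = (math.log1p(s.followers) + math.log1p(s.reshares)) / 25.0
    return 0.5 * early_rate + 0.3 * topical_focus + 0.2 * popularity
```

Under weights like these, an author who reliably finds content early and stays on topic would outscore a high-follower account that adds nothing, which is the distinction the rest of this section is about.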
Author Authority to Distinguish Signal From Noise?
Social media contains a lot of signal, and a lot of what might be considered noise. Within social streams of real-time communication, such as tweets, status updates, and blog posts, is information that can be invaluable on many different topics.
How does a search engine pick out which authors are actual authorities on different topics, and which are merely sharing, resending, or adding to authoritative content? How does it tell which authors are piggybacking off such content, and which authors really aren’t authorities on any given topic at all?
Some authors aren’t even real people, but instead exist as spam and/or aggregator accounts, adding little or no value to other members of a social network.
Manipulative repetitive anchor text, blog comments filled with spam, Google bombs, and obscene content could be the targets of a system described in a patent granted to Google today. The system provides arbiters (human and possibly automated) with ways to disassociate some content found on the Web, such as web pages, from other content, such as links to that content.
In an Official Google Blog post, Another step to reward high-quality sites, Google’s Head of Webspam Matt Cutts wrote about an update to Google’s search results targeted at webspam that they’ve now started calling the Penguin update. The day after, I wrote about some patents and papers that describe the kinds of efforts Google has made in the past to try to curtail web spam, in my post Google Praises SEO, Condemns Webspam, and Rolls Out an Algorithm Change.
The patent doesn’t describe in detail an algorithmic approach to identifying practices that might have been used to manipulate the rankings of pages in search results. Instead, it tells us about a content management system that people engaged in identifying content impacted by such practices might use to disassociate certain content from web pages and other types of online content.
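To make the disassociation idea concrete, here is a minimal sketch of what such a record-keeping system might look like. The function names, data layout, and the idea of filtering links at ranking time are my own hypothetical illustration of the concept, not anything described in the patent itself:

```python
# An arbiter flags a (link, page) pair as disassociated; a later
# ranking pass then ignores any flagged links for that page.
disassociated: set[tuple[str, str]] = set()

def disassociate(link_url: str, page_url: str) -> None:
    """Record an arbiter's decision to break the association."""
    disassociated.add((link_url, page_url))

def links_for_ranking(page_url: str, inbound_links: list[str]) -> list[str]:
    """Return only the links still associated with the page."""
    return [link for link in inbound_links
            if (link, page_url) not in disassociated]
```

The point of such a system is that the page itself isn’t penalized or removed; only the manipulative associations pointing at it stop counting.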
Google published 8 patent applications at the USPTO today that describe key elements of Google Plus and a number of alternatives that may or may not become part of Google’s social network. These include 2 applications on how social connections can be sorted into different social circles, 4 filings about how content can be shared in the system, and 2 more pending patents on differences in what might be shown to the author of content created on the social network and what might be visible to people viewing that content who aren’t the authors.
The patent filings are pretty detailed, and if you’ve spent some time using Google Plus, you’ll recognize a lot of the features being described within the patents, and see some that you might wish were included and others you may hope are never added.
There are some changes coming to paid search at Google that sound exciting on the surface, but may leave many guessing how exactly those changes might manifest themselves. Over at the Inside Google Adwords blog, we were greeted on April 17th with a blog post titled New matching behavior for phrase and exact match keywords, which tells us that Google will be returning a few more results for paid advertisements that use phrase and exact match keywords. The post tells us to expect to see this start in mid-May.
While I don’t offer paid search as a service, I do often use the Google Keywords Suggestion Tool, and it left me wondering if the search volumes reported by that tool would change in response to the broader match in Google Adwords. Will it continue to show me only “exact” match volumes for keywords that I enter into the tool, or will it start reporting matches for keywords that are broader? Coincidentally, Google was granted a couple of patents this week involving search advertisements, including one on ways that the search engine might modify or expand the range of terms and phrases that advertisements may be shown for.
The first one that caught my eye was the following, which lists Ramanathan V. Guha as one of the inventors behind the patent. He’s known for a few things, including early work building the first version of RSS, as well as being a major force behind Google Custom Search Engines. He also developed Google’s version of trust rank, an annotation system based on “trusted sources” that could make search results more relevant for certain terms and phrases.
There are many sites that curate content and links on the Web, including many blogs and a number of social sites that do it through submissions by their members, who can also vote upon those submissions. The inventors of PostRank came up with an algorithmic approach to rank articles and blog posts and other content on the Web, and present it to people based upon those rankings. I’ve found a patent application at the USPTO that provides some insights and details on how their approach worked.
Google acquired PostRank last June, as was announced on the PostRank blog on June 3, 2011. Given Google’s increasing move towards looking at more social signals for the potential ranking of content shared by others, it’s worth wondering how this technology might be used by Google, and what the PostRank team might be bringing to the effort. PostRank Co-founder Ilya Grigorik, who now appears to be a web performance analyst with Google, noted in the post announcing the acquisition:
We know that making sense of social engagement data is important for online businesses, which is why we have worked hard to monitor where and when content generates meaningful interactions across the web. Indeed, conversations online are an important signal for advertisers, publishers, developers and consumers—but today’s tools only skim the surface of what we think is possible.
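An approach like PostRank’s can be sketched as engagement-weighted scoring, where interaction types that take more effort count for more. The specific interaction types and weights below are my own assumptions for illustration, not values from the patent application:

```python
# Hypothetical per-interaction weights: higher-effort engagement
# (commenting, linking) counts for more than passive views.
ENGAGEMENT_WEIGHTS = {
    "view": 0.01,
    "bookmark": 1.0,
    "comment": 2.0,
    "link": 3.0,   # someone wrote a post linking to the item
}

def post_rank(interactions: dict[str, int]) -> float:
    """Score an article by its weighted engagement counts."""
    return sum(ENGAGEMENT_WEIGHTS.get(kind, 0.0) * count
               for kind, count in interactions.items())

# Rank two hypothetical posts by their engagement profiles.
ranked = sorted(
    [("post-a", {"view": 500, "comment": 2}),
     ("post-b", {"view": 100, "comment": 10, "link": 3})],
    key=lambda item: post_rank(item[1]),
    reverse=True,
)
```

With weights like these, a post with fewer views but more comments and inbound links outranks one that was merely seen a lot, which fits the “meaningful interactions” framing in the quote above.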
I was looking at the peaks and valleys of traffic in Google Analytics, and thinking of the Google Panda and Penguin updates, and couldn’t stop myself:
Wondering how long it will be before Google runs out of black and white animals to name updates after?
It’s no surprise that Google wants to not only map and provide location-based services in the world outdoors, but also for the insides of shopping malls, airports, museums, transit stations, and other large indoor spaces. A couple of recent tech posts brought to light an effort by Google to use a new chip from Broadcom to possibly start supporting indoor positioning and directions. From ExtremeTech, we learned more about this technology in Think GPS is cool? IPS will blow your mind.
The Broadcom chip supports IPS through WiFi, Bluetooth, and even NFC. More importantly, though, the chip also ties in with other sensors, such as a phone’s gyroscope, magnetometer, accelerometer, and altimeter. Acting like a glorified pedometer, this Broadcom chip could almost track your movements without wireless network triangulation. It simply has to take note of your entry point (via GPS), and then count your steps (accelerometer), direction (gyroscope), and altitude (altimeter).
In Betabeat’s Get Ready for IPS: Like GPS, Except the Signal Is Coming FROM INSIDE THE BUILDING, we learned about the Google connection to IPS, or Indoor Positioning Systems. It appears that Google has already implemented this technology. I was pretty excited to read about this kind of technology, and even more surprised to come across a new patent assignment listed at the USPTO earlier today. Google was assigned 85 pending and granted patents from Terahop Networks. The assignment was executed on 3/23/2012, and recorded at the USPTO on 2/23/2012.
Yesterday, Google’s Distinguished Engineer Matt Cutts published a post on the Google Webmaster Central Blog titled Another step to reward high-quality sites that started out by praising SEOs who help improve the quality of web sites they work upon. The post also noted:
In the next few days, we’re launching an important algorithm change targeted at webspam. The change will decrease rankings for sites that we believe are violating Google’s existing quality guidelines.
We’ve always targeted webspam in our rankings, and this algorithm represents another improvement in our efforts to reduce webspam and promote high quality content.
This isn’t something new, but it sounds like Google is turning up the heat on violations of their guidelines, and we’ve seen patents and papers in the past that describe some of the approaches they might take to accomplish this change.