What steps can a search engine take to give searchers more information about web page credibility? Is it better to show annotations that might provide a measure of credibility in the search results themselves, or in something like a browser toolbar?
Might signals that measure things like credibility also be used in ranking algorithms and reranking approaches like the recent Panda ranking update at Google?
A recent whitepaper from Microsoft explores the topic of web page credibility.
Limitations of Google’s Toolbar PageRank
Google used to show searchers who had a Google Toolbar installed on their browser a PageRank Button, which Google told us was their indication of the “importance of a web page” someone might be viewing. The toolbar button provided information about the PageRank of a page at one point in time somewhere in the past.
Google’s Technology overview page told us this about PageRank:
When Google was founded, one key innovation was PageRank, a technology that determined the “importance” of a webpage by looking at what other pages link to it, as well as other data. Today we use more than 200 signals, including PageRank, to order websites, and we update these algorithms every week. For example, we offer personalized search results based on your web history and location.
The PageRank measure shown in the Google Toolbar has some limitations as a measure of web page credibility and quality.
– It’s likely limited to just information about the quality and quantity of links to a page, for one thing, rather than many other signals that could indicate the credibility of a page.
– It’s only updated a few times a year, so the information it displays may be inaccurate: it may show no PageRank at all for newer pages or pages the search engine hasn’t crawled and indexed yet, or it may show a PageRank that is months old.
– The PageRank measure shows a score from 1-10 without explaining that the scale is logarithmic, so the difference between a page with a PageRank of 4 and one with a PageRank of 5 is much larger than the difference between a page with a PageRank of 3 and one with a PageRank of 4 (see the rough sketch below).
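To make that logarithmic point concrete, here is a minimal sketch in Python. Google has never published the base of the toolbar scale, so the factor of ten below is purely an assumption for illustration, not Google's actual formula:

```python
# Hypothetical illustration only: Google has never published the base of the
# toolbar PageRank scale. We assume each toolbar step represents roughly a
# 10x jump in the underlying score to show why the 1-10 numbers can mislead.
ASSUMED_BASE = 10

def underlying_score(toolbar_pr: int) -> float:
    """Convert a toolbar PageRank value to a made-up 'raw' score on a log scale."""
    return float(ASSUMED_BASE ** toolbar_pr)

for pr in range(3, 6):
    print(f"Toolbar PR {pr}: assumed raw score {underlying_score(pr):,.0f}")

# Toolbar PR 3: assumed raw score 1,000
# Toolbar PR 4: assumed raw score 10,000
# Toolbar PR 5: assumed raw score 100,000
# The gap from 4 to 5 (90,000) dwarfs the gap from 3 to 4 (9,000),
# even though both look like a single "point" in the toolbar.
```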
Interestingly, the First PageRank Patent, Improved Text Searching in Hypertext Systems provides examples in its appendix of Backrub (Google’s original name) showing a PageRank toolbar displayed in search results, with a range of numbers that it tells us are still logarithmic, but which extend much higher than 10. For example, the image below shows “PageRank citation importance numbers” for some pages within search results on a search for [University]:
Those search results also indicated the number of backlinks for each result, along with a clickable link that you could use to see the backlinks for a page.
I’m not sure what led Google to stop showing PageRank information, or the number of backlinks, next to pages listed in search results these days, but I’m happy that they don’t. PageRank is only one ranking signal amongst more than 200, and it isn’t the best indicator of how “relevant” a page might be for a search result. Also, since Google doesn’t update the PageRank information in the toolbar that frequently, chances are that they wouldn’t update it frequently if it appeared in search results either.
Microsoft Paper on Web Page Credibility
This morning, I ran across a Microsoft Research paper, Augmenting Web Pages and Search Results to Support Credibility Assessment (pdf), to be presented at CHI 2011 (the ACM CHI Conference on Human Factors in Computing Systems) on May 7–12, 2011, in Vancouver, BC, Canada, and authored by Julia Schwarz of Carnegie Mellon University and Meredith Ringel Morris of Microsoft Research.
The paper explores information about web pages that might provide “valuable signals” regarding web page credibility, and describes tests of visualizations of that information, shown to searchers to see how it might help them assess how credible the pages in their search results are.
Search engineers’ assumptions about indications of things like web page credibility and quality might be used, now and in the future, to rerank the search results that search engines show us. As we’re told in the paper:
In addition to displaying credibility-correlated features (particularly expert behavior) to end-users, search engine companies might consider integrating such data into their ranking algorithms, particularly given user mental models that already assume that ranking is a proxy for credibility [17].
In addition to showing “visualizations” of the credibility of search results “adjacent” to web pages (perhaps in something like Google’s Toolbar button), the authors of the paper tell us that they thought it was important to include more compact versions of those visualizations in search results themselves because of “recent findings that many users make determinations of credibility based on search results pages.”
The paper’s worth spending some serious time with if you want to learn more about how notions of things like web page credibility and quality might impact search results in the future. Rather than a full-blown analysis of the whole paper, I thought it would be worth sharing some of the aspects of pages that the researchers considered in creating visualizations of credibility to display in search results.
Here are some of the web page credibility signals that they looked at (a rough scoring sketch follows the list):
On-Page Features
- Spelling Errors
- Number of Advertisements on a page
- Domain Type (.com, .gov, etc.)
Off-Page Features
- Awards and Certifications, such as the Webby Awards, Alexa Rank, and Health on the Net (HON) awards.
- Toolbar PageRank, and Rankings for Queries used in generating their data set
- Sharing information, such as data from Bit.ly and other URL shortening sites, likes, shares, comments, and clicks from Facebook, clicks on shortened URLs from Twitter, and bookmarks on Delicious.
User Aggregated Non-Public Data from Toolbar Usage
- General Popularity – unique visitors from users
- Geographic Reach – number of visitors from different geographic regions
- Dwell Time – amount of time users kept a URL open in their browser (as an estimate of how long they might have viewed a page)
- Revisitation patterns – how often people revisited a page, on average
- Expert Popularity – the behavior of people who have been shown to have expertise in a particular field, and user data about their visits to pages in those fields.
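To give a sense of how features like these might be pulled together, here is a minimal sketch of a single credibility score computed from a handful of them. The feature choices, weights, caps, and normalizations below are entirely made up for illustration; the paper evaluates visualizations of these features rather than publishing any scoring formula, and a real system would likely learn weights from labeled data.

```python
# Hypothetical sketch: combine a few of the credibility-correlated features
# discussed in the paper into one 0-1 score. All weights and normalizations
# here are invented for illustration, not taken from the paper.

from dataclasses import dataclass

@dataclass
class PageFeatures:
    spelling_errors_per_kword: float   # on-page: spelling errors per 1,000 words
    ad_count: int                      # on-page: number of advertisements
    is_gov_or_edu: bool                # on-page: domain type
    expert_visits: int                 # aggregated: visits from identified experts
    dwell_time_seconds: float          # aggregated: average time the page stayed open

def credibility_score(f: PageFeatures) -> float:
    """Return a rough 0-1 credibility estimate (illustrative weights only)."""
    score = 0.5                                    # neutral starting point
    score -= min(f.spelling_errors_per_kword * 0.02, 0.2)
    score -= min(f.ad_count * 0.01, 0.15)
    score += 0.15 if f.is_gov_or_edu else 0.0
    score += min(f.expert_visits / 1000.0, 0.2)    # cap the expert-popularity boost
    score += min(f.dwell_time_seconds / 600.0, 0.1)
    return max(0.0, min(1.0, score))

page = PageFeatures(spelling_errors_per_kword=1.5, ad_count=4,
                    is_gov_or_edu=False, expert_visits=120,
                    dwell_time_seconds=95)
print(f"Estimated credibility: {credibility_score(page):.2f}")
```

A score like this could then drive the kind of compact visualization the authors describe showing next to each result, though how they actually weight and display the features is their own design choice.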
Conclusion
I’ve often pointed people in the past to the Stanford Guidelines for Web Credibility to give them some ideas about things they can do on their websites to be perceived by visitors as credible, and while those guidelines are almost a decade old (2002), I think they still have a lot of value. The development of those guidelines was guided by B.J. Fogg, who founded the Stanford Persuasive Technology Lab. Interestingly, there are a few references to his work and theories in the Microsoft paper.
Chances are that there are other features, not listed in the Microsoft paper, that could be viewed and used in an automated manner to let a search engine provide some measure and visualization of web page credibility. But there are some interesting signals listed in this paper that aren’t that intuitive, such as the social sharing signals listed in the “Off-Page Features” section.
What other kinds of things might a search engine program look at to gauge web page credibility?
I personally think the features mentioned above pretty much cover what I would see as the best credibility factors. However, awards and certifications would need to be governed closely, as there are many out there that aren’t credible themselves. It would be unfair to rerank a site for credibility based on the fact that it is listed on lots of award websites, which in many cases may have been created as a backlink strategy, don’t you think?
Also, one thing I don’t think they should ever take into account when it comes to site credibility is domain age.
I think there is something to be said for references on the page that either cite sources or give additional reading resources for the article. If these are garbage/scraped links pulled from Google Blog Search based on a keyword, it’s probably a less authoritative, more auto-generated type of site. But if they are hand-picked links from highly trusted educational resources, then it might add something to the page’s overall credibility. I think the best approach will always be to look at it from a customer/reader viewpoint. Does this content (whatever it is: article, video, blog, what have you) add value in a way beyond the normal information? Is the source that is cited credible, and how much did the author do to substantiate the work?
I think another factor a search engine should look at would be the contact page.
How many websites have you seen with no proper contact page or even their address on the website? In the UK it is law – I wrote about this a short while ago here: Contact details and the law
The determining factors of credibility on a contact page come with address, telephone, postal code, company number, VAT etc. It’s hard to fake some of that without being caught.
I believe Google is also using Google Places as a way of determining credibility. When you sign up for that, they send you a postcard or call you at your office number to verify a PIN. This is a good idea, as it identifies the owner, the business, and the website.
That’s a tough one. So many of the measurement units can be severely tainted. You would almost have to have someone like Google giving you the public values. A third party assigning values (like alexa) really means very little to the average user.
I think it is Google’s duty to start identifying patterns from the spammer sites, not just devalue sites with unpopular content. If they started categorizing sites by type: ecommerce, blog, resource/articles, AdSense-only, micro site, landing/squeeze page, etc., they could begin to identify problematic patterns within the verticals. “Auto-blogs,” for example, typically won’t allow users to get good variations in individual title tags. Looking at spelling errors is a great idea too. I really don’t want to see bounce rates and return rates used, though, because not everybody wants to give access to that data. I totally understand them using CTRs on their own index, though; that makes sense to me.
Topsy has the right idea with influence, the measure of who will hear you if you say something. I think the ideal scenario would be that vs. link/citation equity pointing to the property.
Good post. I agree that PageRank is mostly a link-based metric, which is flawed as a way of determining how ‘credible’ a web page is. Apart from the obvious issues with spam, a crucial flaw of PageRank as a measure of ‘credibility’ is that it’s time-based to a large extent: it takes time for a site to garner links (and therefore PageRank), and links are also less readily made now, as many users prefer to share, ‘Like’, etc.
Equally, if some of these other metrics (like traffic quality and geographic reach) were disproportionately used, a very good site could appear but fly under the radar due to an immature representation based on these metrics.
I guess where I’m tangentially leading to is: how can Google understand the ‘credibility’ of something that’s new vs. something that’s established – on a level playing field?
I could draw a parallel with a brilliant new band, way better than the established rest, but flying under the radar until breaking out. In this case, it’s usually industry buzz, and mentions by a few established and influential people (or network hubs) that builds credibility and makes the difference.
In my opinion, it’s therefore ‘expert popularity’ that’s going to become more important, as this is a less time-dependent measure of credibility and is focused on ‘now popularity’ rather than ‘established popularity’.
On this front, Google seems to be developing more of an understanding of who is influential within networks, why, for what topics, and taking into account these kind of social mentions and ‘buzz’ when ranking pages. I reckon these kind of mentions will be given more importance in the future.
That is exactly what needs to happen Brent. Categories would really help to distinguish sites and separate what and how people want/need to react. Just devaluing sites gets confusing and sometimes frustrating because of the pages that are bringing them down. It can really be a guessing game out there with all the changes that are constantly being made and the people who are bringing down many sites with the lack of management and control. It’s definitely a full-time job to stay up to date on everything these days.
I never knew that Google looked into off-page activity (other than Alexa ranking). Would it help if I improved my Facebook page so that it had more visitors?
Thanks for the share. I heard that Google did enforce stricter measures on credibility. I’d sure apply these pointers to my own sites.
Hi Mark,
I suspect that we could find some other signals that might indicate credibility. The Stanford Guidelines that I linked to mention some, like including an address that people might be able to use to contact you. For example, if there’s an address on the pages of a site, and Google Maps says that the organization behind the site is actually at that address, that might be a good credibility signal.
Awards do seem a little questionable. The authors of the paper did seem pretty particular about the awards that they might look at, so the kinds of award-giving websites that you mention (using awards primarily as a way to attract links) might not be ones that they would consider.
I agree with you on domain age. Businesses sometimes change domain names for very legitimate reasons, and the cold start that new sites sometimes face because they don’t have many links to them or history behind them can be a difficult impediment for site owners to overcome. Placing too much importance on the age of a domain can potentially harm many businesses.
Hi Bill,
Interesting thoughts. What you’re referring to sounds a little like the “hub” type pages described in the HITS algorithm, where some pages are great resources for finding information on a specific topic. It’s quite possible that a search engine might look at where external links point to, and how they are presented, when deciding how credible a site might be.
Hi Vincent,
It’s possible that my answer to Mark was influenced by your comments. I really like seeing contact and address information for ecommerce sites on the Web. I’m not quite sure that I care as much when it comes to sites like blogs.
The PIN verification for Google Maps seems to be a good indication of credibility too, though if an online business really doesn’t have physical locations for visitors to come to like a store front or office, or they don’t offer services in specific regions, there may not be a real need to verify in Google Maps. I like it as a credibility indication though.
Hi Brent,
If the authors of this paper were from Google, I’m not sure that we would see them listing sources from outside of Google, like Alexa rankings or another search engine’s Toolbar PageRank scores. Those kinds of measures were created for completely different purposes, and if I were a search engineer, I would want more control over the metrics used and the data behind them.
I’ve seen a number of whitepapers from the major search engines and academics on how they are attempting to identify patterns in different verticals and types of sites. The AIRWeb workshops, which include considerable input from Google, Yahoo, and Microsoft, have covered topics like those.
An “influence” measure sounds pretty interesting, too. There seems to be a hint in the paper at some kind of measure like that when they discuss “sharing” information.
Hi Roger,
I agree with the need for categorizing pages in different ways, and having different expectations of them. As I mentioned in the comments above, I would really want to see contact information for an ecommerce site, but I don’t think it’s as important for a blog. I think there are other things that we might point out that may be helpful for one type of site, but not others.
Hi Twosteps,
Thank you. Good points on the limitations of metrics like PageRank, traffic quality, and geographic reach. While the paper is Microsoft’s, the same issues are ones that Google needs to face. Google’s Big Panda update seems to focus on “quality,” but I suspect that credibility may be represented in some ways as well. We know that Google’s paid search quality score includes things like having a privacy policy, which seems to me to be more of a credibility signal than a quality one.
Signs of credibility, whether online or offline, are often helped by being able to show some history of behavior over time. A business that’s been around for 60 years intuitively seems more credible than one that’s been around for 60 days. Can sites overcome a presumption like that with things like the use of SSL, certifications, and BBB logos? Or by making sure that there’s a privacy policy page, a verifiable address, and contact information? Might articles about a site in mainstream media indicate how credible it might be (somewhat like Wikipedia’s notability policy for including articles about specific people, places, and things)?
Expert opinions seem like they would be helpful in presenting a “now popularity” as opposed to an “established popularity.” I like the distinction you’re making there.
Hopefully, search engineers making assumptions based upon measures of things like quality and credibility are considering the prejudice that some of those metrics might have against newer sites.
Hi Craig,
The paper is Microsoft’s rather than Google’s, so we can’t be sure if Google is doing something similar. One of the factors that the paper’s authors mentioned was that they might look at social sharing signals like tweets and Facebook likes.
I’d guess that chances are good that people at Google are exploring how signals like that might help them rank pages as well.
Hi Andrew,
I’ve been hearing people discussing “quality” and quality signals when it comes to Google, rather than “credibility.” But I’d guess that Google’s exploration of quality might include credibility.
There has been talk of looking at social authority. I think that would be very hard to gauge.
Hi Matthew,
There are a number of sites that are trying to come up with metrics to determine a “reputation” score for people engaged in social networks, and Google has been looking at those types of social signals for years.
Not too long ago I wrote about how they might give different reputation scores to people providing ratings for products and businesses in How Google may Manage Reputations for Reviewers and Raters. The ideas behind those reputation scores might translate well into scores for people active on other social networks as well.
Outstanding. The “The AIRWeb workshops, which include considerable input from Google, Yahoo, and Microsoft have covered topics like those” absolutely made my week! I’m going to check that link out right now.
BTW, Are you presenting at any of the conferences this year? I would love to sit in on that.
ps. Bruce Clay blog posted a recent Twitter study regarding influence and effectiveness of viral actions on Twitter: (link no longer available) — very interesting & figured it was right up your alley!
It’s that non-public toolbar data that I see being the most relevant information with respect to web page credibility…ultimately, that is.
What better way could there be to determine web page credibility than its ability to attract, retain, and bring back visitors, barring any suspicious bot activity, of course?
Mark
I think that Google has its dominance for more than its marketing and advertising power. It has the best algorithms, and the others are forever playing catch-up. Most people agree that Bing’s results are like Yahoo’s ten years ago. Yes, it gives you choice, but usually not a good one.
The only issue I (and most people) seem to have with Google is getting ranked the highest, and what SEO you need to do to get there. I don’t know all of the tricks, but I did hear that they use over 200 factors in their ‘special sauce’. I agree that PageRank has its limitations, awards are massively stacked (by dodgy voting systems, usually), and this is why Google keeps changing its algorithm, so we can’t game it.
Content is king (etc., etc.), but it is often difficult to spread its reach. They say you can’t submit the same article to different article directories (Google penalises duplicate content), and spinning articles takes time and often isn’t worth the result (in readability and effect). That’s why I think it’s a shame that many article sites are full of 500-word articles about nothing in particular, written in the third world by someone for $5 an hour. How do you guard against a site gaining credibility from low-thought articles that are spun and flung far and wide?
When you think about the vast amount of data that computers must sift through, I think Google and its array of PCs will remain the miracle of the decade for many years to come.
That’s a tough one. So many of the measurement units can be severely tainted. You would almost have to have someone like Google giving you the public values. A third party assigning values (like alexa) really means very little to the average user.
Hey Bill,
A lot of this stuff goes right over my head, being a relative newbie to SEO, but some of it does make sense. Many do seem to think that the vaguely defined “trust” of websites will play a predominant SEO role in the future. But then, tomorrow never knows, right? Cheers.
Google and other search engines should take into account all the things you mentioned, and I think the number of comments, re-tweets, and natural backlinks is something they’ll always take into account. I dislike it when others dominate Google with the same content in different formats: e.g. a blog post, EzineArticle, YouTube video, Slideshare powerpoint, and so on.
I guess Google should un-rank such practices/listings and make searches relevant to the key term and its topic. For example, a search for “how to build a web site” should offer more multimedia/video-related listings than text/document ones.
What do you guys think?
It’s interesting that Google is looking at off-page popularity. I can only assume that utilising social media and providing quality information that will attract repeat visitors will be an important part of maintaining a high SERP position.
I don’t like the BUZZ around social media. Yes, for now it is very popular. For now…
But there is a lot of room for manipulation. I think that data will not be very clean.
Hi Brent,
The AIRWeb workshops are pretty interesting, and I was disappointed that they didn’t hold one in 2010, but pleased that they are holding one in 2011.
I’m not planning to present at any of the large industry conferences this year, but I did have the chance to participate in a nonprofit tech conference in DC this month, and at some local meetups. I’d love to attend some barcamps as well, and maybe present something at one of those.
The Yahoo/Twitter paper looks interesting. I’m going to have to spend some time reading it this afternoon. Thanks for pointing to it.
Hi Mark,
I suspect that you’re right that user-behavior data might be the most helpful to the search engines. I am excited that they seem interested in experimenting with a wide range of other factors as well, though.
Hi Bruce,
I still remember something a Google engineer said at a “meet the crawlers” session of a conference that I went to years ago about Google’s ability to analyze a lot of different signals and make some semblance of sense of them. He said, “We have lots and lots of computers.”
I honestly stay away from article sites. Though there may be some value in submitting some articles to them, it worries me when I see people relying upon them too much.
I do believe that content is very important, but whenever I see someone write that, I can’t help but respond with “context is king.” The right content, presented the right way, at the right time is what inspires people to take action, or to learn, or persuades them of something. Google’s increased reliance on “quality” seems to bear this out. It’s not just important to have great content, but also to present it in a way that is readable, usable, and effective (and that can be indexed by search engines so that people can find it).
Hi Stan,
Using someone else’s data to try to measure something is difficult on a number of levels. I’ve been told by a few people who’ve worked at search engines that the difficulty they face isn’t in finding data to use to measure something, but in deciding which data is the most useful, and what the best way to use it is.
Some measures, like Alexa data, have limitations in the way the information is collected, since the people who use the Alexa toolbar are self-selected, and it’s impossible to get an idea of whom they represent as users of a system or of the Web. That means that some sites that may be extremely popular amongst a particular set of users may be ignored completely in rankings like Alexa’s. For example, the best and most credible site on the Web about subatomic particle physics may not get much traffic from Alexa users, but may be much more credible than more mainstream science-related sites online.
Hi John,
Many people do write about “trust” and “authority” but don’t really define what they mean by those terms. That tends to create a lot of confusion. How does someone measure quality or credibility or usability? There may be some things that a computer can look at to try to get a sense of those things, but the computer is looking at “indications” of quality or credibility or usability. Some of those might look to what’s on the site itself, and make certain assumptions.
For instance, a page that contains a lot of advertising is likely (or is it?) focused primarily on making money rather than sharing quality content. I’m not completely convinced that you can always make that assumption. That’s why a search engine might try to look at a wide range of features or factors in making such decisions.
Hi Codrut,
Those are some interesting thoughts. Sometimes things like PageRank or even social popularity are better indications of popularity than they are of quality.
Google’s Universal Search seemed to focus upon providing a wider range of types of sites and information, including news, blog posts, pictures, videos, book search, etc. I think in a number of ways, that was a good move.
Hi Sam,
This was Microsoft’s paper rather than Google’s, but chances are that Google is exploring a lot of the same territory.
Popularity isn’t always a good stand-in for quality, but if a lot of people are looking at a particular page or video, for instance, there may be some value to it.
Hi Dimitry,
I agree. I think that’s why you need to look at more information than just social-network-based signals, including what’s actually found on the pages themselves.
I never knew Backrub was Google’s original name; interesting. These are great points. Most everyone simply believes a higher PageRank and optimization determine whether a website should rank, but there are a large number of other factors that play into a website’s ranking. I would love to pick your brain, seeing how much you research patents, search trends, etc. Thanks for always providing your thoughts on this field.
Hi Bill,
Thanks for personally replying. I take note of “The right content, presented the right way, at the right time is what inspires people to take action, or to learn, or persuades them of something.” On a site this is all about knowing the market, knowing the triggers, being a master persuader or marketer. If everyone knew it, we would all be rich.
I am from the old school and find the media’s fascination with social media, and with search engine rankings based upon it, a little annoying: partly because it thwarts my plans of global domination, but mostly because most social media seems to be about inane tweets or Facebook posts about what I had for breakfast. Sure, if something goes viral it probably deserves to, but it is not usually based on breakthrough information that actually helps anybody.
I take your point about presenting information on pages in a readily digestible way: the Gen Y way of short ‘bytes’ of information, lots of graphics, and text-grabbing headings, magazine style. Though in standard CMS systems, this often ends up looking like a third grader’s scrapbook. Finding the right balance between a clean, academic kind of page and one that entices an action is an interesting topic, and one that internet gurus have many followers paying many dollars to hear about, affiliate marketing, etc.
I have just recently added a testimonial page to my site, and a ‘the benefits of using a professional dog walker’ sales kind of page, to entice people to take action, because at the end of the day that’s how we make money, right? It’s not what I may personally like as far as style goes, but I have to overcome my own old thoughts and look more toward grabbing everyone who enters my site who is a potential customer, and kill the bounce rate. And sometimes flashy buttons and big red text is what does it, right?
However, all of that seems to be disconnected from what Google may use to rank your site based on its quality?! How can Google interpret people’s views of a site, unless they take my Google Analytics information to penalise me (if I have a high bounce rate, for instance)? Maybe that is why people don’t sign into Google accounts when they surf, or use Piwik instead of Google Analytics, so that Google has to estimate how much people like your site, or just base it on your information content, and doesn’t take the number and placement of images or bounce rates into account when it judges your site for ranking?
Thank you, Joel.
I’m glad that Google is now Google instead of Backrub.
There are a lot of ever increasing factors that can influence where a page might rank in search results for different people at different places and different times. 🙂
Hi Bruce,
There is a lot of “noise” in social media. Not that there isn’t value in people being able to share their thoughts and ideas and have conversations with others. But there’s also sometimes a lot of value in those social signals, like people sharing information about a newsworthy event often minutes or even hours before the media does.
One of the hard parts about web design is recognizing that while your website can, and in part should represent you and your preferences, it also should reflect what your intended visitors want to see.
Google collects an incredible amount of information about people’s searching and browsing history, and I’m not sure that they really need Google Analytics data to make some of the decisions that they may make related to user behavior. Can you imagine how much information they collect daily from the Google Toolbar (without someone being signed in), for instance? Add to that the people who log into Gmail, and then go surfing around without logging out of their Google Account.
Google’s PageRank has generated a lot of confusion among webmasters when measuring the quality of a website compared to the quality of the backlinks it has gathered over time.
Even Google, after the Panda update, asked webmasters to ignore PageRank and concentrate more on on-page optimization.
Fine. Google is making it nearly impossible for infant online businesses to grow. When someone gets to your website and has a look at your PageRank, they will lose interest in your website and proceed to another one if your webpage has not gathered any PageRank.
That does not mean that the website does not have good content, but maybe the algorithm has not been updated, or maybe ……
So I am suggesting, in my humble opinion, that Google should bring out a way of telling people about the quality of a website and not rely entirely on PageRank.
Eurocasino – Yes, I agree. It is more or less impossible for a new online business to grow unless you have a large budget behind you. There should be a search engine for websites less than 5 years old; then even the newcomers would have a chance.
Hi Eurocasino,
PageRank is not a measure of quality or credibility, but rather of popularity based upon the links pointing to your pages. It’s only one of a couple of hundred signals that Google uses to rank your webpages.
The recent “Panda” update seems to be using a reranking approach where it takes an original ranking for a page (based upon relevance, PageRank, and those couple of hundred other signals), and then may boost or lower your ranking based upon some measure of “quality.” If you lost any positions or gained some because of Panda, it wasn’t necessarily because you lost any PageRank, but rather because Google is looking at things that actually appear on your pages. That’s why Google was telling people to look at the quality of their content on their pages after the Panda update.
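As a rough sketch of that boost-or-demote idea (and only a sketch: Google hasn't published how the Panda adjustment actually works, so the scores, multipliers, and threshold below are all invented for illustration), a reranking step might look something like this:

```python
# Hypothetical sketch of a reranking step: start from an original ranking
# score, then boost or demote each page by a separate "quality" factor.
# The formula and numbers are invented; they are not Google's.

def rerank(results, quality_scores, boost=1.2, demotion=0.7, threshold=0.5):
    """Re-order results by adjusting each original score with a quality factor.

    results: list of (url, original_score) pairs from the base ranking.
    quality_scores: dict mapping url -> 0-1 quality estimate.
    """
    adjusted = []
    for url, score in results:
        quality = quality_scores.get(url, threshold)  # unknown pages stay neutral
        factor = boost if quality >= threshold else demotion
        adjusted.append((url, score * factor))
    return sorted(adjusted, key=lambda pair: pair[1], reverse=True)

original = [("example.com/a", 0.92), ("example.com/b", 0.90), ("example.com/c", 0.85)]
quality = {"example.com/a": 0.3, "example.com/b": 0.8, "example.com/c": 0.7}
print(rerank(original, quality))
# Page "a" ranked highest originally, but a low quality score drops it below b and c,
# even though its underlying relevance and PageRank haven't changed.
```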
Regardless of what the Google Toolbar might show for your site, if you create something engaging and compelling on your pages, people will stick around.
The Google Toolbar has always been something that is only updated 3-4 times a year, even though actual PageRank may differ from day-to-day or even hour-to-hour.
Hi John,
I’m not sure that I agree. I think the cost of starting a business on the Web is getting less and less expensive, especially with things like WordPress to use as a content management system.
Hi Bill, new information for me on ‘Number of Ads’ and ‘Toolbar PageRank’. Thanks for the info.
Hi Andri,
You’re welcome.
Google’s location listing can be gamed. Get a PO Box address in the city you want to be listed for and give the Post Office address as the business location and the PO Box # as the mailing address. Done. That was enough to get a former employer a “presence” in several communities.
Google would do well to blacklist the addresses of all USPO locations.
I haven’t done it … ab initio, I don’t like the map listings at all (they push more relevant content down on the page … often below the fold) and I also don’t think I need to “game” Google like that. I’ve gotten good results with good content and intelligent SEO. (The ‘cake’ of intelligent SEO is content and site design … everything after that is icing.)
The best results I’ve ever seen came from internal links to relevant pages and thoughtful external links to relevant and authoritative pages. If you are going to do business in a city, you don’t have to create a video for that city: create a page praising that city and use it to link to a newscast or other video related to that city. I TRY for newscasts … the station URL is (apparently) always authoritative for that locale.
When Google sics the human review team on the site (bots can’t understand videos very well), they find relevant content. If I praised the city (and I ALWAYS did), I looked for a newscast that also showed the city in a good light.
If nothing else, this makes people feel good about where they are living and causes them to feel that we share their pride and congratulate them on having made a good decision in choosing to move there. In fact, since we like their city almost as much as they do, we must be nice people like them … so why not give us a call and at least get a quote? (appliance repair business serving 17 communities)
That was working well.
Then I left the company and the site was never completed for the rest of the service area, graphics that were ready to upload were never uploaded and new repair-related content was never added. When I left, the site was getting indexed several times a day (pages had gone as high as #7 in less than 10 minutes for our initial key phrase: “appliance repair cityname” ).
Now it’s hard to find the site without searching for it by name. Even so, until I turned the e-mail alerts off, G-Analytics was showing uniques up by a steady 10% most weeks. Ten percent, compounded weekly, isn’t all bad no matter what the first week, which was actually around 200 uniques, looked like.
All I had going into that site was a handshake agreement because I wasn’t certain if I could pull a commercial site off. So, 400 hours later, when I submitted a bill (as agreed) asking for a straight $15.00 per hour, I ended up with $300. You do the math.
Now, since there isn’t anything I’m particularly keen to sell at the moment, I’m trying to make a go of blogging about something other than “how to make a go at blogging”.
Hi Bill,
I believe that Google has stopped allowing people to list only a P.O. Box address.
They do allow you to show an area of service, and to mark your actual address as private. See this announcement from the Google Lat/Long blog:
Show customers where you’ll go with Google Places
I think the Google Places listings make sense if a search does actually have some kind of geographical intent associated with it. They do push organic results further down on a page, or onto the second page. I have had success as well with getting pages to rank well for geographic related queries in organic results, but I think some maps results do end up being more useful for searchers.
Nice suggestion on a page featuring a city and a newscast.
Good luck on your blogging. There is room on the Web for many more subjects than about how to run a blog.
Thanks for the great article; I found it a good read. I heard that Google enforced stricter measures on credibility, tbh; I could be wrong. I’ll have to apply the pointers you’ve mentioned to my site. What do you think regarding Facebook pages and credibility? Do they help?
Hi Daniel,
One of the reasons I was inspired to write this post was that it came shortly after the first Panda update, and Google had announced that they were looking at a wide range of signals, including credibility. I found this paper from Microsoft, which describes some of the things they might do to rerank pages based upon credibility, and thought it would be interesting to write about what they were doing.
I don’t think that looking at Facebook would be much of a help, since they are a lot different from many of the other sites on the Web that rely to a degree upon traffic from Google.