Category Archives: Search Engine Optimization (SEO)

Search Engine Optimization tips, strategies, and information from SEO by the Sea, to help make web sites easier to find.

How Google May Substitute Query Terms with Co-Occurrence

But I’m a substitute for another guy
I look pretty tall but my heels are high
The simple things you see are all complicated
I look pretty young, but I’m just backdated, yeah

– Pete Townshend

When you search at Google, how easy is it to find what you're looking for? Do you search again with different but related words if your first attempt doesn't uncover pages you find useful?

If I search for "car repair" and follow it up with a search for "auto repair," I would expect to see many of the same pages, though perhaps not in the same order. I would also expect to see local search results for both, and I do, although those aren't in exactly the same order either. Some words or phrases do make good substitutes for others, though, as can be seen in the image below:

A comparison of co-occurring terms for 'french open' and 'frenchopen'.
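To make the idea concrete, here is a minimal sketch, in Python, of how a search engine could judge whether one term is a good substitute for another by comparing the words each term co-occurs with. The co-occurrence counts, the cosine comparison, and the 0.9 threshold are all my own illustrative assumptions, not details taken from Google's approach.

```python
from collections import Counter
from math import sqrt

# Hypothetical co-occurrence counts: how often other words appear alongside
# each query term in queries or documents. A real system would mine these
# from query logs or a large corpus.
co_occurrence = {
    "car repair":  Counter({"mechanic": 40, "brakes": 25, "estimate": 15, "shop": 30}),
    "auto repair": Counter({"mechanic": 38, "brakes": 22, "estimate": 18, "shop": 27}),
    "card repair": Counter({"credit": 50, "plastic": 10, "lamination": 5}),
}

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Compare two co-occurrence profiles; 1.0 means identical contexts."""
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def good_substitute(term: str, candidate: str, threshold: float = 0.9) -> bool:
    """Treat a candidate as a substitute only if its contexts closely match."""
    return cosine_similarity(co_occurrence[term], co_occurrence[candidate]) >= threshold

print(good_substitute("car repair", "auto repair"))  # True: the contexts overlap heavily
print(good_substitute("car repair", "card repair"))  # False: almost no shared context
```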

Continue reading

Google Patents Anchor Text Snippets

Somewhere out there is a universe that looks exactly like this one, and appears to run exactly like this one. Except something's a little different. A little off. It's as if search engines took a left turn instead of a right turn, back in the early 2000s. Instead of only using meta descriptions and possibly body text from web pages as the descriptive text, or snippets, for those pages in search results, they learned a new trick. What if the content surrounding the anchor text in a link to a page were collected and evaluated with a quality score, and that associated, usually descriptive, text were used to generate snippets instead?

My thought on the possibility is that anchor text often doesn't do the best job of describing a page, and links to a page often come from third parties who may have little interest in writing text that would make a good snippet for the page. But Google filed a patent for such an approach back in 2003, and it was granted this week, so they pursued what it describes for over a decade. The patent does mention that headings on pages might also be used as potential snippets, and provides the following example: "Computers > Algorithms > Compression". But that's a small part of the patent, and the approach isn't limited to anchor text that a site provides itself, as in breadcrumb trail navigation for a page.

There's also a part of this approach that recognizes that many pages have more than one link pointing to them, so a choice would need to be made about the best "snippet" to show.
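As a rough illustration of that choice, here is a small Python sketch that scores several candidate snippets, each taken from the text around a different link to the page, and keeps the best one. The scoring heuristic, the CandidateSnippet structure, and the example data are my own assumptions; the patent's actual quality score is not reproduced here.

```python
from dataclasses import dataclass

@dataclass
class CandidateSnippet:
    """Text surrounding the anchor text of one link pointing at the page."""
    text: str
    source_page_score: float  # hypothetical quality score of the linking page

def snippet_quality(candidate: CandidateSnippet, query_terms: list[str]) -> float:
    """Toy score: favor text from better pages that actually mentions the query."""
    text = candidate.text.lower()
    term_hits = sum(1 for term in query_terms if term in text)
    length_penalty = 0.5 if len(candidate.text) > 300 else 1.0
    return candidate.source_page_score * (1 + term_hits) * length_penalty

def best_snippet(candidates: list[CandidateSnippet], query_terms: list[str]) -> str:
    """Pick the candidate with the highest score to show as the page's snippet."""
    return max(candidates, key=lambda c: snippet_quality(c, query_terms)).text

candidates = [
    CandidateSnippet("A guide to lossless compression algorithms for developers.", 0.8),
    CandidateSnippet("Click here for more.", 0.9),
]
print(best_snippet(candidates, ["compression", "algorithms"]))
```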

Continue reading

How Google May Classify Pages Using Hierarchical Categories in URLs

Google was granted an updated version of a patent this week that looks at how the search engine might use directories in URL structures to help it better understand the categories on a Web site, and to categorize new pages and directories that might be added to a site. The patent tells us that this might enable the search engine to add supplemental information to pages, such as advertisements that fall within the categories displayed upon the site.

Some other patents I've written about in the past show that the search engines might be doing more with categories than just deciding which ads to show on a page.

Imagine that you have a site about car parts, and you decide to organize the pages of the site first by car make, so the main categories on your site are different brands, and your second level of directories is organized by car model. You might then have sub-sub-categories organized by different systems within cars, such as "electrical," "transmission," "cooling," "suspension," and so on. URLs for a couple of your pages might look like the hypothetical examples sketched below.
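Here is a small Python sketch of the idea: the example domain and URLs are made up to match the make > model > system structure just described, and simply reading directory segments as a category hierarchy is my own simplification of what the patent covers.

```python
from urllib.parse import urlparse

# Hypothetical URLs following the make > model > system structure described above.
urls = [
    "https://example-car-parts.com/ford/mustang/electrical/alternator",
    "https://example-car-parts.com/honda/civic/cooling/radiator",
]

def categories_from_url(url: str) -> list[str]:
    """Read the directory segments of a URL as a category hierarchy."""
    path = urlparse(url).path.strip("/")
    *directories, _page = path.split("/")
    return directories

for url in urls:
    print(" > ".join(categories_from_url(url)))
# ford > mustang > electrical
# honda > civic > cooling
```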

Continue reading

Google Patents on Author Signature Values and Authority Scores

Last week, Google was granted a number of patents exploring different aspects of how documents on the Web might be ranked, in part based upon topics identified for those documents and the expertise and/or authority of the authors involved in creating them. The patents also describe how Google might use different methods to determine the authority of multiple authors who worked together on a document.
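As a purely illustrative sketch of the multiple-author piece, the Python below blends hypothetical per-author authority scores into a single score for a document. The scores, the 70/30 blend of the strongest author with the average, and the function names are all my own assumptions, not details spelled out in the patents.

```python
# Hypothetical authority scores (0-1) for authors on the document's topic.
author_authority = {"alice": 0.9, "bob": 0.4, "carol": 0.6}

def document_author_score(authors: list[str], scores: dict[str, float]) -> float:
    """Combine the authority of every listed author into one document-level score."""
    known = [scores.get(a, 0.0) for a in authors]
    if not known:
        return 0.0
    # Weight the strongest contributor most, but let co-authors still matter.
    return 0.7 * max(known) + 0.3 * (sum(known) / len(known))

print(document_author_score(["alice", "bob"], author_authority))  # 0.825
```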

The approach described in these patents sounds very similar to some statements that Google's Matt Cutts made at the start of May in a video answering the question, "What should we expect in the next few months in terms of SEO for Google?"

Continue reading

George Bush is a Miserable Failure Again, in Google’s Knowledge Base

I’ve written about Google Bombs in the past, and how a bio page featuring President George Bush ranked highly on a search for “Miserable Failure” as a result of a Google Bomb, in a post from 2011 titled How a Search Engine Might Fight Googlebombing.

Earlier today, Nemek Nowaczyk published a post, Google Bombing the Knowledge Graph: Who's a Liar?, in which he noticed that on a search for "liar" (in Polish), Poland's Prime Minister Donald Tusk appears in the knowledge base results for the search. Nemek sent me an email with a link to his post, and within seconds I was typing one of the better-known English-language Google bomb phrases into a Google search, with a guess as to what I would see there.

A knowledge base result on a search for 'miserable failure'.

Ok, so the top knowledge base result on a search for “miserable failure” wasn’t George Bush. But a smiling George Bush was close enough to be a “see results about” disambiguation knowledge panel result.

Continue reading

Google Files Patent for Understanding Multiple URLs for the Same Page

The great thing about HTML is that it's so flexible and offers so many ways to do things. The worst thing about HTML is that it's so flexible and offers so many ways to do things. I've looked at a lot of websites, and I still see people doing things in new ways.

An issue common to many websites is that a page on a site can be found at more than one URL. A site owner might set things up that way for a number of reasons and in a number of ways, or it might be a side effect of the content management system being used.

A patent application published by Google explores how the search engine might recognize that a URL found through a web crawl and a differently structured URL found through a feed, such as a product feed, both refer to the same page.
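A minimal Python sketch of the general problem, assuming a few simple normalization rules of my own choosing (lowercasing, trailing-slash removal, and dropping tracking parameters), which are not necessarily what the patent filing proposes:

```python
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

# Parameters that usually don't change which page is served; treating them as
# ignorable is an assumption, not something taken from the patent filing.
IGNORED_PARAMS = {"utm_source", "utm_medium", "utm_campaign", "ref", "sessionid"}

def normalize(url: str) -> str:
    """Reduce a URL to a canonical-ish key so equivalent URLs compare equal."""
    parts = urlparse(url)
    query = sorted(
        (k, v) for k, v in parse_qsl(parts.query) if k.lower() not in IGNORED_PARAMS
    )
    path = parts.path.rstrip("/") or "/"
    return urlunparse((parts.scheme.lower(), parts.netloc.lower(), path,
                       "", urlencode(query), ""))

# A crawled URL and a product-feed URL that likely serve the same page:
crawled = "https://Example.com/widgets/blue-widget/?utm_source=feed"
from_feed = "https://example.com/widgets/blue-widget"
print(normalize(crawled) == normalize(from_feed))  # True
```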

The patent's approach seems like a lot of work to me, and the filing has me shaking my head at Google spending resources to figure out duplicated content on a site, even if doing so might help the search engine better understand URLs and the products and other information associated with them.

Continue reading

How Google May Rank Web Sites Based on Quality Ratings

Google was granted a patent this week that describes how web sites might be given quality ratings, based upon a model that looks at human ratings for a sample set of sites along with web site signals from those sites.
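A minimal Python sketch of that general idea, assuming made-up site signals, ratings, and a simple linear model; the patent's actual signals and modeling choices are not reproduced here:

```python
from sklearn.linear_model import LinearRegression

# Hypothetical signals per site: [spelling-error rate, avg. words per page, ad density]
signals = [
    [0.01, 900, 0.05],
    [0.08, 150, 0.40],
    [0.02, 600, 0.10],
    [0.10, 120, 0.55],
]
human_ratings = [4.5, 1.5, 4.0, 1.0]  # quality ratings from human raters for the sample set

# Fit a model on the human-rated sample, then rate a site no human has reviewed.
model = LinearRegression().fit(signals, human_ratings)
predicted = model.predict([[0.03, 500, 0.12]])[0]

QUALITY_THRESHOLD = 3.0  # only return sites rated above some threshold
print(predicted, predicted >= QUALITY_THRESHOLD)
```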

The patent tells us that the advantage of such an approach would be to:

  • Provide greater user satisfaction with search engines
  • Return sites with a quality rating above a certain threshold
  • Rank sites appearing in search results based upon quality
  • Identify quality sites without having a human review them first

This patent was originally filed in 2008, and its use of quality signals sounds similar to what Google has shared with us regarding the Panda Update. It's more of a search quality "improvement" than a web spam penalty.

Within its claims and description, the patent uses blogs as one type of site the approach can be applied to. One of the inventors, Christopher C. Pennock, was a Senior Software Engineer on Google Blog Search, according to an early 2009 SMX session with him that discusses ranking signals in Blog Search.

Continue reading

Avoiding Misinformation While Learning from Search Related Patents

On May 1st, Google’s Head of Webspam Matt Cutts published a video in his series of Google Webmaster Help videos, answering the question, “What’s the latest SEO misconception that you would like to put to rest?”

For some reason, Matt decided to focus upon patents, with a video about people possibly placing too much faith in what is uncovered in patents related to search engines. To a degree, I agree with his response, but a number of people reached out to me because they saw the video as aimed specifically at me, since I write about search-related patents so often. I felt that I had no choice but to respond. Here's the video from Matt:

Jennifer Slegg gave me a chance to respond to Matt’s video, in her post, Matt Cutts Tells SEOs to Stop Worrying About Google Search Patents, and I appreciate her letting me say a few words there, but I wondered if it was enough. I reached out to Matt on Twitter, and he provided some of his thoughts about the video there:

Continue reading