Lessons Learned from Using Google’s Tagging and Extraction Data Highlighter Tool

I recently came across a patent that lists two Google search engineers, Joshua Ain and Justin Boyan, as two of its three inventors. Last summer, at Google I/O in San Francisco, the two gave a talk about tools that help webmasters more easily add structured data markup to the Web. The patent appears to cover Google’s Data Highlighter, which was one of those tools.

It inspired me to try adding structured data markup to my website, a task that was likely to fail for a few reasons.

I hadn’t read the patent yet last night, and I hadn’t done anything to make the patterns on my site more consistent. In other words, I learned the hard way, much as most non-developers and non-programmers would.
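To make that concrete, here is a rough sketch (my own illustration, not anything taken from the patent or the tool) of the kind of schema.org structured data the Data Highlighter is meant to help site owners provide without hand-editing their page templates. The field values are hypothetical, and JSON-LD is just one common way such markup can be expressed:

```python
import json

# A minimal, hypothetical example of schema.org structured data for a blog post.
# All values here are placeholders for illustration only.
article = {
    "@context": "https://schema.org",
    "@type": "BlogPosting",
    "headline": "Lessons Learned from Using Google's Data Highlighter",
    "author": {"@type": "Person", "name": "Example Author"},
    "datePublished": "2014-08-20",
}

# Serialized as JSON-LD, this would normally be embedded in the page inside a
# <script type="application/ld+json"> element.
print(json.dumps(article, indent=2))
```

The point of the Data Highlighter is that it lets you tag this information visually instead of writing markup like the above into every page by hand, which is why consistent page patterns matter so much.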

The video below is an introduction to a number of Google tools, including the Google Data Highlighter.

Is Google Going to Marry their Knowledge Base with their Search Engine?

Google has been answering queries with its search engine for over 15 years, and it has been showing us that it can answer questions with facts from its Browseable Fact Repository and/or the Google Knowledge Graph.

Might Google at some point bring the two together?

To a degree, Google has been merging the two, showing a set of search results (from the search engine) and a knowledge panel (from the Knowledge Graph) on the same results page. But you could argue that those remain separate and distinct elements on the search results page.

What Ranking Signal is Better, HTTPS or FOAF markup, when Searcher and Searched Author are Connected?

Recently, Google announced that they would rank pages higher in search results when those pages use the secure HTTPS protocol. The Google Webmaster Central blog told us so through Google Webmaster Trends Analysts Zineb Ait Bahajji and Gary Illyes, in HTTPS as a ranking signal. The use of HTTPS doesn’t necessarily make a page more relevant or more important for a search, but it could help lead to a more secure web.

Google was just granted a patent under which some sites may be deemed authoritative for a query when someone the sites’ authors are socially connected to performs that search. This isn’t for all queries, but rather just for some queries that Google might determine are “trigger queries,” or queries that are presently popular.

And it’s not for all searchers, but only searchers that are connected to each other.
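As a thought experiment, here is a small sketch of one way such an adjustment could look in practice. The trigger queries, social-connection data, and the boost multiplier below are hypothetical values of my own; the patent doesn’t spell out a specific formula:

```python
# Hypothetical data: the patent describes trigger queries and social connections,
# but these particular values and the 1.5 multiplier are my own illustration.
TRIGGER_QUERIES = {"best running shoes"}          # queries deemed presently popular
SOCIAL_CONNECTIONS = {"alice": {"bob", "carol"}}  # searcher -> connected authors
AUTHORITY_BOOST = 1.5                             # illustrative multiplier only

def rerank(searcher, query, results):
    """results is a list of (url, author, base_score) tuples."""
    if query not in TRIGGER_QUERIES:
        # Non-trigger queries are ranked by their base scores alone.
        return sorted(results, key=lambda r: r[2], reverse=True)
    connections = SOCIAL_CONNECTIONS.get(searcher, set())

    def score(result):
        url, author, base_score = result
        # Results from socially connected authors are treated as more authoritative.
        return base_score * (AUTHORITY_BOOST if author in connections else 1.0)

    return sorted(results, key=score, reverse=True)

results = [("example.com/a", "bob", 0.70), ("example.com/b", "dave", 0.80)]
# For the trigger query, bob's page moves ahead of dave's despite a lower base score.
print(rerank("alice", "best running shoes", results))
```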

Has Google Decided that you are Authoritative for a Query?

Both Google and Bing have been experimenting with social connections and relevance in search. Both have shown profile photographs of people you may be connected to at places such as Google+ (for Google) and Facebook (for Bing) next to search results that include them. Both may have changed rankings for those pages as well.

Google was showing authorship photos in search results for some authors who had set up authorship markup on their Google profiles and their web pages. Google also showed profile pictures in search results for some pages that didn’t actually contain any authorship markup, as long as those pages or domains were linked to from the author’s Google profile page under “Contributor to.”

Those author profile photos would sometimes appear next to articles in search results for content written by specific authors at those linked-to sources.
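Here is a simplified sketch of how that attribution might work in principle. The profile data, names, and logic below are my own guesses for illustration, not Google’s actual implementation:

```python
from urllib.parse import urlparse

# Hypothetical profile data: author -> domains listed under "Contributor to"
# on the author's Google+ profile. Names and domains are made up.
CONTRIBUTOR_TO = {
    "Bill Example": {"example-seo-blog.com", "another-site.org"},
}

def attributed_author(page_url, has_authorship_markup=False, markup_author=None):
    """Return the author a search engine might credit for page_url."""
    if has_authorship_markup:
        # Explicit authorship markup on the page wins.
        return markup_author
    # Otherwise, fall back to profiles that list the page's domain as "Contributor to."
    domain = urlparse(page_url).netloc
    for author, domains in CONTRIBUTOR_TO.items():
        if domain in domains:
            return author
    return None

# A page with no authorship markup still gets attributed via the profile link.
print(attributed_author("https://example-seo-blog.com/posts/authorship"))
```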

My links from my Google profile.

Google’s Browseable Fact Repository – an Early Knowledge Graph

In Google’s earlier days, when you asked a question, you could sometimes get a response providing an answer to a question such as:

“When was George W. Bush’s birth-date?”

We knew that Google could answer some questions like that, even if doing so might have been challenging, but we didn’t have much of a clue about the existence of something like Google’s Knowledge Graph until 2011. The answers we would see would sometimes be regular snippets in which a word such as “birth-date” might be bolded.

The set of 17 “related patents” that I first saw mentioned in a patent I wrote about this past Tuesday, which was granted on August 19th, appears to have been created by a team under Andrew Hogue. That team was tasked with creating “an annotation framework” to index more objects on the Web along with the facts associated with them, something Hogue discusses in more depth in his presentation The Structured Search Engine, which is highly recommended.
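A toy example may help show what a repository of objects and their facts makes possible. The structure and data below are purely illustrative, not taken from Google’s systems; they just show how the birth-date question above becomes a simple lookup once facts are stored against entities:

```python
# A toy fact repository: entities mapped to attribute-value facts.
# The representation is my own illustration, not Google's internal format.
FACT_REPOSITORY = {
    "George W. Bush": {
        "birth date": "July 6, 1946",
        "occupation": "politician",
    },
}

def answer(entity, attribute):
    """Look up a single fact for an entity, if the repository has one."""
    return FACT_REPOSITORY.get(entity, {}).get(attribute)

print(answer("George W. Bush", "birth date"))  # July 6, 1946
```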

He also oversaw Google’s acquisition of MetaWeb and the introduction of 25 former MetaWeb staff members into Google.

Was Google Maps a Proof of Concept for Google’s Knowledge Base Efforts?

Not everything we read in a paper or a patent from a search engine is something that happens in real life, but sometimes it is.

I like coming across a patent now and then that is dated but does a good job of describing something that actually happened as set out in it.

The patent I’m writing about tonight was originally filed in 2006 and granted in 2010, and it describes processes that I’ve seen firsthand and have used firsthand to help people increase the number of visits to their offices or phone calls from prospective clients.

A Surveyor measuring land.

Google Maps as a Proof of Concept of Knowledge Extraction

Looking at Peer Document Titles and Anchor Text when Collecting Facts about an Entity

During a civil or criminal case, the party bringing the case needs to present evidence to a judge or a jury. No individual piece of evidence has to prove guilt or liability by itself, but the combination of that evidence has to meet a certain standard. For a criminal case, the standard is beyond a reasonable doubt. For a civil case, it’s a standard of more probable than not. So criminal cases tend to require a higher level of confidence.

When Google collects information on the Web about an entity for its Knowledge Vault, it wants that information to be as trustworthy as possible.

If you’ve read anything about Google’s introduction of the Knowledge Vault, one of the points that stands out is the high level of confidence in the information it lists. There is more confidence in the facts associated with entities than there might have been in the Knowledge Graph.
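One way to picture combining several pieces of evidence, such as a peer document title and anchor text pointing at a page, is sketched below. The noisy-OR style combination and the 0.9 threshold are illustrative choices of my own, not values from Google:

```python
def combined_confidence(evidence_scores):
    """Confidence that a fact holds if each piece of evidence is treated as independent."""
    prob_all_wrong = 1.0
    for score in evidence_scores:
        prob_all_wrong *= (1.0 - score)
    return 1.0 - prob_all_wrong

# Each number is the confidence contributed by one source of evidence,
# e.g., a peer document title match, anchor text, and one more source (illustrative values).
evidence = [0.6, 0.7, 0.5]
confidence = combined_confidence(evidence)
print(round(confidence, 3))   # 0.94
print(confidence >= 0.9)      # True: meets an illustrative acceptance threshold
```

As in the courtroom analogy, no single source has to be conclusive; what matters is whether the combined evidence clears the bar.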

Extracting Facts for Entities from Sources such as Wikipedia Titles and Infoboxes

There are a number of patents from Google, both granted patents and pending patent applications, that describe ways Google might learn about entities, and about the facts associated with them, by extracting that information from the Web itself instead of relying upon people submitting information to knowledge bases such as Freebase.

We learned from Google’s recent announcement that they would be replacing the Google Knowledge Base with their Knowledge Vault, which supposedly brings with it a whole new set of extraction approaches that come with high levels of confidence about how accurate the extracted facts might be.

It’s hard to tell exactly which approaches Google might be relying upon, and which ones Google might have introduced through something like a patent that is no longer being used. But it doesn’t hurt to learn some of the history and some of the approaches that might have been used in the past.

I’m blogging today about a patent that describes an approach many of us have assumed Google has been using for years to identify objects or entities, the attributes of those entities, and the values that fit those attributes.
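As a rough illustration of what extracting attribute-value pairs from a Wikipedia infobox might look like, here is a small sketch. The wikitext sample and the parsing below are simplified and entirely my own; they are not Google’s code or the patent’s exact method:

```python
import re

# A simplified, made-up infobox in wikitext form. Real infoboxes are messier.
INFOBOX_WIKITEXT = """
{{Infobox person
| name        = George W. Bush
| birth_date  = July 6, 1946
| occupation  = Politician
}}
"""

def extract_facts(wikitext):
    """Return {attribute: value} pairs from simple '| key = value' infobox lines."""
    facts = {}
    for line in wikitext.splitlines():
        match = re.match(r"\|\s*(\w+)\s*=\s*(.+)", line.strip())
        if match:
            facts[match.group(1)] = match.group(2).strip()
    return facts

print(extract_facts(INFOBOX_WIKITEXT))
# {'name': 'George W. Bush', 'birth_date': 'July 6, 1946', 'occupation': 'Politician'}
```

The appeal of sources like infoboxes is that the attribute names and values are already structured, so the extraction step needs far less guesswork than mining free text.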

Getting Information about Search and SEO Directly from the Search Engines