Some of the people who write patents for Google tend to stand out to me. One of those is Trystan Upstill. I noticed that he has published another one that looks really interesting and worth reading. When I started following his patents, I read his doctoral thesis, Document ranking using web evidence, which was really interesting, coming from the early days of his professional career, before he was listed as the inventor of a number of patents that I also found interesting. I've written about a number of patents he has participated in creating as well, because they often focus upon Site Quality, and I learn something from reading them and trying to understand them. Here are posts about his patents which I have written about previously:
Using automatically generated location data, along with software that can cluster together similar images, again goes beyond just looking at the words associated with pictures to learn what they are about.
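The patent doesn't publish any code, but here is a minimal sketch of the clustering idea, assuming a simple greedy approach that groups photos whose automatically generated coordinates fall within a chosen distance of one another. The image names, coordinates, and one-kilometer threshold are all illustrative assumptions, not details taken from the patent.

```python
import math
from dataclasses import dataclass

@dataclass
class Image:
    url: str
    lat: float  # latitude from EXIF or other automatically generated location data
    lon: float  # longitude

def haversine_km(a: Image, b: Image) -> float:
    """Great-circle distance between two images' capture locations, in km."""
    r = 6371.0  # Earth radius in km
    p1, p2 = math.radians(a.lat), math.radians(b.lat)
    dp = p2 - p1
    dl = math.radians(b.lon - a.lon)
    h = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(h))

def cluster_by_location(images, max_km=1.0):
    """Greedy clustering: an image joins the first cluster whose seed image
    is within max_km; otherwise it starts a new cluster of its own."""
    clusters = []
    for img in images:
        for cluster in clusters:
            if haversine_km(cluster[0], img) <= max_km:
                cluster.append(img)
                break
        else:
            clusters.append([img])
    return clusters

photos = [
    Image("a.jpg", 48.8584, 2.2945),  # near the Eiffel Tower
    Image("b.jpg", 48.8606, 2.3376),  # near the Louvre, a few km away
    Image("c.jpg", 48.8583, 2.2950),  # another shot near the Eiffel Tower
]
for i, cluster in enumerate(cluster_by_location(photos)):
    print(i, [img.url for img in cluster])  # a.jpg and c.jpg cluster together
```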
A new patent application from Google tells us how the search engine may use context to find query suggestions before a searcher has finished typing a full query. Think of Google as a Decision Engine, focused upon bringing searchers more information about interests they may have. After seeing this patent, I've been thinking about previous patents I've seen from Google that have similarities.
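To make the idea concrete, here is a minimal sketch of how context might re-rank suggestions for a partial query. The candidate queries, popularity scores, and context boost are illustrative assumptions; this is not the method the patent actually describes.

```python
# Baseline popularity scores for candidate queries (assumed values).
CANDIDATES = {
    "weather in seattle": 0.9,
    "weather radar": 0.8,
    "web design tips": 0.6,
    "webmaster tools": 0.5,
}

def suggest(prefix: str, context_terms: set, top_n: int = 3):
    """Return candidates matching the partial query, re-ranked by how
    well they overlap with the searcher's recent context."""
    scored = []
    for query, popularity in CANDIDATES.items():
        if not query.startswith(prefix.lower()):
            continue
        # Boost candidates that share words with the searcher's context,
        # so relevant suggestions can surface before the query is complete.
        overlap = len(set(query.split()) & context_terms)
        scored.append((popularity + 0.5 * overlap, query))
    return [q for _, q in sorted(scored, reverse=True)[:top_n]]

# A searcher who has been reading about Seattle types just "we":
print(suggest("we", context_terms={"seattle", "rain"}))
# -> ['weather in seattle', 'weather radar', 'web design tips']
```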
Sura gave up on her debugging for the moment. "The word for all this is 'mature programming environment.' Basically, when hardware performance has been pushed to its final limit, and programmers have had several centuries to code, you reach a point where there is far more significant code than can be rationalized. The best you can do is understand the overall layering, and know how to search for the oddball tool that may come in handy. Take the situation I have here." She waved at the dependency chart she had been working on. "We are low on working fluid for the coffins. Like a million other things, there was none for sale on dear old Canberra. Well, the obvious thing is to move the coffins near the aft hull, and cool by direct radiation. We don't have the proper equipment to support this, so lately I've been doing my share of archeology. It seems that five hundred years ago, a similar thing happened after an in-system war at Torma. They hacked together a temperature maintenance package that is precisely what we need."
The Future of Search is in Providing Knowledge to Searchers through a Knowledge Graph
To those of us who are used to doing Search Engine Optimization (SEO), we've been looking at URLs filled with content, links between that content, and how algorithms such as PageRank (based upon links pointed between pages) and information retrieval scores (based upon the relevance of that content) have determined how well pages rank in search results in response to queries entered into search boxes by searchers. Web pages have been seen as nodes of information connected by links. This was the first generation of SEO.
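For readers who haven't seen it spelled out, here is a minimal sketch of the PageRank idea mentioned above, using power iteration over a tiny made-up link graph. The damping factor of 0.85 comes from the original PageRank paper, but the graph and the resulting scores are illustrative only.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Power-iteration PageRank: a page's score depends on the scores
    of the pages linking to it, divided among their outgoing links."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            if not outlinks:  # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / len(pages)
            else:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

# A three-page site: "home" is the most linked-to, so it ranks highest.
graph = {
    "home": ["about", "blog"],
    "about": ["home"],
    "blog": ["home", "about"],
}
for page, score in sorted(pagerank(graph).items(), key=lambda kv: -kv[1]):
    print(f"{page}: {score:.3f}")
```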
We know that Google doesn't penalize duplicate pages on the Web, but it may try to identify which version it prefers over other versions of the same page.
I came across this statement about duplicate pages from Dejan SEO on the Web earlier this week, wondered about it, and decided to investigate further:
If there are multiple instances of the same document on the web, the highest authority URL becomes the canonical version. The rest are considered duplicates.
The above quote is from the post Link inversion, the least known major ranking factor (it is not something I am saying in my post). I wanted to see if there might be something similar in a patent. I found something close, but it doesn't say the same thing that Dejan predicts.
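To show what the quoted rule would amount to in practice, here is a minimal sketch, assuming the duplicates have already been identified and each URL carries some authority score. The URLs and scores are made up, and this illustrates Dejan's claim rather than anything the patent I found confirms.

```python
def pick_canonical(duplicates):
    """duplicates: mapping of URL -> authority score for pages whose
    content has been identified as the same document. Returns the
    highest-authority URL as canonical, and the rest as duplicates."""
    canonical = max(duplicates, key=duplicates.get)
    return canonical, [u for u in duplicates if u != canonical]

versions = {
    "https://example.com/widgets": 0.92,        # assumed authority scores
    "http://example.com/widgets": 0.40,
    "https://mirror.example.net/widgets": 0.15,
}
canonical, dupes = pick_canonical(versions)
print("canonical:", canonical)
print("duplicates:", dupes)
```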