Earlier this week, I was fortunate enough to be invited to participate in a panel presentation on the future of SEO in Raleigh, North Carolina. The event was the first Digital Marketing for Business Conference (and it was filled with some great sessions). The presentation wasn’t a PowerPoint-and-pitter-patter type of talk.
Instead, I was joined by Russ Jones (The Google Cache) and Jenny Halasz (Archology) in somewhat of a free-for-all where we each shared our (biased) thoughts about where SEO might take us tomorrow. Phil Buckley, the conference host and Q&A moderator for the session, had purposefully structured the panel so that he asked questions and had each of us take turns answering, and responding to the other panelists’ responses as well. There was disagreement on some simple topics, disagreement on some more complex ones, and oddly, a lot of agreement as well.
Advice on the Future of SEO
In our final takeaways, when Phil asked for them, we each had the chance to impart advice on one thing that people should pay careful attention to, and one thing that our audience members should absolutely do. I had the chance to start, and recommended that when people create content for their pages and research keywords, they pay careful attention to the audience for their sites and the objectives of the sites themselves.
A number of years back, I remember being humbled by a crayon drawing that a friend’s son had made for a homework assignment, listing what he was thankful for that Thanksgiving: his parents, his sister, and shoes. We take so much for granted that we should be thankful to have. A few friends and I had gathered at my friend’s house, and we were all knocked somewhat silent by the picture when he proudly showed it off to his father. Thank you to everyone who stops by here to read, to learn, to share, and to add to the discussion. Thank you, too, for the chance to share the things I find and the things that I learn from you all.
On Monday, I wrote about a recently granted patent from Google that described How Human Evaluators Might Help Decide Upon Rankings for Search Results at Google. Interestingly, this week Google was granted a patent that describes an automated method they might use to check the quality of specific sets of search results.
When Google responds to a searcher’s query, it presents a list of pages and other kinds of documents, such as images or news or videos. The patent was filed before Google introduced universal search, but it probably does a good job of describing something Google might do with web-page-based search results.
A Google patent granted last week describes how the search engine might enable people to experiment with changing the weight and value of different ranking signals for web pages to gauge how those changes might influence the quality of search results for specific queries. The patent lists Misha Zatsman, Paul G. Haahr, Matthew D. Cutts, and Yonghui Wu amongst its inventors, and doesn’t provide much in the way of context as to how this evaluation system might be used. As it’s written, it seems like something the search engine could potentially make available to the public at large, but I’m not sure if they would do that.
In the blog post Google Raters – Who Are They?, Potpiegirl writes about the manual reviewers Google uses to evaluate the relevance and quality of search results. She parsed through a forum where people have been discussing their experiences as reviewers for Google search results, collecting information about how the review program works. The post contains some interesting information about the processes used by people who provide human evaluations for Google’s results, including a discussion of two different types of reviews that they participate in. One involves being given a particular keyword and a URL, and deciding how relevant that page is for that keyword. The other involves being given two different sets of search results for the same query, and deciding which set provides the best results for the query term.
This past February, Google filed for a number of patents that describe aspects of how it might share information from one data center to another, and some of the challenges that entails. Google’s Yonatan Zunger, who revealed on his blog that over the past few months he has been the chief architect for Google’s social systems, including Google Plus, is one of the inventors listed on a number of the patents.
Just how are the nuts and bolts of Google’s data architecture pieced together, from its Web index to storing emails and photos, from user profiles, posts, and responses in Google Plus to maps, photos, and Street View images in Google Maps and Google Earth? Google has a number of data centers around the globe. How does the search giant efficiently move data from one data center to another? How does it back up the files and indexes that it uses? Where does all the user data go that Google collects when people search and browse the Web?
Google has shared some information about how they store and access data over the years in papers and articles like:
When you design a web page with fixed dimensions, set for a specific display resolution, visitors will sometimes arrive at your page using a higher display resolution. When that happens, empty space can show in their browser window while they view your page. At other times, a visitor’s browser window isn’t using their whole monitor display, and if they resize the browser to a larger size, unused browser space can appear as well.
A Google patent application published this morning describes how Google might identify when such unused space exists, and include content within that space. The patent filing tells us that this content can include text, images, videos, animations, and other types of content that can be displayed in a browser.
In a blog post on the Official Google Blog yesterday, Google’s Kent Walker, Senior Vice President & General Counsel, announced that Google had been bidding on Nortel’s remaining patent portfolio and that Google’s bid was selected by Nortel as the “stalking horse bid” in an auction tentatively scheduled to take place in June of this year.
A stalking horse bid is an initial bid on the assets of a bankrupt company chosen by the company as a starting point for competing bids from other potential buyers of the company’s assets. The selection of Google’s bid as this starting point doesn’t mean that Google will end up with Nortel’s patents, but it does indicate that the search giant is pretty serious about acquiring them.
Nortel also issued a press release announcing the Stalking Horse Sale Agreement with Google, and we’re told in that announcement that:
Note: This is an April Fools Day post. The post is a play on the fact that the Google PageRank algorithm was named after Google founder Larry Page, and that there isn’t an equivalent algorithm from Google named after Google co-founder Sergey Brin. With the exception of the link to the “Brin Rank” patent below, all of the links in this post are legitimate, and the post speculates upon what a ranking algorithm from Sergey Brin might look like based upon his history of research, and the increasing use of user-behavior data that Google appears to be looking at based upon the whitepapers and patents that they have published in the past few years.
Google finds itself in an interesting predicament with one of the core aspects of its search technology, PageRank, falling out of exclusive control later this year. Fortunately, Google was granted a new patent this week that looks like it contains a substitute that overcomes some of the weaknesses of Lawrence Page’s search innovation of the 90s.
PageRank was predicated on the assumption that the existence of links between pages on the Web was a signal that could be used to sort and rank pages on the Web, scoring pages on an importance scale based upon the links they received from “important” pages. An “important” page is one that has links to it from other important pages. As inventor Larry Page noted in the first PageRank Patent, Improved Text Searching in Hypertext Systems:
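The recursive idea described above, that a page is important if important pages link to it, can be captured in a few lines of code. Here is a minimal, hypothetical sketch using simple power iteration; the damping factor and iteration count are illustrative choices of mine, not details taken from the patent:

```python
# A toy sketch of the recursive "importance" idea behind PageRank:
# a page's score depends on the scores of the pages linking to it.
# The damping factor (0.85) and iteration count are illustrative only.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {page: 1.0 / n for page in pages}  # start with equal scores
    for _ in range(iterations):
        # every page keeps a small baseline score
        new_rank = {page: (1.0 - damping) / n for page in pages}
        for page, outlinks in links.items():
            if not outlinks:
                continue
            # a page shares its current score equally among its outlinks
            share = damping * rank[page] / len(outlinks)
            for target in outlinks:
                new_rank[target] += share
        rank = new_rank
    return rank

# Example: "a" receives links from both "b" and "c",
# so it ends up with the highest score.
scores = pagerank({"a": ["b"], "b": ["a"], "c": ["a"]})
```

Running this on the tiny three-page graph above, page “a” scores highest because it is linked to by the most pages, and “b” outscores “c” because its one incoming link comes from the important page “a”.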
If you were asked to point out the patent that describes PageRank, and you went searching at the US Patent and Trademark Office (USPTO), you might quickly get confused. A little more confusion comes today, with the granting of a new patent on PageRank to Stanford University. I’ve also located the very first PageRank patent which I haven’t seen anywhere else other than in the USPTO information retrieval system.
There were a number of related patents about PageRank filed in the late 90s by Lawrence Page, addressing different aspects of PageRank, along with a stream of continuation patents that updated the originals. Many of the patents either claim priority from earlier filings, or state that they are continuations of some of the earlier ones.
The earliest filing was for a provisional patent (application number 60035205) which was never officially assigned or published, but was filed on January 10, 1997. Titled Improved Text Searching in Hypertext Systems (pdf – 1.7mb), the patent office information retrieval system contains a document it describes as “Miscellaneous Incoming Letter,” which contains the provisional patent filing and an appendix describing processes being applied for. It is highly recommended reading if you’re interested in the history of PageRank and Google.