Back in 2007, I wrote about a Yahoo patent describing how Yahoo! might crawl a webpage, and then recrawl the same page around a minute later to see if any of the links on the page had changed. It might do that to try to identify what it called “Transient Links,” or links pointing to things like advertisements that might change on every visit to a page, which aren’t links that the search engine would want to crawl and index. The post is A Yahoo Approach to Avoid Crawling Advertisement and Session Tracking Links.
Google was granted a patent this week on a similar topic that looks at “transient” content on web pages. While this kind of content might include advertisements, which change regularly on return visits to a page, it could also include things like a current weather forecast (Warrenton, Virginia, 40 degrees and cloudy), for example. That kind of content changes on a regular basis, but often has little to do with the content found elsewhere on a page.
Google would want to be able to identify transient content so that it wouldn’t index pages based upon it, and it wouldn’t show advertisements that focus upon it either.
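The recrawl-and-compare idea behind the Yahoo patent mentioned above can be sketched quite simply: fetch a page twice, extract the links from each crawl, and treat links that differ between the two visits as likely transient. This is only a toy illustration, assuming two raw HTML snapshots as input; the actual patented methods are considerably more involved.

```python
from html.parser import HTMLParser


class LinkExtractor(HTMLParser):
    """Collect the href values of all anchor tags on a page."""

    def __init__(self):
        super().__init__()
        self.links = set()

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.add(value)


def extract_links(html):
    parser = LinkExtractor()
    parser.feed(html)
    return parser.links


def transient_links(first_crawl_html, second_crawl_html):
    """Links present in only one of two crawls of the same page are
    likely transient (ads, session trackers) and can be skipped."""
    first = extract_links(first_crawl_html)
    second = extract_links(second_crawl_html)
    stable = first & second
    return (first | second) - stable


# Two crawls of the "same" page, roughly a minute apart
crawl_1 = '<a href="/about">About</a> <a href="http://ads.example/?id=111">Ad</a>'
crawl_2 = '<a href="/about">About</a> <a href="http://ads.example/?id=942">Ad</a>'
print(sorted(transient_links(crawl_1, crawl_2)))
```

Here the `/about` link survives both crawls, while the two ad URLs (with different session-style IDs) only appear once each, so both are flagged as transient.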
Continue reading “How Google Might Identify Transient Content on Webpages”
Apple’s latest phone has a slick voice control feature named Siri that lets you tell your phone to do a number of different things, and can even power searches that it will answer for you. There’s been some speculation that this type of verbal interaction might harm Google because it would bypass the search advertisements that are Google’s primary way of earning money. Looks like Google isn’t taking that possibility lightly.
Will the future of searching involve speech based searches that we do on our phones, with results shown on our TV? A Google patent application describes the possibility.
Continue reading “Forget Siri: Google Voice Phone Searches May Display Results on TV”
Anna Patterson, Creator of Phrase Based Indexing
The builder of the largest search engine in the world during the first decade of the 21st century joined Google shortly after building that search engine, and possibly licensed the technology behind it to Google. She worked for Google for a number of years, creating a way of indexing pages based upon the meaningful phrases that appear on those pages, looking at how phrases co-occur on pages to cluster and rerank those pages, using the phrases to identify spam pages and pages with duplicate content, and creating taxonomies and snippets for pages using phrases. This phrase-based indexing system provided a way to defeat Googlebombing, and to determine how much anchor text relevance should be passed along with links.
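The co-occurrence idea at the heart of phrase-based indexing can be illustrated with a toy example: extract candidate phrases from each document and count how often pairs of phrases show up together. This is only a sketch assuming simple word bigrams as "phrases"; the actual patents select statistically significant "good phrases" and do far more with them.

```python
from collections import Counter
from itertools import combinations


def phrases(text, n=2):
    """Candidate phrases: simple word bigrams. (The patents instead
    select 'good phrases' by statistical significance.)"""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}


def cooccurrence_counts(documents):
    """Count how often each pair of phrases appears together within a
    document -- related phrases co-occur far more often than chance."""
    counts = Counter()
    for doc in documents:
        for pair in combinations(sorted(phrases(doc)), 2):
            counts[pair] += 1
    return counts


docs = [
    "president of the united states",
    "the united states president lives in the white house",
    "the white house press office",
]
counts = cooccurrence_counts(docs)
print(counts[("the united", "united states")])  # co-occur in two documents
```

Phrase pairs with high co-occurrence counts (relative to what chance would predict) signal topical relatedness, which is the raw material the patents use for clustering, reranking, and spotting spam pages stuffed with unrelated phrases.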
Then Anna Patterson left Google to start the search engine Cuil, which was supposed to be a Google killer. Except it wasn’t. Now she’s back at Google, and looks to be working on phrases again.
Multiple Generations of Patents involving Phrase Based Indexing
Continue reading “10 Most Important SEO Patents, Part 5 – Phrase Based Indexing”
Google acquired a number of patents from a company that’s presently suing a number of major developers of wireless hardware devices for patent infringement. The company is Golden Bridge Technology (GBT), and they tell us on their “Meeting the Challenge” page:
One of GBT’s most significant group of patents pertains to the UMTS W-CDMA Standard. All equipment manufacturers and service providers providing 3rd Generation (“3G”) wireless service adhere to the technical specifications set by this standard. GBT has a number of patents that are essential to this standard and offers for license its portfolio of UMTS patents.
GBT has at least two pending lawsuits in Federal District Court in the District of Delaware based upon a couple of wireless patents, 6,574,267 and 7,359,427. Those patents both have the title, “Rach ramp-up acknowledgement.” The GBT Meeting page also tells us that their Random Access Channel (“RACH”) Ramp-up and Acknowledgment technology is the most widely used of their technologies.
Continue reading “Google Acquires Significant 3G Patents”
PageRank is a measure that stands for the probability that someone who starts out on any page on the Web, randomly clicks on the links they find on pages, and every so often gets bored and teleports (yes, that is official technical search engineer jargon) to a random page will eventually end up at a specific page.
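That random-surfer probability can be computed by repeatedly simulating one step of the surfer’s behavior until the scores settle down. Here is a minimal power-iteration sketch; the toy three-page web and all names are illustrative, not taken from any patent.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Power-iteration PageRank: with probability `damping` the random
    surfer follows an outgoing link; otherwise they 'teleport' to a
    page chosen uniformly at random."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        # Teleportation contribution, shared equally by every page
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            if outlinks:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
            else:
                # Dangling page: spread its rank across all pages
                for p in pages:
                    new_rank[p] += damping * rank[page] / len(pages)
        rank = new_rank
    return rank


# A toy three-page web: "c" collects links from both "a" and "b"
toy_web = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(toy_web)
print(max(ranks, key=ranks.get))
```

Note that every link out of a page gets an equal share of that page’s rank here, which is exactly the assumption the reasonable surfer model discussed below revisits.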
Larry Page referred to this person clicking on links as following a “random surfer model.” Thing is, most people aren’t so random. It’s not like we stand at some street corner somewhere and just randomly set off in some direction. (OK, I confess that I do sometimes do just that, especially when faced with a sign like the one below.)
Imagine someone from Google waking up in the middle of the night, with the thought, “Hmmmm. Maybe we’re not quite doing PageRank quite right. Maybe we should be doing things like paying attention to where links appear on a page, and other things as well.”
Continue reading “10 Most Important SEO Patents: Part 4 – PageRank Meets the Reasonable Surfer Model”
Classifying Web Blocks
In the earlier days of SEO, many search engine optimization consultants stressed placing important and valuable content toward the top of a page’s HTML code, based upon the idea that search engines would weigh prominent content more heavily if it appeared early on in a document. There are still very well known SEO consultants whose sites include information about a “table trick,” describing how to use tables to move the main body content for a page above the sidebar navigation within the HTML. I’ve also seen a similar trick done with CSS absolute positioning, where content appearing higher on the page that visitors actually see sits lower in the HTML code for that page.
Back in 2003, the folks at Microsoft Research Asia published a paper titled VIPS: a Vision-based Page Segmentation Algorithm. The abstract for the paper describes the approach, telling us that:
A new web content structure analysis based on visual representation is proposed in this paper. Many web applications such as information retrieval, information extraction and automatic page adaptation can benefit from this structure. This paper presents an automatic top-down, tag-tree independent approach to detect web content structure. It simulates how a user understands web layout structure based on his visual perception. Comparing to other existing techniques, our approach is independent to underlying documentation representation such as HTML and works well even when the HTML structure is far different from layout structure.
Continue reading “10 Most Important SEO Patents: Part 3 – Classifying Web Blocks with Linguistic Features”