Did Google sidestep a lawsuit by acquiring patents involving electronic phone payments?
One initiative that Google has been hard at work on is making it easy for people to make payments electronically by phone. Google Wallet has been available as an Android app on some phones, and it looks like it’s been moving beyond the need to use near field communication (NFC) to make payments.
Last year, on September 8, 2011, E-Micro Corporation filed a patent infringement lawsuit against a group of defendants, including: Google, Inc., Samsung Electronics Co., Ltd., Samsung Electronics America, Inc., Samsung Telecommunications America, L.L.C., Sprint Nextel Corporation, Sprint Spectrum L.P., Nextel Operations, Inc., Sprint Solutions, Inc., Amazon.com, Inc., Best Buy Co., Inc. and BBY Solutions, Inc.
Imagine that a search engine might insert place markers into a web page, perhaps with something like the new Google Tag Manager. These markers could enable a search engine to calculate how long it might take someone to read that page. A newly granted patent from Google describes why it might insert such markers (without really explaining how it might insert them) to determine the reading time of a page.
The process described in the patent might try to understand how different features of a page influence how much time a visitor takes to read it. It would then use that understanding to predict how those features might affect the reading time of other pages that don’t have markers inserted into them. These features could include a document’s language, layout, topic, and the length of its text, all things that could affect traffic across the web or at specific websites.
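The prediction step could be sketched as a simple regression: pages that carry timing markers supply training examples (page features paired with observed reading times), and the fitted model then estimates reading times for pages without markers. The feature set, sample numbers, and the linear model below are my own illustration of that idea, not the patent’s actual method:

```python
import numpy as np

# Hypothetical training data from pages that had timing markers inserted.
# Features (all illustrative): word count, a layout-complexity score,
# and a topic-difficulty score. Target: observed seconds spent reading.
X = np.array([
    [300, 1.0, 0.2],
    [900, 2.0, 0.5],
    [1500, 1.5, 0.8],
    [600, 3.0, 0.4],
], dtype=float)
y = np.array([80.0, 250.0, 400.0, 180.0])

# Add an intercept column and fit ordinary least squares.
A = np.hstack([np.ones((X.shape[0], 1)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_reading_time(words, layout, topic):
    """Estimate reading time (seconds) for a page without markers."""
    return float(coef @ np.array([1.0, words, layout, topic]))

print(predict_reading_time(1200, 1.5, 0.6))
```

A real system would presumably use far richer features and a more robust model, but the shape of the idea is the same: learn from the marked pages, predict for the unmarked ones.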
Some days Google seems more like a science fiction factory than a search engine, developing products like driverless cars and augmented reality glasses. An academic project at Berkeley adds another element to the mix: robots that can pick up commonplace objects around your home and put them in their proper places.
A paper submitted to the IEEE International Conference on Robotics and Automation, to be held in Karlsruhe, Germany in May 2013, describes the role that Google’s visual search plays in helping robots understand the objects they might try to pick up, before they do. In Cloud-Based Robot Grasping with the Google Object Recognition Engine, we’re told about cloud-based robots that can view objects and send queries about them to a version of Google Goggles in the cloud to learn more about those objects and the best way to grasp them.
Google Goggles is Google’s visual search app, which lets you take photographs and send them to Google to potentially perform facial recognition searches, OCR searches for text in images, product and bar code recognition, recognition of landmarks, places, and other named entities, and more. I spent a few hours at my Mom and Dad’s house a couple of weekends ago taking pictures of almost every photo and painting they had on their walls, and seeing if Google Goggles recognized any of them.
Another thing the visual search engine is capable of is recognizing objects, and the Berkeley team, with the assistance of James Kuffner of Google, appears to have achieved a goal that had eluded them in the past by using Google Goggles. From the paper’s introduction:
Google’s local search may be getting smarter one Street View scene at a time. A few years back, I jokingly made a robots.txt sign for my front door that had the following statement in it:

User-agent: *
Disallow: /
In the root-level directory of a website, a robots.txt file containing those two lines would tell Google’s crawler to stay away from every page on the site. On the front of a home in my small town, it might have gotten some odd looks, but that’s about it. I had expected that at some point Google would send a Street View car or two down my street, and I would have been able to write a blog post with a Street View image of the front of my house under a title along the lines of “Google Ignores Robots.txt File: Indexes My House.” I ended up not leaving the sign up, but I’m second-guessing that now that I know Street View cars can read. 🙂
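What those two lines mean to a compliant crawler can be checked with Python’s standard robots.txt parser; the crawler name and URL below are just placeholders:

```python
from urllib.robotparser import RobotFileParser

# The two disallow-all lines from the sign, parsed the way a
# rules-respecting crawler would read them.
rules = RobotFileParser()
rules.parse(["User-agent: *", "Disallow: /"])

print(rules.can_fetch("Googlebot", "http://example.com/any-page"))  # False
```

Any user agent, any path: the answer is always no, which is exactly what the sign was (jokingly) asking of the Street View car.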
That really shouldn’t have been a surprise back then. I wrote a post in 2007 titled Better Business Location Search using OCR with Street Views, which described how Google might use OCR to gather information from signs captured in Street View imagery. The patent filing I wrote about didn’t really discuss how that information might be used, but it presented the possibility of its use. I suspect my real-life robots.txt sign would have been ignored back then, though the drivers of those cars had learned by that point that signs like “Private Street” and “Military Base” marked areas they couldn’t film.
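The underlying matching idea (comparing text read off a sign to a business listing’s online data, and scoring how well they agree) can be sketched with a simple string-similarity measure. The function name and sample data below are my own illustration, not anything taken from Google’s patent:

```python
from difflib import SequenceMatcher

def match_score(sign_text: str, listing_name: str) -> float:
    """Similarity between OCR'd sign text and a listing name, from 0 to 1."""
    return SequenceMatcher(None, sign_text.lower(), listing_name.lower()).ratio()

# Hypothetical OCR output from a storefront sign, and candidate
# business listings pulled from a local index.
ocr_text = "JOE'S PIZZA & SUBS"
candidates = ["Joe's Pizza and Subs", "Joseph's Tailoring", "Moe's Pizza"]

best = max(candidates, key=lambda name: match_score(ocr_text, name))
print(best)  # "Joe's Pizza and Subs"
```

A production system would weigh many more signals (location, category, phone numbers read off the sign), but even this crude score shows how street-level text could “calibrate” confidence in a listing.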
Google was granted a patent last week that gives us a look at how information from street-level signs might be collected and indexed by Google, and compared against online information about the same locations to try to “calibrate” and “score” information about the places listed in Google’s index. Here’s an image from the patent that shows at a glance the kinds of information it might attempt to read:
Voice Queries and Visual Queries and Automated Assistants
I’m on the second day of a trip to New York City, giving presentations at SMX East on both the potential impact of mobile devices on the future of search, and on how reputation and authority signals might affect the rankings and visibility of authors, publishers, and commentators on the Web.
My first presentation was in the local and mobile track of the conference, as part of a session titled “Meet Siri: Apple’s Google Killer?” where I joined Bryson Meunier, Will Scott, Andrew Shotland, and moderator Greg Sterling in discussing the potential impact of Apple’s Siri and voice search on SEO and search.
When I read the title of this proposed session a couple of months back, I couldn’t help but start drafting a pitch to join the conversation. I’ve been carefully watching patents and papers from Google, Apple, and others about inventions and interfaces that might transform the way we search, shifting the focus toward voice queries and visual queries, and that might also transform the way people share information and market businesses online.