A recently granted Google patent explains how Google may find and recommend locations where people can take pictures. It describes how something like Google Now might recommend “photogenic locations to visit.”
The patent tells us:
The present disclosure relates generally to systems and methods for recommending photogenic locations to visit. More particularly, the present disclosure relates to prompting a mobile device user that a photogenic location is nearby based on clusters of photographs.
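The patent doesn’t publish its clustering algorithm, but the core idea of “clusters of photographs” can be sketched with a toy approach: bucket geotagged photos into grid cells roughly 100 meters on a side, and treat cells with many photos as photogenic spots worth prompting a nearby user about. The cell size and the threshold of 5 photos below are my own illustrative assumptions, not figures from the patent.

```python
from collections import Counter

def photogenic_cells(photos, cell_deg=0.001):
    """Bucket (lat, lon) photo coordinates into ~100 m grid cells and return
    the cells dense enough to suggest a photogenic spot (threshold assumed)."""
    counts = Counter(
        (round(lat / cell_deg), round(lon / cell_deg)) for lat, lon in photos
    )
    return {cell for cell, n in counts.items() if n >= 5}

def nearby_photogenic(user_lat, user_lon, cells, cell_deg=0.001):
    """True if the user's current grid cell, or any neighboring cell,
    contains a photo cluster -- the trigger for a prompt."""
    r, c = round(user_lat / cell_deg), round(user_lon / cell_deg)
    return any(
        (r + dr, c + dc) in cells for dr in (-1, 0, 1) for dc in (-1, 0, 1)
    )
```

A production system would presumably use a proper spatial clustering method and weight photos by quality or popularity, but the grid sketch captures the prompt-when-nearby behavior the patent describes.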
The article seems more filled with questions than answers, such as where Google is getting the menu information, and even why they are publishing menu information. I suspect that a lot of restaurants will be begging Google for ways to submit their latest menus in the near future.
Knowing what the menu might look like at a restaurant might make the difference between whether you will dine there, or drive past. For example, if I didn’t know better based on word of mouth, I wouldn’t begin to suspect that the Inn at Little Washington, in the middle of nowhere rural Virginia, might be one of the best restaurants in the United States. Here’s part of their menu:
That phone in your pocket is filled with applications, with sensors to measure movement and the world around us, with communications tools that put us in touch with work, home, family, friends, service providers and strangers.
That phone in your pocket is poised to teach itself how to work better, based upon how you use it, which applications you run, and how you use it to communicate with others.
A patent granted to Google last week explores different ways that parts and pieces of your phone can communicate with each other to remember settings in different contexts, to re-rank information based upon location and time and place, under a mobile machine learning system.
Imagine, for instance, landing at San Francisco International Airport to visit your brother. As you step off the plane, your phone resets its location and displays time and weather information on its home page for San Francisco. You open your phone, and the number for your limo appears at the top, with your hotel next, and then your brother’s home number (it would show his work number if it were earlier in the day).
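The patent doesn’t spell out a scoring formula for that kind of contextual re-ranking, but the behavior in the airport scenario can be sketched as a simple score that combines a contact’s base relevance with bonuses for the user’s current city and time of day. The contact records, weights, and field names below are all invented for illustration.

```python
# Hypothetical contact records; "boost_city" and "evening" are invented
# context signals, not fields from the patent.
CONTACTS = [
    {"name": "limo service",   "base": 1.0, "boost_city": "San Francisco"},
    {"name": "hotel",          "base": 0.8, "boost_city": "San Francisco"},
    {"name": "brother (home)", "base": 0.5, "boost_city": None, "evening": True},
    {"name": "brother (work)", "base": 0.5, "boost_city": None, "evening": False},
]

def rank_contacts(city, hour, contacts=CONTACTS):
    """Score each contact by base relevance plus context bonuses for the
    user's current city and time of day, then sort highest first."""
    def score(c):
        s = c["base"]
        if c.get("boost_city") == city:
            s += 1.0  # contact tied to where the user is right now
        if "evening" in c:
            # reward contacts whose preferred time of day matches the clock
            s += 0.5 if c["evening"] == (hour >= 17) else -0.5
        return s
    return [c["name"] for c in sorted(contacts, key=score, reverse=True)]
```

With an evening arrival in San Francisco, the limo and hotel float to the top and the brother’s home number outranks his work number; earlier in the day, the work number would win instead, just as the scenario describes.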
Imagine recording your life, so that you can search through it, and play it back later. Audio and video that you record might be sent to your own personal search database, where pictures you take could be processed: images of faces may go through facial recognition software, and landmarks and objects might be recognized as well. You might be able to write or speak queries like the following:
What was the playlist of songs at the party last night?
What were the paintings I saw when I was on vacation in Paris?
Who were the people at the business lunch this afternoon?
How many books did I read in May?
It’s possible that you might be able to collect information like this, and have it associated with both your user ID and a digital signature to keep it from others, unless you decided to join with a group such as a family, firefighters, or co-workers, to create a shared database for one or more events.
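The queries above boil down to filtering captured media by recognized tags and timestamps. Here is a minimal sketch of such a personal search database; every record, field name, and tag is invented for illustration, and real recognition software would of course supply the tags automatically.

```python
from datetime import date

# A toy in-memory "personal search database": each captured item carries a
# timestamp, a media kind, and tags that recognition software might attach.
RECORDS = [
    {"when": date(2013, 5, 2),  "kind": "book",  "tags": ["reading"]},
    {"when": date(2013, 5, 9),  "kind": "book",  "tags": ["reading"]},
    {"when": date(2013, 5, 20), "kind": "photo", "tags": ["painting", "Paris"]},
    {"when": date(2013, 6, 1),  "kind": "photo", "tags": ["lunch", "Alice", "Bob"]},
]

def search(records, kind=None, tag=None, month=None):
    """Filter records by media kind, a recognized tag, and a (year, month)
    pair -- enough to answer questions like "how many books in May?"."""
    out = []
    for r in records:
        if kind and r["kind"] != kind:
            continue
        if tag and tag not in r["tags"]:
            continue
        if month and (r["when"].year, r["when"].month) != month:
            continue
        out.append(r)
    return out
```

Answering “How many books did I read in May?” then becomes a count of `search(RECORDS, kind="book", month=(2013, 5))`, and the Paris-paintings query a tag lookup.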
Google collects information about where you compute from, and provides location-based services based upon where you travel. Google might follow processes both to protect that information and to analyze it, using it to shield people from spam and scrapers.
Post a review from Germany about a restaurant, and then 15 minutes later post one from Hawaii about another restaurant: it’s spam. Drive down a highway where the cell towers collecting information about your journey are located in the middle of Lake Michigan: it’s likely spam. If GPS says you’re in NYC, and you then connect via Wi-Fi in Wisconsin a few minutes later: spam. This information may not even come from you, but rather from others who might impersonate you.
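The Germany-to-Hawaii example is a classic “impossible travel” check: compute the great-circle distance between two timestamped events and flag the pair if the implied speed is faster than any plausible trip. The 1,000 km/h ceiling below (roughly airliner speed) is my own assumed threshold, not a figure from the patent.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(a))  # mean Earth radius ~6371 km

def impossible_travel(e1, e2, max_kmh=1000):
    """Flag two (lat, lon, unix_seconds) events whose implied speed exceeds
    a plausibility ceiling -- a sketch of one location-spam signal."""
    lat1, lon1, t1 = e1
    lat2, lon2, t2 = e2
    hours = abs(t2 - t1) / 3600
    if hours == 0:
        return True  # two places at the same instant is never plausible
    return haversine_km(lat1, lon1, lat2, lon2) / hours > max_kmh
```

A real system would combine many such signals, but this single check already catches the two-reviews-an-ocean-apart case from the paragraph above.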
Google was granted a patent last week that explores how it could use location-based data to identify spammers and scrapers. It would also place user location information in a quarantine, and possibly hide the starting and/or ending points of journeys recorded from mobile devices, both to protect users’ privacy and to assess whether the information is spam. Location information flagged this way could be kept from being used by the search engine in location-based services, or in other services that Google might offer.
Google has a lot invested in knowing where you are. The future of search, and many of the services that Google offers, are going to rely upon that location being accurate, too. It can’t be off by 30 meters, like it might be with cell tower triangulation. It can’t rely upon a GPS system initially built for aircraft with multiple antennas. It needs to be able to work indoors as well as outdoors. Unlike the electronic navigation device below, it also needs to be really small.
The Global Positioning System, or GPS, is a satellite-based navigation system built to overcome problems with previous navigation systems. We know that Google has used GPS in mobile devices to make a number of location-based services possible.
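The geometry underneath both GPS and tower triangulation is distance measurements to known reference points. A real receiver solves for position and clock error from four or more satellites, but the flavor of it can be shown with a toy 2-D trilateration: subtract the circle equations for three known anchors to get a linear system, and solve it. This is purely illustrative, not the actual receiver math.

```python
def trilaterate(anchors, dists):
    """Solve for a 2-D position from three known anchor points and measured
    distances by linearizing the circle equations (a toy illustration of the
    geometry behind GPS and cell-tower positioning)."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = dists
    # Subtracting circle 1 from circles 2 and 3 cancels the x^2, y^2 terms,
    # leaving two linear equations a*x + b*y = c.
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1  # zero only if the anchors are collinear
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)
```

With perfect distances this recovers the position exactly; the 30-meter errors mentioned above come from noisy distance estimates, which is why real systems use many measurements and least-squares fits rather than three clean circles.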
I remember 10th grade keyboarding class in high school, which was a required course for everyone. I’ve typed a lot more characters than I ever expected in the days since, but I’ve been wondering how much longer people will be typing, or at least typing on a physical device intended just for typing. I’ve tried the “sliding” method of typing on my phone, with limited results. Hunt and peck still seems to work better for me, and I’m getting fewer “big finger” errors on my phone’s small virtual keyboard.
The picture above is from the “Bain Collection” at the US Library of Congress Prints and Photographs Reading Room. I’m not sure if it’s really the very first typewriter, but that’s what’s printed on the image, and that’s what the Library of Congress is calling it. The Bain Collection contains images from one of the earliest news picture agencies. While looking through the pending patents from this week, I came across this patent filing for a touch screen keyboard, from Google. Are we seeing the last days of physical keyboards approaching?
In the first part of this series, Google Glass Hardware Patents, Part 1, I looked at 5 patent filings from Google published over the past couple of weeks, about Project Glass. Those included (1) a closer look at the optical systems that Google Glass might use, (2) how a bone conducting system might provide audio to wearers, (3) enhancing a person’s vision in real time to do things like zoom on objects that might be hard to see, (4) using input from other devices such as a phone or laptop to run Google Glasses, and (5) a patent filing about speech input for commands and queries that you can run on your glasses.
One of the things someone joked about in a comment on one of my earlier posts about Project Glass was running up to someone wearing the device and triggering a search by voice command on their glasses. The last patent filing I mentioned above told us that the glasses would ignore commands from others. It’s good to see that someone at Google anticipated that potential problem. I’m surprised at how thorough the patent filings about Project Glass have been. I’m also impressed by the volume of patent filings that have been published by Google for these heads-up displays. The Google Glass Foundry workshops for developers started yesterday – I’m guessing that the developers who participated probably had a lot to see and discuss.