Imagine recording your life so that you can search through it and play it back later. Audio and video that you record might be sent to your search database, where pictures you take might be processed: images of faces may go through facial recognition software, and landmarks and objects might be recognized as well. You might be able to write or speak queries like the following:
- What was the playlist of songs at the party last night?
- What were the paintings I saw when I was on vacation in Paris?
- Who were the people at the business lunch this afternoon?
- How many books did I read in May?
It’s possible that you might be able to collect information like this and have it associated with both your user ID and a digital signature to keep it private from others, unless you decided to join a group such as a family, a team of firefighters, or co-workers to create a shared database for one or more events.
Lifestreaming is More Than Picture Taking
Last summer, Mashable told us that Google Glass Will Have Automatic Picture-Taking Mode, mentioning an email from Google co-founder Sergey Brin describing how he had set Google Glass to take a picture every 10 seconds and upload those pictures to a server while he was on a road trip. It appears that this picture taking was only part of the story.
Google was granted a patent last week that shows how you can collect information on devices with cameras and microphones, including wearable eyeglasses like Google Glass. Once collected and processed, you could search via text, spoken command, audio recording, and video recording to find information from your lifestream (a term not used in the patent).
The patent was co-invented by Hartmut Neven, who is the Technical Lead Manager, Image Recognition at Google, according to his LinkedIn profile. He came to Google via a 2006 acquisition, in which Neven Vision brought both facial and object recognition technology to Google.
Inventor David Petrou is a Senior Staff Software Engineer and has been with Google since 2005. He lists Google Glass as the project he is presently working on at both LinkedIn and Google Plus. Jacob Smullyan joined Google in January of 2011, according to the resume on his website, and he tells us there that he’s been working on the Google Goggles project. Inventor Hartwig Adam appears to have come to Google from Neven Vision as well and lists Google Goggles as the project he is working on in his Google Plus profile.
Method and Apparatus for Enabling a Searchable History of Real-World User Experiences
Invented by Hartmut Neven, David Petrou, Jacob Smullyan, and Hartwig Adam
US Patent Application 20130036134
Published February 7, 2013
Filed: June 11, 2012
A method and apparatus for enabling a searchable history of real-world user experiences are described. The method may include capturing media data by a mobile computing device. The method may also include transmitting the captured media data to a server computer system, the server computer system to perform one or more recognition processes on the captured media data and add the captured media data to a history of real-world experiences of a user of the mobile computing device when the one or more recognition processes find a match.
This lifestreaming method may also include transmitting a query of the user to the server computer system to initiate a search of the history of real-world experiences, and receiving results relevant to the query that include data indicative of the media data in the history of real-world experiences.
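The pipeline the abstract describes can be sketched in code. This is a minimal illustration, not anything from the patent itself; the class and function names (`MediaItem`, `ExperienceServer`, the `recognizer` callable) are all hypothetical stand-ins for the device, server, and recognition processes the patent talks about.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class MediaItem:
    media: bytes                       # raw image/audio/video bytes from the device
    timestamp: float
    labels: List[str] = field(default_factory=list)  # filled in by recognition

class ExperienceServer:
    """Server-side sketch: run recognition on captured media, index items
    when recognition finds a match, and answer queries against the history."""

    def __init__(self, recognizer: Callable[[bytes], List[str]]):
        self.recognizer = recognizer   # stand-in for face/object/landmark recognition
        self.history: List[MediaItem] = []

    def ingest(self, item: MediaItem) -> List[str]:
        labels = self.recognizer(item.media)
        if labels:                     # indexed only when recognition finds a match
            item.labels = labels
            self.history.append(item)
        return labels

    def search(self, query: str) -> List[MediaItem]:
        # Return items whose recognized labels match any query term.
        terms = query.lower().split()
        return [item for item in self.history
                if any(term in label.lower()
                       for term in terms for label in item.labels)]
```

With a toy recognizer plugged in, ingesting a photo labeled "Eiffel Tower" and then searching for "eiffel" would return that item, while unrecognized media would never enter the history at all.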
This invention does seem to move Google into the realm of science fiction, where people can collect information and search through it like looking through an old photo album. Imagine a family reunion where attendees contribute their audio and video to a shared film about the event, or a security team that can play back an incident they’ve recorded after running facial recognition on everyone they saw between one hour and another. Will our definition of privacy need to be redefined? If Google doesn’t get there first with this type of lifestreaming recording, will others bring us to that point anyway?
When Lifestreaming Might Be Used
The patent tells us that there might be many ways to trigger recordings through a device like this:
- A purposeful recording, where its use is triggered on and off by a user.
- A location-based recording, where it is set to turn on at specific locations, such as upon entering Google’s headquarters.
- A popular location recording, where the audio and video turn on at locations where lots of other people have recorded in the past.
- An always-on system, where a wearer doesn’t have to initiate the capture of media.
A person using this lifestreaming system could toggle back and forth between different modes of capturing information as well.
The patent also describes how digital signatures attached to recordings would be created, how information captured would be sent to a personal information database and analyzed, and how someone could then go on to search that lifestreaming information.
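To illustrate the signature idea, here is a minimal sketch of binding a recording to a user (or to a shared group key, as in the family or co-worker scenario). The function names are hypothetical, and an HMAC stands in for whatever signature scheme a real implementation would actually use; the patent describes the goal, not the cryptography.

```python
import hashlib
import hmac

def sign_recording(user_key: bytes, user_id: str, media: bytes) -> str:
    """Produce a tag binding a recording to a user ID and a secret key,
    so only holders of the key can later verify (and thus claim) it."""
    message = user_id.encode("utf-8") + media
    return hmac.new(user_key, message, hashlib.sha256).hexdigest()

def verify_recording(user_key: bytes, user_id: str,
                     media: bytes, signature: str) -> bool:
    """True only if the signature was made with the same key over the
    same user ID and media bytes."""
    expected = sign_recording(user_key, user_id, media)
    return hmac.compare_digest(expected, signature)
```

A shared database for a group event would simply mean the group members share the key, so any of them can verify and access recordings contributed to the event.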
Google Glass has the potential to do a lot more than just snap a photo every 10 seconds. It can be the vehicle for creating a search engine of your personal experiences, and one that could be used for business purposes as well. This patent describes a way of using Google Glass as part of a lifestreaming search. Google Glass has evolved into more of an enterprise work device than a consumer device, which makes it questionable as to whether the kind of lifestreaming envisioned in this patent will ever come about.
What might Google do with location history? Here are some other posts I’ve written about Google patents that use location history:
- Google’s Mobile Location History
- Google Tracking How Busy Places are by Looking at Location History
- Google Lifestreaming?
- Google Patents Identifying User Location Spam
- Google Patent Granted on Mobile Location Detection
- Location Extensions Augmented Advertisements
Last Updated June 25, 2019.