One of the more interesting discussions about Google Glass I’ve seen recently was in a forum where one of the participants was describing his own homemade version of Google Glass, which he named “Flass” (if someone at Google happens to be reading this, you should send him a pair of Google Glass, just because). What was really interesting was that he was using a MyVu display in his clone.
I call it interesting because Google seems to have acquired a number of the patents from The MicroOptical Corporation, which was the predecessor to MyVu. MyVu no longer appears to be in business, and according to LinkedIn, the founder, CEO, and CTO of MyVu is now the Director of Operations at Google X. Here’s a view of one of the pairs of glasses created by MyVu (MicroOptical):
Imagine recording your life, so that you can search through it, and play it back later. Things that you record through audio and video might be sent to your own personal search database, where pictures you take might be processed. Images of faces may go through facial recognition software. Landmarks and objects might be recognized as well. You might be able to write or speak queries like the following:
- What was the playlist of songs at the party last night?
- What were the paintings I saw when I was on vacation in Paris?
- Who were the people at the business lunch this afternoon?
- How many books did I read in May?
It’s possible that you might be able to collect information like this, and have it associated with both your user ID and a digital signature to keep it from others, unless you decided to join with a group such as a family, firefighters, or co-workers, to create a shared database for one or more events.
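To make the idea concrete, here’s a toy sketch of what such a personal search database might look like. Everything in it — the class names, the event fields, the sample data — is my own illustration, not anything described in a Google filing:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class LifeEvent:
    """One recorded moment: a timestamp plus whatever a recognition
    pipeline extracted from it (a song, a face, a landmark, a book)."""
    when: datetime
    kind: str                      # e.g. "song", "face", "landmark", "book"
    label: str                     # the recognized name of the thing
    tags: set = field(default_factory=set)

class LifeLog:
    """A toy personal search index over recognized events."""
    def __init__(self):
        self.events = []

    def record(self, event: LifeEvent):
        self.events.append(event)

    def query(self, kind=None, tag=None, month=None):
        """Answer questions like 'what songs played at the party?'"""
        hits = self.events
        if kind is not None:
            hits = [e for e in hits if e.kind == kind]
        if tag is not None:
            hits = [e for e in hits if tag in e.tags]
        if month is not None:
            hits = [e for e in hits if e.when.month == month]
        return [e.label for e in hits]

log = LifeLog()
log.record(LifeEvent(datetime(2013, 5, 4, 21), "song", "Get Lucky", {"party"}))
log.record(LifeEvent(datetime(2013, 5, 5, 13), "face", "Alice", {"lunch"}))
log.record(LifeEvent(datetime(2013, 5, 20, 9), "book", "Dune", {"reading"}))

print(log.query(kind="song", tag="party"))   # the playlist at the party
print(len(log.query(kind="book", month=5)))  # books read in May
```

The sharing idea from the paragraph above would amount to several people’s `LifeLog` objects being merged for events they tag in common.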
In the first part of this series, Google Glass Hardware Patents, Part 1, I looked at 5 patent filings about Project Glass published by Google over the past couple of weeks. Those included (1) a closer look at the optical systems that Google Glass might use, (2) how a bone conducting system might provide audio to wearers, (3) enhancing a person’s vision in real time to do things like zoom on objects that might be hard to see, (4) using input from other devices such as a phone or laptop to run Google Glasses, and (5) a patent filing about speech input for commands and queries that you can run on your glasses.
In a comment on one of my earlier posts about Project Glass, someone joked about running up to a person wearing the device and triggering a search by voice command on their glasses. The last patent filing I mentioned above told us that the glasses would ignore commands from others. It’s good to see that someone at Google anticipated that potential problem. I’m surprised at how thorough the patent filings about Project Glass have been. I’m also impressed by the volume of patent filings that have been published by Google for these heads-up displays. The Google Glass Foundry workshops for developers started yesterday – I’m guessing that the developers who participated probably had a lot to see and discuss.
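The filing doesn’t spell out how the glasses would tell the wearer’s voice from a stranger’s, but the idea can be sketched as a simple speaker check. The embedding vectors and the threshold below are my own invented placeholders, not Google’s method:

```python
import math

def similarity(a, b):
    """Cosine similarity between two voice 'embeddings' (toy vectors)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

class VoiceGate:
    """Accept a spoken command only if it sounds like the enrolled owner."""
    def __init__(self, owner_embedding, threshold=0.9):
        self.owner = owner_embedding
        self.threshold = threshold

    def accept(self, command_embedding):
        return similarity(self.owner, command_embedding) >= self.threshold

gate = VoiceGate(owner_embedding=[0.9, 0.1, 0.4])
print(gate.accept([0.88, 0.12, 0.41]))  # the owner speaking: True
print(gate.accept([0.1, 0.9, 0.2]))     # a passerby shouting a command: False
```

A real system would compute those embeddings from audio during an enrollment step; the point here is only that a single threshold comparison is enough to ignore the prankster in that comment.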
Google’s Project Glass has the potential to bring something completely new to consumers – wearable computing devices that could revolutionize how consumers interact with the Web. The augmented reality glasses aren’t yet available to consumers, and are a work in progress. Google is holding its first developer workshops this week, offering developers the chance to use the devices for the first time, so that they can start coming up with applications for them.
Project Glass is heavily visually oriented, and many of the demonstrations of the device show off the ability of wearers to take both photographs and video while wearing the glasses. Chances are good that we’ll see the different visual queries that Google Goggles offers, including object and facial recognition, barcode search, and searches for landmarks, books, and other types of things as well.
On Friday afternoon, I took a walk to the auto repair shop working on my car, about a mile and a half down the road. A phone alert made me aware of a Google Now card springing up to give me directions to the shop, and telling me that it would take me less than a minute to get there. I guess Google Now wasn’t looking at the accelerometer on my phone, or it would have realized that I was moving too slowly to be driving. I couldn’t help but think though how Google Now could be a feature that would work well in the heads-up display that Google’s working on under the name Google Glass.
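That mistake would be easy to avoid with even a crude speed check. Here’s a sketch of the kind of mode inference I had in mind — the thresholds are my own rough guesses, not anything Google has published:

```python
def infer_mode(meters: float, seconds: float) -> str:
    """Guess how someone is traveling from their average speed.
    Thresholds are illustrative guesses, not Google's values."""
    mps = meters / seconds
    if mps < 2.5:        # roughly walking pace or slower
        return "walking"
    if mps < 7.0:        # running / cycling range
        return "cycling"
    return "driving"

# A mile and a half (~2414 meters) on foot takes about half an hour:
print(infer_mode(2414, 30 * 60))   # walking
# The same distance covered in 2 minutes is clearly a car:
print(infer_mode(2414, 2 * 60))    # driving
```

With the walking mode detected, the card could have quoted a half-hour walk instead of a sub-minute drive.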
As we wait to see what kinds of features might be incorporated into Google Glass, it appears that Google acquired a patent first filed a dozen years ago, granted in 2006, and recorded at the USPTO on Thursday. The patent was originally filed by Agilent Technologies, transferred to a company in Singapore in 2006, and then to Intellectual Discovery Co., located in South Korea. Google was assigned the patent on November 16, 2012, and the transaction was recorded at the USPTO on January 8, 2013.
I’m on the second day of a trip to New York City, giving presentations at SMX East on both the potential impact of mobile devices on the future of search, and on how reputation and authority signals might impact the rankings and visibility of authors, publishers, and commenters on the Web.
My first presentation was in the “local and mobile” track of the conference as part of a session titled “Meet Siri: Apple’s Google Killer?” where I joined Bryson Meunier, Will Scott, Andrew Shotland, and moderator Greg Sterling in discussing the potential impact of Apple’s Siri and voice search on SEO and search.
When I read the title for this proposed session a couple of months back, I couldn’t help but start to draft a pitch to join in on the conversation. I’ve been carefully watching patents and papers from Google and Apple and others about inventions and interfaces that might transform the way we search in the future, and the way that people might share information and market businesses online.
Google Glasses have the potential to turn the growing range of visual queries possible with Google Goggles into an important aspect of the future of search and SEO. They also may make advertising using location based services much more effective. Are you planning ahead?
Over the last three weeks, we’ve been seeing a stream of patents granted to Google involving their heads-up display device, Project Glass. These include design patents, and utility patents that hint at things like a touchscreen on the side of the glasses, sonar sensors built into them, and a visual display of sounds around the wearer of the glasses, including direction and intensity. I wrote about the first two batches of patents in Google Glasses Design Patents and Other Wearables and More Google Glasses Patents: Beyond the Design. Google was granted another related patent this past week, titled Methods and devices for augmenting a field of view, which “augments” the field of view of human beings by helping things that might be of interest stand out, even if they are beyond the normal view of a person in terms of distance or outside of a 180 degree peripheral viewing field.
Google’s Project Glass seems to be moving closer and closer to reality, with the granting of 7 more patents today. Last week, I pointed out 4 patents related to the project in Google Glasses Design Patents and Other Wearables. Of those, 3 were design patents filed to protect the look and feel of the glasses, and the fourth patent described a way of using an infrared (IR) reflective surface on rings or gloves or even fingernails to provide input for the eyeglass display device. The patents granted today include only 1 design patent, and 6 patents that describe some of the more technical details about how Google’s Heads Up Display might work.
The first patent is a design patent from inventors who worked on the three design patents granted last week, Matthew Wyatt Martin and Maj Isabelle Olsson (Mitchell Joseph Heinrich was a co-inventor of one of the earlier three).