These details come from an anonymous source who also shared a few more details about the project. The report states that a new feature will be integrated, allowing users to outline specific areas of an image in order to directly target their searches.
In Google Goggles, one can only search the whole image, which has proven to produce plenty of mismatches. Images often contain distractions, background items, and other objects that may throw off a search result. According to the sketch provided, the system will also be able to recommend retailers for purchasing products, as well as other details.
Furthermore, it is said this technology has also been tested in "wearable computing devices". This suggests the technology may come to products like Google Glass and possibly even VR (or AR) headsets.
Today we’re to understand – from an anonymous source, as it were – that Google may soon be releasing a new camera feature it has had in testing for some time. This feature would allow users to use the standard Google Camera app to search for information based on what they can see – and more than that.
Details given to Android Authority through an anonymous source state that the integration would include a new feature "allowing users to outline specific areas of the image" in order to directly target their searches. This would be an improvement on the current iteration of Google Goggles, which only allows you to search whole images.
I’ve noticed some stories about new visual search features from Google that have been leaked to the media by an anonymous source. I also noticed a recently published patent application filed by Google that contains at least one image identical to one supplied to Android Authority, which makes me wonder whether the anonymous source who contacted them was from Google.
The patent filing does describe how a person might take an image from a photograph and outline part of that image with a stylus or a finger, and have search features available to them like those available from within Google Goggles.
Back in 2011, I wrote the post The Future of Google's Visual Phone Search?, about a patent filing that would supposedly offer the ability to run several types of searches simultaneously on an image, of the kinds offered through the Google Goggles app. That type of rich visual search never materialized from Google. The types of search under that richer visual search patent filing from 2011 included:
- A facial recognition search
- An OCR search for text in the image
- An image-to-terms search system, which may use object recognition
- A product recognition search, which could recognize two-dimensional images such as book covers and CDs, and three-dimensional images such as furniture
- A bar code recognition search
- A named entity recognition search, which could provide information about specific people, places, and things
- A landmark recognition search, recognizing actual landmarks and possibly images advertised on billboards
- A place recognition search that might be aided by geo-location information provided by something like a GPS receiver
- A color recognition search, and
- A similar image search, which looks for images similar to the one that you’ve used as a query
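To make the idea of running several of these specialized searches on one image concrete, here is a minimal sketch of how a fan-out over search types might be organized. The function names, the dictionary-based "image" stand-in, and the result format are my own illustration, not anything described in Google's filing:

```python
# Hypothetical sketch: fan a single image out to several specialized
# search types, as the 2011 patent filing described, and keep only the
# searches that returned results. All names here are assumptions.

def run_visual_searches(image, searches):
    """Run each specialized search over the image and collect any hits."""
    results = {}
    for name, search_fn in searches.items():
        hits = search_fn(image)
        if hits:  # keep only the search types that found something
            results[name] = hits
    return results

# Stand-in recognizers for two of the search types listed above;
# a real system would run OCR and barcode decoding on pixel data.
def ocr_search(image):
    return image.get("text", [])

def barcode_search(image):
    return image.get("barcodes", [])

searches = {"ocr": ocr_search, "barcode": barcode_search}
image = {"text": ["SEO by the Sea"], "barcodes": []}
print(run_visual_searches(image, searches))
```

The point of the structure is that each search type is independent, which is also what makes running all of them at once expensive – a tension the newer patent filing below seems to address.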
A wearable device, which could be Google Glass, might use a camera to watch an object from a photo being outlined, and that outlining could trigger a visual search based upon the part of the image outlined; the search could be one of the specialized searches that we see from Google Goggles. Ideally, you would take a photograph, then outline an object within the image, and that outlining would be captured by your wearable device.
The patent filing that provides more details on this rumored visual search is:
Object Outlining to Initiate a Visual Search
Inventors: Thad Eugene Starner, Irfan Essa
Methods and devices for initiating a search of an object are disclosed. In one embodiment, a method is disclosed that includes receiving sensor data from a sensor on a wearable computing device and, based on the sensor data, detecting a movement that defines an outline of an area in the sensor data. The method further includes identifying an object that is located in the area and initiating a search on the object. In another embodiment, a server is disclosed that includes an interface configured to receive sensor data from a sensor on a wearable computing device, at least one processor, and data storage comprising instructions executable by the at least one processor to detect, based on the sensor data, a movement that defines an outline of an area in the sensor data, identify an object that is located in the area, and initiate a search on the object.
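The abstract describes a simple pipeline: detect a movement that outlines an area, identify the object in that area, and initiate a search on it. A minimal sketch of that flow might look like the following – the data structures (outline points from sensor data, objects with known centers) and every function name are my own assumptions, not Google's implementation:

```python
# Hypothetical sketch of the flow in the patent abstract: reduce a
# traced outline to an area, identify which known object falls inside
# it, and initiate a search on that object alone.

def bounding_box(outline_points):
    """Reduce a traced outline to an axis-aligned bounding box."""
    xs = [x for x, _ in outline_points]
    ys = [y for _, y in outline_points]
    return min(xs), min(ys), max(xs), max(ys)

def identify_object(objects, box):
    """Return the first object whose center lies inside the outlined area."""
    left, top, right, bottom = box
    for name, (cx, cy) in objects.items():
        if left <= cx <= right and top <= cy <= bottom:
            return name
    return None

def initiate_search(obj_name):
    # Stand-in for handing the identified object off to a search backend.
    return f"searching for: {obj_name}"

# A finger traces a rough loop around the book in the photo; the lamp
# elsewhere in the frame is ignored, so no search is spent on it.
outline = [(10, 10), (60, 12), (58, 80), (12, 78)]
objects = {"book": (35, 45), "lamp": (120, 40)}
box = bounding_box(outline)
print(initiate_search(identify_object(objects, box)))
```

Note how the lamp never reaches `initiate_search` – which matches the efficiency claim in the patent discussed below.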
Thad Starner is, among other things, a technical lead manager for Google Glass and the director of the Contextual Computing Group at Georgia Tech. Irfan Essa teaches at the Georgia Tech Institute for Robotics and Intelligent Machines and does research on video analysis and enhancement for Google.
The patent tells us that an advantage of this approach is that outlining the object can speed up the search, and that “computing power may not be expended to perform searches in which the user is not interested.”
That makes it sound like something of a reaction to the multiple simultaneous searches described in the patent I wrote about in 2011.
I’ve used Google Goggles before, and it can be a little tricky to take a picture of a single object for the app to work on. You may want to search for something you captured as part of a larger photograph, rather than shooting it specifically for Google Goggles. And you may want to save that photo as a photo, which I don’t think you can do with Google Goggles.
This appears to require that someone use a wearable such as Glass (it hints that other wearables could be used) along with a camera from a smartphone to perform visual searches. It’s the first patent I’ve seen from Google that requires a combined use like that – there may have been others, but I haven’t seen them. Given how popular phones with cameras are these days, I don’t think combining multiple devices like this is a problem, and it seems to give Google Glass more uses. I suspect you can outline objects on images that you didn’t photograph as well.