I remember my father building some innovative plastics blow molding machines where he added a central processing control device to the machines so that all adjustable controls could be changed from one place. He would have loved seeing what is going on at Google these days, and the hardware that they are working on developing, which focuses on building controls into textiles and plastics.
Google is best known for search, but it is interesting to see what else the company gets involved in, since its efforts now cover a wider and wider range of things, from self-driving cars to glucose-analyzing contact lenses. I was surprised to see a web page from Levi’s describing Project Jacquard, a joint project between Google and Levi’s.
These details come from an anonymous source who also gave us a few more details on the project. The report states that a new feature will be integrated, allowing users to outline specific areas of an image in order to target their searches more directly.
In Google Goggles, one can only search the whole image, which has proven to introduce plenty of inaccuracies. Images often contain distractions, such as background items and other objects that may throw off a search result. According to the sketch provided, the system will also be able to recommend retailers where products can be purchased, along with other details.
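As a rough illustration of how region-targeted visual search could work, here is a minimal sketch of the cropping step: the user outlines a bounding box, and only the pixels inside it are passed along to the search backend, so background clutter is excluded. The image model and function names here are hypothetical, not anything Google has published:

```python
def crop_region(pixels, box):
    """Return only the pixels inside the user's outline.

    pixels: the image as a list of rows (each row a list of pixel values)
    box: (left, top, right, bottom), with right and bottom exclusive
    """
    left, top, right, bottom = box
    return [row[left:right] for row in pixels[top:bottom]]

# A toy 4x4 "image" where the item of interest occupies the center 2x2;
# a real implementation would use an image library and submit the crop
# to an actual image-search service.
image = [
    [0, 0, 0, 0],
    [0, 5, 6, 0],
    [0, 7, 8, 0],
    [0, 0, 0, 0],
]

region = crop_region(image, (1, 1, 3, 3))
print(region)  # [[5, 6], [7, 8]]
```

The point of the crop is simply that the search engine never sees the surrounding objects, which is what the rumored feature appears to address.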
Furthermore, this technology is also said to have been tested in “wearable computing devices”. This could suggest it may come to products like Google Glass, and possibly even VR (or AR) headsets.
A patent application from Google was published at WIPO today under the name Nanoparticle Phoresis, listing Conrad Andrew Jason as the inventor.
The patent’s description begins by telling us how this wearable device would work:
A wearable device can automatically modify or destroy one or more targets in the blood that have an adverse health effect by transmitting energy into subsurface vasculature proximate to the wearable device. The targets could be any substances or objects that, when present in the blood, or present at a particular concentration or range of concentrations, may affect a medical condition or the health of the person wearing the device. For example, the targets could include enzymes, hormones, proteins, cells or other molecules. Modifying or destroying the targets could include causing any physical or chemical change in the targets such that the ability of the targets to cause the adverse health effect is reduced or eliminated.
In January, Microsoft introduced a new build of Windows 10, which it will give away for free to non-enterprise users running Windows 7 and Windows 8.1. One of the features in this update is a personal digital assistant that goes by the name Cortana.
You’ve likely seen Apple’s personal assistant Siri, which was featured in a number of celebrity-enhanced advertisements, and you may have seen people writing about Google Now, which feeds you cards of information that it predicts you might need or want as that information becomes available. Cortana is Microsoft’s entry into the personal assistant field.
Cortana is supposedly “powered by Bing” and “developed for Windows Phone 8.1”, and it looks like an important feature in Windows 10. I’ve had difficulty pinning down what “powered by Bing” actually means, except that it seems to imply that all of the questions asked of Cortana are answered by the Bing search engine.
The temptation was to write this blog post mostly in pictures, since it’s about visual representations of things, sometimes based on a combination of objects understood through object recognition and virtual semantic images superimposed on them, drawn from a knowledge base.