The patent’s description begins by telling us how this wearable device would work:
A wearable device can automatically modify or destroy one or more targets in the blood that have an adverse health effect by transmitting energy into subsurface vasculature proximate to the wearable device. The targets could be any substances or objects that, when present in the blood, or present at a particular concentration or range of concentrations, may affect a medical condition or the health of the person wearing the device. For example, the targets could include enzymes, hormones, proteins, cells or other molecules. Modifying or destroying the targets could include causing any physical or chemical change in the targets such that the ability of the targets to cause the adverse health effect is reduced or eliminated.
In January, Microsoft introduced a new build of Windows 10, which it will be giving away for free to non-enterprise users running Windows 7 and Windows 8.1. One of the features in this update is a personal digital assistant that goes by the name Cortana.
You’ve likely seen Apple’s personal assistant Siri, which has been featured in a number of celebrity-filled advertisements, and you may have seen people writing about Google Now, which feeds you cards of information that it predicts you might need or want as that information becomes available. Cortana is Microsoft’s entry into the personal assistant field.
Cortana is supposedly “powered by Bing” and “developed for Windows Phone 8.1”, and it looks like an important feature in Windows 10. I’ve had difficulty pinning down what “powered by Bing” actually means, except that it seems to imply that all of the questions asked of Cortana are answered by the Bing search engine.
The temptation was to write this blog post mostly in pictures, since it’s about visual representations of things: sometimes a combination of objects identified through object recognition, with virtual semantic images, drawn from a knowledge base, superimposed on them.
Telepresence is not science fiction. We could have a remote-controlled economy by the twenty-first century if we start planning right now. The technical scope of such a project would be no greater than that of designing a new military aircraft.
A genuine telepresence system requires new ways to sense the various motions of a person’s hands. This means new motors, sensors, and lightweight actuators. Prototypes will be complex, but as designs mature, much of that complexity will move from hardware to easily copied computer software. The first ten years of telepresence research will see the development of basic instruments: geometry, mechanics, sensors, effectors, and control theory and its human interface.
During the second decade we will work to make the instruments rugged, reliable, and natural.
News came out in a Google press release yesterday, Google to Acquire Nest, that Google had purchased Nest, a company focused on connecting things found in your home to the internet, including the Nest Learning Thermostat and the recently released Protect, a Smoke + CO Alarm.
It’s exciting to see Google venturing out into business lines such as the control and security of household items: alarms, thermostats, lighting, and media controls. What does it mean for search and knowledge collection? I don’t think it signals any less interest in running a search engine, but it does show off a growing interest in selling internet-related hardware, an area where Google has been lacking experience, though with devices such as Chromecast and Google Glass, that hardware experience may be really useful in the future.
There’s a lot of press and blog posts circulating around the Web about Google’s multi-billion dollar purchase of Nest, including some speculation that it gives Google a legitimate stance as a seller of hardware.
Yesterday, Google’s CEO Larry Page announced that Andy Rubin would no longer be in charge of the mobile platform Android at Google, but would be moving on to new challenges at the company. In the announcement, Page urged the entrepreneur and inventor to take “more moonshots please.” Andy Rubin brought Android to Google in 2004, but I’ve been wondering since yesterday’s announcement if we would see a different kind of Android delivered by his hands.
Rubin does have a history of enjoying tinkering with robots, and that seems to be an area that Google is quietly focusing upon. Regardless of whether or not the former Android chief is involved, do we need to add robots to the list of science-fiction-type endeavors Google is working upon?
There are rumors that Google will be opening retail stores sometime in the near future (some rumors point to next year). The question arises, though: what will Google feature in those storefronts? Will Chromebooks be a kiosk filler item? Will we see Android-based phones? Are Google Glass wearable eyeglasses still somewhat far off? Might self-driving cars still face changes in state legislation? Google TV might be a possibility. Home entertainment systems running on Android hardware could also be shelf stuffers. Or will Google pull out some surprises for us?
Some recent patent filings from Google provide some possible hints at what we might see in Googleshops (or whatever they might be called) at some point, if Google does indeed open retail shops.