I remember 10th grade keyboarding class in high school, which was a required course for everyone. I’ve typed many more characters than I ever expected in the years since, but I’ve been wondering how much longer people will be typing, or at least typing on a physical device intended just for typing. I’ve tried the “sliding” method of typing on my phone, with limited results. Hunt and peck still seems to work better for me, and I’m making fewer “big finger” errors on my phone’s small virtual keyboard.
The picture above is from the “Bain Collection” at the US Library of Congress Prints and Photographs Reading Room. I’m not sure if it’s really the very first typewriter, but that’s what’s printed on the image, and that’s what the Library of Congress is calling it. The Bain Collection contains images from one of the earliest news picture agencies. While looking through this week’s pending patent filings, I came across a Google filing for a touch screen keyboard. Are we seeing the last days of physical keyboards approaching?
In the first part of this series, Google Glass Hardware Patents, Part 1, I looked at five patent filings about Project Glass that Google published over the past couple of weeks. Those included (1) a closer look at the optical systems that Google Glass might use, (2) how a bone conduction system might provide audio to wearers, (3) enhancing a person’s vision in real time to do things like zoom in on objects that might be hard to see, (4) using input from other devices such as a phone or laptop to run Google Glass, and (5) a patent filing about speech input for commands and queries that you can run on your glasses.
In a comment on one of my earlier posts about Project Glass, someone joked about running up to a person wearing the device and triggering a search by voice command on their glasses. The last patent filing I mentioned above told us that the glasses would ignore commands from others. It’s good to see that someone at Google anticipated that potential problem. I’m surprised at how thorough the patent filings about Project Glass have been. I’m also impressed by the volume of patent filings that have been published by Google for these heads-up displays. The Google Glass Foundry workshops for developers started yesterday – I’m guessing that the developers who participated probably had a lot to see and discuss.
Google’s Project Glass has the potential to bring something completely new to consumers – wearable computing devices that could revolutionize how consumers interact with the Web. The augmented reality glasses aren’t yet available to consumers, and are a work in progress. Google is holding its first developer workshops this week, offering developers the chance to use the devices for the first time, so that they can start coming up with applications for use on the devices.
Project Glass is heavily visually oriented, and many of the demonstrations of the device show off wearers’ ability to take both photographs and video while wearing the glasses. Chances are good that we’ll see the different visual queries that Google Goggles offers, including object and facial recognition, barcode search, and searches for landmarks, books, and other types of things as well.
A couple of weeks ago, a federal bankruptcy judge approved the sale of Kodak’s patent portfolio to a group of companies that joined together to buy them at a discounted price. The group included Apple, Google, Facebook, and others. There were more than 1,000 patents involved, related to photography, storing photos, and sharing photos.
It makes sense for Google to have been interested in those patents, considering its involvement in smartphones with cameras, and its work on Google Glass, where taking pictures and recording video will likely be one of the device’s strengths.
Google’s patents have provided a great number of hints over the past 10 years about local search and how Google treats businesses and landmarks in Maps and Web results and elsewhere. I’ve been fortunate enough to have uncovered some of these patents and written about many of the algorithms and approaches that Google has used, including concepts like location prominence, location sensitivity, Maps in Universal Search, Google’s Crowdsensus Algorithm, and more.
I am going to be the keynote speaker at Local U Advanced, Baltimore, starting Friday night, March 8 at 7:00 pm through Saturday at 5:00 pm on March 9 (There’s an early bird discount of $100 if you sign up before Feb. 8th). This Local University presentation will be taking place in Hunt Valley, MD. There’s an amazing group of speakers lined up for the event, covering local, mobile, and social aspects of local search.
Will Google Plus show advertisements one day? If it does, how will Google decide upon the ads to show different users of the social network? A 2010 paper, AdHeat: An Influence-based Diffusion Model for Propagating Hints to Match Ads (PDF), described one method of advertising on a social network that was actually tested on Google’s worldwide (outside the US) set of Q&A-type sites, code-named Confucius. It also incorporated the Confucius User Rank into displaying those ads. That user rank approach to reputation scoring for Confucius, and to choosing advertisements for users of the system, appears to be a method that could work well for deciding upon reputation scores for users at Google Plus.
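The core idea of AdHeat, as described in the paper, is that ad hints spread outward from influential users to their neighbors, like heat diffusing through a graph. Here is a minimal sketch of that kind of influence diffusion; the function names, the retention parameter, and the equal-share propagation rule are my own illustrative assumptions, not the paper’s or Google’s actual implementation.

```python
# Hypothetical sketch of influence-based "heat" diffusion over a social
# graph, loosely in the spirit of the AdHeat model described above.
# All names and parameters here are illustrative assumptions.

def diffuse_influence(graph, seed_scores, alpha=0.5, steps=3):
    """Spread 'heat' (influence) from seed users across a graph.

    graph: dict mapping user -> list of neighbor users
    seed_scores: dict mapping user -> initial influence (e.g. a user rank)
    alpha: fraction of a user's heat retained each step; the rest
           flows in equal shares to that user's neighbors
    """
    heat = {user: seed_scores.get(user, 0.0) for user in graph}
    for _ in range(steps):
        nxt = {}
        for user, neighbors in graph.items():
            retained = alpha * heat[user]
            # Receive each neighbor's diffused heat, split evenly
            # among that neighbor's own connections.
            received = sum(
                (1 - alpha) * heat[n] / len(graph[n]) for n in neighbors
            )
            nxt[user] = retained + received
        heat = nxt
    return heat

# A tiny example: influence seeded at user "a" flows toward "b" and "c".
graph = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
scores = diffuse_influence(graph, {"a": 1.0})
```

In a matching step, ads associated with a topic could then be shown to users whose diffused score exceeds some threshold, rather than only to the original influential users.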
A Google patent granted in early December 2012 provides a different approach for showing advertisements and other content items to users of a social network like Google Plus. The patent makes it clear that while the approach described within it might be used for advertisements, it might also be used to show other content as well.
On Friday afternoon, I took a walk to the auto repair shop working on my car, about a mile and a half down the road. A phone alert made me aware of a Google Now card springing up to give me directions to the shop, and telling me that it would take me less than a minute to get there. I guess Google Now wasn’t looking at the accelerometer on my phone, or it would have realized that I was moving too slowly to be driving. I couldn’t help but think, though, how Google Now could be a feature that would work well in the heads-up display that Google’s working on under the name Google Glass.
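The anecdote above suggests the kind of travel-mode check Google Now apparently skipped: a mile and a half at walking pace is closer to half an hour than to one minute. Here is a toy illustration of inferring walking versus driving from an observed speed so an ETA uses the right profile; the thresholds, names, and logic are my own guesses for illustration, not Google Now’s actual behavior.

```python
# Toy travel-mode inference from a smoothed speed reading.
# Thresholds are illustrative assumptions, not Google Now's logic.

WALKING_MAX_MPS = 3.0   # ~6.7 mph; brisk walking tops out around here
AVG_WALK_MPS = 1.4      # typical walking pace, ~3.1 mph

def infer_mode(speed_mps):
    """Guess travel mode from speed in meters per second."""
    return "walking" if speed_mps <= WALKING_MAX_MPS else "driving"

def eta_minutes(distance_m, speed_mps):
    """Estimate minutes to a destination at the observed speed."""
    return distance_m / max(speed_mps, 0.1) / 60.0

# A mile and a half on foot at an average walking pace:
distance = 1.5 * 1609.34                  # meters in 1.5 miles
mode = infer_mode(AVG_WALK_MPS)           # "walking"
minutes = eta_minutes(distance, AVG_WALK_MPS)  # roughly half an hour
```

At walking speed the estimate comes out near 29 minutes, which is why a one-minute driving ETA for someone on foot is such a giveaway that the sensor data wasn’t consulted.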
As we wait to see what kinds of features might be incorporated into Google Glass, it appears that Google acquired a patent first filed a dozen years ago, granted in 2006, and recorded at the USPTO on Thursday. The patent was originally filed by Agilent Technologies, transferred to a company in Singapore in 2006, and then to Intellectual Discovery Co., located in South Korea. Google was assigned the patent on November 16, 2012, and the transaction was recorded at the USPTO on January 8, 2013.
When we talk about indexing and crawling content on the Web, it’s usually within the context of pages being ranked in response to queries on the basis of a number of signals found on those pages. Google has told us that the future of search involves Knowledge Bases, and the indexing of Things, Not Strings. Gianluca Fiorelli explored Google’s ideas of Search in the Knowledge Graph Era earlier this week.
A few years back, I wrote some posts about Google patents that explored how Google might be extracting and visualizing facts, and using Data Janitors to process, clean up, and sort that information. Google was granted another patent this week that’s very much related, looking at how Google might understand locations for places collected from Web pages. One of the inventors, Andrew Hogue, gave this Google Tech Talk presentation last year: