Googlebot Doesn’t Read Text in Images During Web Crawls. Or Does It?
When I was an administrator at Cre8asiteforums (2002-2007), one of my favorite forums on the site was the Website Hospital, where people would bring their sites and ask how they could improve them. One problem that often appeared was sites that couldn’t be found in search results for geographically related queries. A common symptom was that the only place the business’s address appeared on the site was in pictures of text rather than actual text. That can be a problem when it comes to Google indexing that information: Google tells us it likes text, and it can have trouble indexing content found within images.
Google’s web crawler couldn’t read pictures of text, so Google wasn’t indexing that location information for those sites. Site owners were often happy to find out that they just needed to include the address of their business as text, so that Google could crawl and index that information, making it more likely that they could be found for their location.
Another place where people sometimes use images of text instead of actual text is in navigation links on their pages. Since Googlebot can’t read the text in those navigation links, those pages sometimes don’t have sitelinks appear for them in search results. Google doesn’t use the alt text that might be associated with those images to generate sitelinks for a site.
The last patent I’ve seen about sitelinks was How Google May Choose Sitelinks in Search Results Based upon Visual or Functional Significance (Updated). I noted in that post that the patent says Google might use OCR to read text in images, but that I had checked many sites and wasn’t seeing Google do that. Here are the sitelinks that show up on a search for “SEO by the Sea”:
I had some hope over the years that Google might get better at indexing text that appeared within images, watching things like the following happen:
(1) Google acquired the facial- and object-recognition company Neven Vision in 2006, along with a few other companies working on image recognition.
(2) In 2007, Google was granted a patent that used OCR (optical character recognition) to check the postal addresses on business listings and verify those businesses in Google Maps.
(3) In 2012, Google was granted a similar patent that read signs on buildings in Street View images.
(4) In 2011, Google published a patent application that used a range of recognition features (object, facial, barcodes, landmarks, text, products, named entities) focused on searching for and understanding visual queries. It looks like that application may have turned into Google Goggles, which came out in September of 2010; the visual queries patent was filed by Google in August 2010, and the nearness in time between the filing of the patent and the introduction of Google Goggles reinforces the idea that they are related.
But Googlebot still doesn’t seem to be able to read text in images, whether to index addresses or to read images of text used in navigation. I added the text “Google Test” to the following image and then ran it through a reverse image search at Google. The images returned were similar looking, but none of them had anything to do with the text I added.
We know now that Google has been working on query-by-image search and has offered reverse image search since the summer of 2011. Here’s a flow chart from that patent:
The Future of Search is in Visual and Spoken Queries
So I’ve been asking myself when Google might start reading and indexing the text found in images and in navigation. Google has taken some other interesting steps involving visual queries and image recognition, and it appears they have some competition.
A couple of months ago, I read a Fast Company article, Inside Baidu’s Plan To Beat Google By Taking Search Out of the Text Era, that shows how important it might be for Google to get better at indexing and retrieving images. It got me thinking about how Google was doing in searches for images.
I found the following patents and thought they were worth sharing:
Method and apparatus for automatically annotating images – This one searches for similar images and, when it finds them, may use text associated with those similar images to create an annotation for the image originally searched upon.
Clustering Queries For Image Search – An image search may be performed to find similar images; the results may be grouped into clusters based upon visual and semantic similarity, and each cluster may be associated with search terms to use as an annotation.
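As a rough illustration of the idea behind those two patents (not Google’s actual implementation), here is a toy Python sketch: images are reduced to a simple average-hash fingerprint, visually close matches are found by Hamming distance, and the text associated with those matches becomes candidate annotation terms. The index, pixel grids, and distance threshold below are all invented for the example.

```python
# Toy sketch of annotating an image by borrowing text from visually
# similar images. Images are simplified to small grayscale pixel grids.

def average_hash(pixels):
    """Hash an image as bits: 1 where a pixel is above the image's mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

def hamming(h1, h2):
    """Count differing bits between two hashes of equal length."""
    return sum(a != b for a, b in zip(h1, h2))

def annotate(query_pixels, indexed_images, max_distance=2):
    """Collect annotation terms from indexed images with nearby hashes."""
    query_hash = average_hash(query_pixels)
    terms = []
    for pixels, text in indexed_images:
        if hamming(query_hash, average_hash(pixels)) <= max_distance:
            terms.extend(text.split())
    return sorted(set(terms))

# Hypothetical index of (pixels, associated text) pairs.
index = [
    ([[0, 0], [255, 255]], "sunset beach"),
    ([[0, 10], [250, 240]], "beach naples florida"),
    ([[255, 255], [0, 0]], "mountain snow"),
]

print(annotate([[5, 5], [245, 250]], index))
# → ['beach', 'florida', 'naples', 'sunset']
```

A real system would use far more robust visual features than an average hash, but the flow is the same: similar images in, shared text out as annotations.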
Even more impressive was this whitepaper from Google, which showed gains they had never quite reached before in recognizing faces and other high-level features:
Building High-level Features Using Large Scale Unsupervised Learning (pdf)
I am hoping for some changes at Google after seeing a patent that says Google is aiming to perform searches on images of documents and return matching results. The text in the queried image goes through OCR (optical character recognition), and the recognized words are then searched to find matching documents on the Web, including other images of documents. That would mean Google would start indexing images of text on the Web.
If it does that, Google might also start treating images of addresses as the locations of the businesses they appear for, just as it does with addresses in text. It might also start understanding text in images used in navigation, and creating sitelinks where it wouldn’t before.
The patent is:
Identifying matching canonical documents in response to a visual query
Invented by: David Petrou, Ashok C. Popat, and Matthew R. Casey
Assigned to: Google
US Patent 9,183,224
Granted November 10, 2015
Filed: August 6, 2010
A server system receives a visual query from a client system. The visual query is an image containing text such as a picture of a document.
At the receiving server or another server, optical character recognition (OCR) is performed on the visual query to produce text recognition data representing textual characters. Each character in a contiguous region of the visual query is individually scored according to its quality.
The quality score of a respective character is influenced by the quality scores of neighboring or nearby characters.
Using the scores, one or more high-quality strings of characters are identified. Each high-quality string has a plurality of high-quality characters. A canonical document containing one or more high-quality textual strings is retrieved. At least a portion of the canonical document is sent to the client system.
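The scoring steps described above might look something like the following Python sketch. It is speculative: the patent doesn’t publish its formulas, so the neighbor-smoothing weight, the quality threshold, and the per-character OCR confidence values below are all invented for illustration.

```python
# Speculative sketch of the patent's scoring idea: each OCR'd character
# gets a quality score, neighboring scores influence each other, and only
# runs of high-quality characters are kept as strings for matching.

def smooth_scores(scores, weight=0.25):
    """Blend each character's score with its neighbors' scores."""
    smoothed = []
    for i, s in enumerate(scores):
        left = scores[i - 1] if i > 0 else s
        right = scores[i + 1] if i < len(scores) - 1 else s
        smoothed.append((1 - 2 * weight) * s + weight * (left + right))
    return smoothed

def high_quality_strings(chars, scores, threshold=0.6, min_len=3):
    """Split the character stream into runs of above-threshold characters."""
    smoothed = smooth_scores(scores)
    strings, current = [], []
    for ch, s in zip(chars, smoothed):
        if s >= threshold and not ch.isspace():
            current.append(ch)
        else:
            if len(current) >= min_len:
                strings.append("".join(current))
            current = []
    if len(current) >= min_len:
        strings.append("".join(current))
    return strings

# Invented OCR output: "canonical" read cleanly, "d0c" read badly.
chars = list("canonical d0c")
scores = [0.9, 0.9, 0.8, 0.9, 0.9, 0.8, 0.9, 0.9, 0.9, 0.9, 0.8, 0.1, 0.7]
print(high_quality_strings(chars, scores))
# → ['canonical']
```

The neighbor smoothing captures the patent’s point that a character’s quality score is influenced by the scores of nearby characters; a single garbled character drags its neighbors down and breaks the string.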
The claims section of this patent focuses primarily upon matching text from an image of a document with text in pictures of that document across the Web. The description section provides a broader reading, where a document might also contain images of objects, people’s faces, entities, and other things that the system may try to match up between a visual query and a document on the Web.
The description of this visual queries patent shows a search system that might contain a lot of different visual recognition approaches, like the one I mentioned above that was possibly used for Google Goggles.
This patent does tell us how it might use named entity recognition as part of its process:
In some embodiments, named entity recognition occurs as a post-process of the OCR search system, wherein the text result of the OCR is analyzed for famous people, locations, objects and the like, and then the terms identified as being named entities are searched in the term query server system (118, FIG. 1). In other embodiments, images of famous landmarks, logos, people, album covers, trademarks, etc. are recognized by an image-to-terms search system. In other embodiments, a distinct named entity query-by-image process separate from the image-to-terms search system is utilized.
The object-or-object category recognition system recognizes generic result types like “car.” In some embodiments, this system also recognizes product brands, particular product models, and the like, and provides more specific descriptions, like “Porsche.” Some of the search systems could be special user-specific search systems. For example, particular versions of color recognition and facial recognition could be a special search system used by the blind.
Matching a visual query with a named entity could mean that search results are returned to searchers more quickly: once an entity is identified, it can be associated with an identifier, and other documents containing the same named entity will already have been marked with that identifier. See: Google Gets Smarter With Named Entities: Acquires MetaWeb.
I’m hoping that Google solves the “text as images” problem for addresses and site navigation. They are problems that have hurt a lot of sites.
78 thoughts on “Will Google Start Reading Text in Images on the Web Soon?”
Hi Bill, another interesting article. I have had similar thoughts on the matter over the years. I would suspect that this cannot be far off now. Facebook has some amazing facial recognition algorithms in place.
Bill, if you are saying that Google can read and index the text in an image, connect it back to the site it came from, read the NAP there, and thus not need the NAP schema… then yes, in one argument I could say this could be a good thing. But here’s my concern: say that while blogging I shared a photo of a family member holding a beach towel that says Naples Florida, or I take a family photo and add the text Naples Florida through PhotoGrid. I see that as potentially troublesome. Now my photo is not only tied to my website, but also to anyone searching for (in this example) Naples or Naples Florida. Remember, I only expected to share my photo with my audience, but now…
It’s great to come across a blog every once in a while that isn’t the same out-of-date rehashed material. Fantastic read! I’ve saved your site and I’m adding your RSS feeds to my website.
@Dr. Robert – I think that’s a nice hint. I often wear shirts which say “Munich” but I am far, far away.
I think it’s not so important to make Google read text in images – just don’t put important text in images. Easy to say, but why would anyone use an address image instead of text?
So Bill, please help 🙂 I don’t see a lot of positive ranking changes if it happens, but I do see chances for lower-quality SERPs.
I don’t see much in the way of ranking changes for people who have vacation pictures taken of them while wearing t-shirts that mention the name of a city; but for sites that have their business address in an image, it could potentially make a difference. For sites that use images of text in navigation and don’t have sitelinks, it could possibly make a difference as well. The focus of the patent is to help searchers find visually similar documents, and any time searchers win out, I consider that a good thing. I suspect Google will perform tests to see whether the quality of search results diminishes after they start indexing text in images. I do think it is something they will do, especially when it might help them capture appearances of named entities on the Web that they might otherwise have missed.
Thank you. Happy to hear you enjoyed my post.
I am not saying that Google can read and index text in images today. I believe they have the capability, and now they have a granted patent that describes how they might. I am not saying that they would use this to replace NAP (name, address, telephone) consistency across the web. Using consistent data is an important part of how local search ranks content, since it is based upon a semantic approach to indexing content on the Web (which is what Google Maps strives to do). I doubt that a photo of a family member holding a beach towel on your website would endanger the ranking of your site in Google search results, much as a text blog post about a vacation to Naples, Florida likely wouldn’t have Google thinking that you relocated your business to Naples. Yes, we do need to start thinking a little differently about the content contained in photos, and how that content might be indexed. We’ve been moving in that direction for a long time; see the patent I linked to in the post about Google describing how it would annotate images with associated text from similar images found on the Web. Thanks for asking some interesting questions!
When you read about how Google can use deep learning approaches to caption images (and caption them well), as Google described in one of their blogs here: http://googleresearch.blogspot.com/2014/11/a-picture-is-worth-thousand-coherent.html you start wondering how much of this technology they might be using while indexing content on the Web. They aren’t using all of it, but are they using some? They’ve had the ability to OCR text in images for years, but it’s possible that it was cost prohibitive or took too long. When we see apps like Google Goggles that can do facial recognition, object recognition, and landmark recognition, and can turn pictures of text into actual text, you begin to wonder when that might be transferred over to the Web. This patent is a sign to me that they have a real interest in indexing content that appears on the Web as text within images.
Is there a robots.txt directive to prevent Google from crawling images on your pages/site?
Hi Bill, have you tinkered with Google Drive’s OCR capabilities? Pretty reliable results and insights into how Google could potentially churn Googlebot’s digestions on the backend.
Thank you. I hadn’t tried Google Drive’s OCR feature and didn’t know about it until you shared that. It’s good to know about if I need it in the future.
If you put all of the images on your site into the same directory, and then use robots.txt to disallow crawling of that specific directory, that is something you could do to keep Google from indexing those images. Likewise, if you have PDF versions of pages from your site (print versions) and you don’t want Google to index them because you are concerned they might be considered duplicates, you could place them in the same directory and disallow that directory from being crawled.
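As a concrete sketch of that approach (the directory names here are just examples), the robots.txt at the root of the site might look like:

```
User-agent: Googlebot
Disallow: /site-images/
Disallow: /print-pdfs/
```

Note that Disallow blocks crawling rather than removing files already in the index, and a blocked URL can still show up in results if other pages link to it.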
Thanks Bill – I was actually researching earlier which company that Google had purchased could be behind all this, to see if there were any hidden gems within their patents outside of Google’s own. We all know Google had their own version, but somewhere along the way they acquired a missing ingredient here and there. Maybe you or someone can shed some additional insight…
That is why I mentioned NevenVision in the post. I did write a post introducing their patents – https://www.seobythesea.com/2006/08/google-acquires-neven-vision-adding-object-and-facial-recognition-mobile-technology/. Google also acquired some patents from Xerox, including some using OCR technology: https://www.seobythesea.com/2012/02/xerox-helps-google-fill-in-some-search-gaps-from-pre-web-to-post-panda/ I didn’t look through those to try to pinpoint one specific patent because I’m not sure there was one; besides, the visual query patent I introduced with this post is relatively new, having been granted less than two weeks ago on November 10, 2015.
OK, the t-shirts wouldn’t be a problem (of course not), but I see a lot of sites where “partners” are linked with images. Why? Because the address is safe in that image, for now. I am not sure it is worth the work to read the images and decide whether it makes sense to use the text or not.
>Capture appearances of named entities on the Web that they might have otherwise missed< makes sense .. thx
Good read. I have been trying to read more about how Google crawls images on websites, but I don’t think they have come up with an algorithm for reading images yet. Still, images can be used in creative ways, and good designs with relevant text will help websites gain more traffic, just like on social media (a tweet with an image receives more engagement). Websites do work more on quality content and long-tail keywords to gain traffic; image optimization will be an added advantage.
I agree Bill, I highly doubt holiday snaps or unrelated images would have a big SEO effect and you’re right, we are in the “content age” and people should be focused on all facets of their content strategy including images.
In regards to the holiday snaps, it got me thinking…
I wonder if one day there will be some form of relevancy factors: size of the text within the image, position of the text in the image… the list could go on.
I can see this being a good thing for so many industries, Thanks for the post!
It really does help to focus upon all aspects of the content on your pages, as you say. I tell people, when doing audits of their sites, to choose and use meaningful images that are relevant to what they do. For instance, if the page is about using limo services to drive kids to the prom, show some high-school-aged kids dressed up for prom getting into a limousine. The alt text could describe that, as could a caption, and both would be very relevant to the topic of the page. There are other aspects of an image that might influence how well it ranks in an image search, and how much it helps the page it appears upon rank for what it is about. There were a couple of Microsoft patents I wrote about in the past that described how images might affect rankings of the pages they are on, and how aspects of the images might affect their own rankings, and I often keep these in mind when I optimize pages on the Web:
How Do Images Get Ranked In Image Search?
How Search Engines May Use Images To Rank Web Pages
It is worth spending time reading through all of the help pages you can find from Google or Bing, including their many blog pages. Sometimes those hold information that doesn’t get circulated very well to the search community. That same problem influenced me to spend time looking through all the patents I look at – to try to understand better how search works. Images can help readers respond better to content, and the old adage about a picture being worth a thousand words is often true – the right image can help people understand much more than throwing a wall of text at them. I hope you follow the links I included in my response to Lauren, which describe a couple of patents from Microsoft, many of the factors they might use in ranking images, and the impact of images upon the pages they appear on. I agree that image optimization can be an added advantage.
Yes, I was very excited to see the patent mention looking for information and images about named entities. I do believe that Google and Bing both realize how much knowing about the specific people, places, and things within their indexes can help them give searchers what they are looking for. And yes, it seems like that aspect is showing up in patents from the search engines more and more frequently.
If Google starts reading text in images, that adds a whole new dimension to SEO. I think they should stay away. Stupid web designers will start going back to cheesy, nonfunctional websites full of images.
However logical this may be for image search, there is a fair amount of abuse in keyword stuffing the ALT image tag, and with this you will now also see the visual element of images abused for questionable SEO purposes.
Having lots of images on a site isn’t necessarily a bad thing. Having people post images on a site where they think visitors or search engines might be able to read text in those images can be a bad thing, since the search engines won’t read that text. Having the search engine have the capability to search for and find pictures of documents that contain the same text as something they are searching for means better search capabilities for all of us, which I think is a good thing. I don’t think that this will lead to the Web getting worse.
Keyword stuffing in places like alt tags isn’t a good idea at all, but I think one of the reasons that people try it, and try to get away with it, is that it isn’t visible, to viewers of a page. I hope that people won’t try to do keyword stuffing with lots of words in text in images, because it might look bad, and be a bad experience for visitors. I don’t think it will lead to people abusing their pages like that. It wouldn’t be good SEO if it did.
Hi Bill, I agree with you on that one, it would be to the detriment of the user experience.
Yes, someone attempting to spam text in images wouldn’t be doing themselves any favors.
If the Google crawler starts reading text in images, then all the SEO people will do image marketing.
Thanks for the nice article, something new to hear.
It’s a great post, really worthwhile for all of us. Looking forward to more knowledgeable articles like this.
I’m curious to find out what blog system you are utilizing? I’m having some minor security problems with my latest website and I would like to find something safer. Do you have any suggestions?
I am using WordPress right now. Honestly, I don’t like talking about the security I use on my site in the comments on my site. There are people who specialize in web site security, and talking to one of them may be one of the safest things you could do, with some of the best results.
Hi JP Admark,
Thank you. I’ll be working on some more.
I’m not sure that too many changes would be necessary. At this point if you don’t try to depend too much upon text within images, and use images that add meaning to the pages they appear upon, your results are probably going to be most effective. We don’t know when Google might start using optical character recognition on text in images for visual queries. It’s safest to not rely upon Google using that technique until you might be certain that they are (which makes it something to test over and over again).
I just stumbled over your page and have spent about the last two hours going over your articles…great stuff!
This one in particular is timely for me, as I’ve been working on local SEO a good bit (which is somewhat new for me), and although I’m careful not to keyword-stuff articles or alt tags, I put in a good bit of effort geo-tagging my customers’ photos. Along with geo-tagging, I also add as much Exif info to the properties of the images as possible, as I think (hope?) either Google looks at that info now (beyond name/lat/long) or they will in the near future. The thought that they may actually pull text from the image itself is an interesting progression of this.
Great site…I’ll be back on a regular basis!
Thank you. Happy to hear that you like my articles. There seems to be some indication that Google may be looking at Exif data from photos in some instances, such as at sites like Flickr, to learn more about the photos people are taking, so that they can do things like recommend photography spots to people touring places using Google Now. There’s no real sign that they are using that data for ranking content on websites, but it’s possible they might look at it. For more on that, see: https://www.seobythesea.com/2015/05/google-patents-how-it-may-make-recommendations-for-spots-to-take-photos/ If Google is using that data that way, it may be using it in other ways as well. Being careful not to keyword-stuff text associated with your images is a good idea.
With good timing, Google has just released the Cloud Vision API, and the conversation has jumped forward a decade!
It’s great to see Google make this API available to app developers, but the capabilities available through it, such as facial recognition, object recognition, and landmark recognition, have been available through applications like Google Goggles for a few years. It’s hard to say that this is a sign that Google is making Googlebot more capable of reading text in images and indexing it on the Web. That they are sharing this tool is a good sign, though, and it may lead to more analysis of images in a way that helps train the systems involved. We’ll see where it leads; hopefully to smarter approaches to indexing non-textual content on the Web. Thanks for posting the link to the release of the API.
Such a nice and knowledgeable article. Thanks for sharing such kind information with us.
Very knowledgeable article. Thanks for sharing such kind information with us.
It’s definitely something many of us have been wondering why it hasn’t happened already. Services like Evernote already can read text in images. Granted, they don’t have the volume of images that Google works with, but I’m sure Google will find a way.
Thanks. Google already does a lot with webpages, like making available cached copies of those for surfers. OCR can take place on those copies of pages rather than at the time of crawling sites, which possibly would save Google some time and resources. That is a good question, “why hasn’t this happened yet?” I guess we wait and see.
Google has been working upon solutions to problems that seem to scale well; I don’t quite get why they haven’t been handling this issue yet.
That’s a wonderful article, rich of information. I was confused about crawling of images. Thanks for sharing this information with us.
I absolutely love your blog and find a lot of your posts to be exactly what I’m looking for. Do you offer guest writers the chance to write content for you personally? I wouldn’t mind creating a post or elaborating on a lot of the subjects you write about here. Again, awesome weblog!
Thank you for your kind words. I don’t use guest bloggers, because I look at blogging as my chance to learn and educate myself on different subjects. Trying to put what I learn into words that others can understand is often how I learn best.
If this is implemented properly, it will help improve search traffic to blogs and other websites. The facial recognition in place at Facebook should be evaluated to support this development.
This is a very helpful post. Very informative.
Good to hear that you found this post helpful.
Exactly – Googlebot doesn’t read pictures of text during web crawls. I think we need to put in image alt tags for this. I do not have much idea, but I just read that somewhere.
Anyways nice & helpful post!!
Great article! I’ve been suspecting this is the case for some time now. Can’t wait to see what’s next for Google. 😉
Since Google has an app for reading text in images, will this affect SEO/website rankings? Please let me know.
Wonderful article and very informative. The idea of how images are crawled by the bot was not clear to me; after reading this article, I am very clear about it. Thank you for sharing.
Thank you. Happy to hear that my post was helpful.
I don’t think that the fact that Google has an app for reading text in images will affect SEO ranking for web pages, but it does show that Google is capable of reading that kind of content.
Thanks for the post. It will definitely be a revolution if Google implements that. Optimizing images in my blog, hoping for the change.
It’s a wonderful article about Google image search. Yesterday I was discussing the image search algorithm and the future of image search with one of my friends. If Google can successfully implement the whole process, it would be a revolutionary change for users. My friend was looking for a strange product we had seen for the first time in a restaurant; it wasn’t actually strange, but since we did not know what it was, it seemed very strange to us. We took a picture and started talking about image search, trying to predict Google’s next steps. It was fun, but after reading this today, the concept has become clear to me. Thank you for your great, informative article. Keep posting, and we will keep learning from you.
Google has had the Google Goggles app available for a while, which possibly could have identified the product you describe. It would be great to see that capability show up for photos on the Web as well; I think it would be a tremendous change. I guess we need to wait to see if and when Google starts reading and indexing text found in images on web pages. It will be interesting to see them start.
Seems like a great opportunity for many SEOs; this could also really help photography sites.
Well, this is something interesting, and as bloggers we must pay attention to image optimization.
Thanks for the post! I will research it now!
Reading text in images would be great to see. Google might roll it out soon.
It will be interesting if Google does start reading text found in images on the Web, and that could cause some fluctuations in rankings for pages on the Web if it starts happening. We will see if it does happen soon.
Hi Bill, a very informative post. Thanks for sharing this awesome post with us…
By the way, can you please tell me how I can use images to get better rankings in Google or any other search engine?
It’s a wonderful article about the crawling and indexing of images by Googlebot. Before reading this article I was a little confused, but now I am clear about it.
Thanks for sharing such kind information.
Yes, I totally agree with you. It’s really better to follow a step-by-step process to get more productive results. Images too play a crucial role in SEO; proper attention to them brings fruitful results. Thanks for a quality post.
Bruce Clay did some testing on this a while back – and found Google did have some rudimentary ‘text in image’ recognition ability…
Besides accessibility for visually impaired users, I thought alt text helped search engines determine the content found with images. Isn’t that part of the point in entering alt text for your images when building any website?
That’s what I always thought. Plus, it will be tough to read text in images, because there are so many variables: fonts, viewing angles, low contrast, text size within the image, resolution, pixel density, and I am sure there are tons of others.
Alt text is intended to help visually impaired people understand what might be in an image, and search engines can use it to understand what a picture is about, to index the picture for image search, and to rank the page for what the image might be about.
Optical character recognition, used to understand the text found in an image, should be able to handle all of those different aspects of text within an image. It is likely more computationally expensive than just indexing text found on pages, though.
I’m new to visiting your blog, and just learning SEO.
Does the image title have an impact on SEO? From what I have read, they only read alt tags (for now, I guess).
The alt text, the file name, a caption, and other text on the same page as an image (such as the title of the page) may all play a role in how the image is indexed by a search engine. A title attribute for an image results in a tooltip and provides additional information for a visitor to a page, but doesn’t necessarily provide text that helps a search engine better understand what an image is about. Other text associated with the image could possibly result in the page the image is on ranking better for terms within the alt text, a caption, or similar text.
So search engines are reading more than alt text, but there has been some talk from people like John Mueller of Google that Google doesn’t use title attributes to index content associated with an image – that has been known for a while and hasn’t changed.
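To make that distinction concrete, here is an illustrative markup sketch (the filename and text are invented) showing the signals that do tend to get read – a descriptive filename, alt text, and a visible caption – alongside the title attribute, which mainly produces a tooltip:

```html
<figure>
  <!-- Descriptive filename and alt text: both can be indexed -->
  <img src="/images/prom-limo-service.jpg"
       alt="High school students dressed for prom getting into a limousine"
       title="Prom night limo service">  <!-- title: tooltip only -->
  <!-- A visible caption is ordinary page text, crawled like any other -->
  <figcaption>Students heading to prom in one of our limousines.</figcaption>
</figure>
```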
For now, Google only uses the text it finds in an image’s alt attribute, or text that is close to the image on the page.
Great article Bill, very informative, quality content! Kudos!
Thank you, Taylor!
Nice one. From the look of things, I think Google is already headed in that direction. With Google Translate, you can capture an image with text, and Google will automatically read the text and translate it to your preferred language. If this is effectively implemented in every area, it would make a lot of sense, ease SEO stress, and make Google stand out even more.
Google has been experimenting with deep learning algorithms for quite some time. You should really expect it to read anything and everything it can on a website, including everything that is shown in images. Google has already demonstrated a mobile app that recognizes birds by species(!) in still images.
Google has had the ability to use optical character recognition on text in images; they may be capable of reading that text, but might not do so because of the computational cost. Deep learning may also make it easier for Google to read text in images, but again, it might be too expensive in terms of time and computational cost.
You’re right, I hadn’t considered the computational cost for this process – I was only thinking about whether it was possible or not. Silly me, should’ve thought that through 🙂
I’m a bit lost… for years now, we web designers have been moving away from images wherever possible, to minimize load times, improve cross-browser compatibility, and avoid what was once considered a spammy technique (text-based images).
It feels a bit like a backwards step… sure, the technology has been there for years (think Captcha readers, etc.), but I just can’t see a positive reason for it. Not for web design, anyway.
Why not regulate img title tags, alt tags and even filenames more?
Great article though, thanks for posting 🙂