“SEO is Dead” is something that you may have seen grace the headlines of a blog post or news article in the past few years.
Some have pronounced SEO dethroned by Social Media Optimization (or Social Media Marketing), or claimed that Personalized Search, or Google Instant, or Universal Search, or Google Caffeine, or some other search update has changed search so much that SEO no longer has value.
I responded to one of those “SEO is Dead” posts earlier this year with a post about Good SEO. The author I was responding to questioned whether creating great content, using standards-based HTML, and sharing that content with friends should be sufficient for your site to rank well in search engines without SEO.
SEO isn’t dead, but it is constantly evolving as are searchers and search engines and the Web itself.
I’ve written many posts in 2010 that describe patents and whitepapers which hint at some of the ways that SEO has changed, and thought it might be worth spending some time revisiting some of the things that I’ve seen and written about.
Of course, there are many other things about SEO that have changed that I haven’t written about over the past year, but I’m going to use this post to start pointing out some things I’ve been thinking about.
This first post focuses upon a couple of concepts described in Google patents granted in 2010 – the Reasonable Surfer, and Semantic Closeness. There will be sequels covering other ways that SEO has been changing.
Links and the Reasonable Surfer
The early PageRank papers described how links from one site to another might influence the rankings of pages being pointed towards. In the simple version presented in those papers, there was a presumption that every link pointing out from a page carried the same amount of weight, or PageRank, to the pages being pointed towards.
Hints from representatives at the search engines have been telling us for at least a couple of years that this may not be so. In a 2008 interview with Yahoo’s Priyank Garg, we were told that not all links carry the same weight:
The irrelevant links at the bottom of a page, which will not be as valuable for a user, don’t add to the quality of the user experience, so we don’t account for those in our ranking. All of those links might still be useful for crawl discovery, but they won’t support the ranking.
In a 2009 blog post on PageRank sculpting, Google’s Matt Cutts added a very interesting disclaimer to his post about the value of links and how Google may calculate that value:
Disclaimer: Even when I joined the company in 2000, Google was doing more sophisticated link computation than you would observe from the classic PageRank papers. If you believe that Google stopped innovating in link analysis, that’s a flawed assumption.
Although we still refer to it as PageRank, Google’s ability to compute reputation based on links has advanced considerably over the years. I’ll do the rest of my blog post in the framework of “classic PageRank” but bear in mind that it’s not a perfect analogy.
When I was searching through Google patents to write about this past May, I came across one that described a framework for calculating the value that links might pass along on a page based upon a mix of features about the links themselves, the pages those links appeared upon, and the pages the links pointed to.
Even though the patent, Ranking documents based on user behavior and/or feature data, was originally filed in 2004, it showed a considerable evolution in thought about links from the early days of PageRank.
The “random surfer,” described in early PageRank papers, might randomly choose any link that appeared on a page, or even get bored with the page and jump to a completely different page.
The “Reasonable Surfer” of the new patent took a different approach, looking at where a link appears on a page, how relevant the anchor text used in the link is to the page it appears on, how commercial that anchor text might appear to be, what color or font size might be used in the link, and more.
I describe more of those features in my post, Google’s Reasonable Surfer: How the Value of a Link May Differ Based upon Link and Document Features and User Data.
Many in the SEO industry had the sense that not all links were created equal before the patent was granted, but before the Reasonable Surfer, we didn’t have a good framework for describing how links might be treated differently in different situations. It’s been six years since the Reasonable Surfer patent was filed.
Chances are that the Reasonable Surfer model has evolved as well.
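To make the contrast concrete, here’s a minimal sketch of the difference between the two models. Everything in it – the pages, the link weights, the idea that a main content link earns several times the weight of a footer link – is my own invented illustration rather than anything taken from the patent, which describes far more features than this:

```python
# Toy PageRank where rank flows along links in proportion to a per-link
# weight, rather than being split evenly among a page's outlinks.
# The weights below are invented stand-ins for the kinds of feature-based
# scores (link position, font size, anchor text relevance, user data)
# that a reasonable-surfer style model might compute.

DAMPING = 0.85

links = {
    "A": [("B", 0.7), ("C", 0.2), ("D", 0.1)],  # main content, sidebar, footer
    "B": [("A", 0.5), ("C", 0.5)],
    "C": [("A", 1.0)],
    "D": [("A", 1.0)],
}

def pagerank(links, damping=DAMPING, iterations=50):
    pages = list(links)
    rank = {page: 1.0 / len(pages) for page in pages}
    for _ in range(iterations):
        new_rank = {page: (1.0 - damping) / len(pages) for page in pages}
        for page, outlinks in links.items():
            total = sum(weight for _, weight in outlinks)
            for target, weight in outlinks:
                # The classic random surfer would use 1/len(outlinks) here.
                new_rank[target] += damping * rank[page] * (weight / total)
        rank = new_rank
    return rank

print(pagerank(links))
```

Set every weight to the same value and the calculation reduces to classic PageRank; vary the weights, and the footer link from page A suddenly passes far less rank to page D than the main content link passes to page B.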
Keyword Proximity and Semantic Closeness
Look at many SEO audits and site reviews, and one piece of advice that you may see is that when you are optimizing for a certain phrase, the closer together the words in that phrase appear on a page, the more relevant that page might be considered by search engines for that phrase.
That seems to make some sense.
For example, if you want a page to rank well for the phrase “ice cream” (without the quotation marks), it’s likely that this first sentence:
“I went to the store to buy ice cream,”
is more relevant for the term than:
“I went to the store to buy cream, and slipped on the ice.”
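If you want to experiment with this idea yourself, one simple stand-in for proximity is the minimum number of word positions separating the query terms. Here’s a quick sketch – my own illustration of the general concept, not anything from a search engine’s actual code:

```python
import re

def min_term_distance(text, term_a, term_b):
    """Smallest number of word positions separating two terms in a text."""
    words = re.findall(r"[a-z]+", text.lower())
    positions = {term_a: [], term_b: []}
    for index, word in enumerate(words):
        if word in positions:
            positions[word].append(index)
    if not positions[term_a] or not positions[term_b]:
        return None  # one of the terms never appears
    return min(abs(a - b) for a in positions[term_a] for b in positions[term_b])

# The two sentences above: adjacent terms vs. terms five words apart.
print(min_term_distance("I went to the store to buy ice cream", "ice", "cream"))
print(min_term_distance("I went to the store to buy cream, and slipped on the ice",
                        "ice", "cream"))
```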
In May, I wrote about another recently granted Google patent that adds a twist to this concept of proximity, in the post Google Defines Semantic Closeness as a Ranking Signal.
The way that information is presented on a page may influence how close a search engine may perceive two words to be.
For example, I have a list using the HTML unordered list element <ul> with the heading “Saturn Facts,” which describes the orbital period of the planet, how quickly it rotates, its mass, its volume, and its distance from the Sun, as seen in the image below:
The list itself is considered a semantic construction, in that each item listed is as important to the title as any of the other items.
Even though “volume” appears in the next-to-last item in the list, under this patent it is considered to be just as semantically close to the word “Saturn” as the word “Rotation,” which appears in the second item on the list. That’s because, again, each of the list items is considered to be equally distant from the title.
The use of semantic constructions like lists puts a twist on the concept of proximity, and on how the closeness of two words might indicate that a page is more or less relevant for a certain phrase.
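To make the difference easier to see, here’s a toy rendering of the Saturn example – again, my own illustration of the concept rather than the patent’s actual method. Measured as flat word positions, “Volume” sits many words from the heading; treated as a semantic construction, every list item is a single step away:

```python
heading = "Saturn Facts"
items = [
    "Orbital period: 29.45 years",
    "Rotation period: 10 hours 34 minutes",
    "Mass: 95 Earth masses",
    "Volume: 764 Earth volumes",
    "Distance from the Sun: 9.5 AU",
]

# Flat-text distance: how many words into the page each item begins.
word_position = len(heading.split())
flat_distance = {}
for item in items:
    flat_distance[item] = word_position
    word_position += len(item.split())

# Semantic distance: every <li> is equally close to the list's heading.
semantic_distance = {item: 1 for item in items}

for item in items:
    print(f"{item[:20]:22} flat={flat_distance[item]:3}  semantic={semantic_distance[item]}")
```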
Again, SEO is evolving.
More to come…
The second part of this series is Son of SEO is Undead (Google Caffeine and New Product Refinements)
The third and final part of the series is SEO is Undead Again (Profiles, Phrases, Entities, and Language Models)
Hello Bill,
we wrote about this “eternal theme” a few days ago:
http://soyseo.blogspot.com/2010/09/google-instant-tenemos-futuro-de-los.html
Google and the other search engines need to use some “algorithm”, and websites need to be Google friendly… so, we have hard work ahead.
Hey Bill:
Glad to see you put the Chicken Little “the sky is falling” heretics in their place. I caught the tail end of an exchange on “paid content” and their rant about how social media has eclipsed Google. I had to silently laugh to myself, since what is hot and what is not is often canned in a day and chucked into the archives; the churn rate on social content is staggering.
On the contrary, pages that are SEO’d properly get stronger with age (from the grandfather effect) if nurtured properly. I like the breakdown provided on semantic closeness, and may I also suggest the piece on the fuzzy set and semantic connectivity by Dr. E Garcia here – http://www.miislita.com/semantics/c-index-1.html
The rise and fall of relevance signals is constantly in flux. But by no means will social media replace search engines as the go-to source for millions in shopping mode. Glad to see you breaking down algorithms like folding chairs, as always, to share the fragments with those who can ingest them.
Most SEOs unfortunately never pop the hood to see how things work, chasing effects rather than causes. It’s good to see you building the practical bridge for those who really should know how and why things work, as well as what is most important (conversion), and not just rankings.
All the best, and good to read some great material as always, Bill!
Hi Soy SEO,
Nice to see your post, and your reference to technological doomsayers. I guess we are on the same wavelength.
One of the challenges of SEO is keeping an eye out for changes at the search engines, and anticipating how they might influence the behavior of searchers.
It’s one of the reasons why I like digging into patents and white papers so much, though it can take some time to do that, and to keep up with news from blogs, newspaper articles, and other sources of information as well.
SEO is constantly changing, and sometimes even small changes can have large impacts. Being able to adapt is really essential to doing SEO right.
My alternative working title to this post was “The New SEO 101.”
Yes, you are correct – SEO is evolving, and that is what makes your blog so great. It gives me lots of new ideas to test all the time. I have definitely noticed that semantic constructions impact ranking.
Evan
Hi Jeffrey,
I noticed that when many of the “SEO is Dead” posts I’ve seen come up, it wasn’t unusual for many people to respond, saying that SEO isn’t dead, but is rather evolving. But even with those responses, I really wasn’t seeing too many people providing examples of that evolution.
I had the chance to meet Dr. E Garcia in person a few years ago, and he was pretty generous with his time telling me about some of the areas that he was exploring involving semantics and co-occurrence. Thanks for the link.
Many of the patents and white papers that I’ve seen show that search engines haven’t been ignoring signals from social media for years, but that’s only part of what they’ve folded into their ranking algorithms.
Thanks. Hadn’t quite thought of deconstructing a patent as akin to stacking up folding chairs, but I like the analogy. Part of the fun is sharing, and being able to hear the perspective of others who might not have the patience to sort through the legalese and the technical jargon, but have plenty of experience with websites and search engines.
Chasing algorithms isn’t a fun exercise, but being able to learn about the algorithms, and try to understand some of the assumptions behind them can be.
Thank you, Jeffrey.
Dear Bill,
So very well said. SEO is not dead but evolving; you cannot just go out and spam thousands of junk links and expect a top position. Things have changed and will continue to change. Google has guys like Amit Singhal who know IR inside out and have worked more than a decade on the topic. Things are getting advanced and complex; you just have to know what to do, and how to do it, as time goes on.
kind regards
Ernest
Thanks for this post. I’ve been building my website for a little under a year now, and was doing some research on SEO when I found this. Gosh, how does someone like me, a newbie, learn and keep up with all the changing SEO rules? It seems that us newcomers need someplace to turn to get advice on the changing ways of doing things on the world wide web. Any suggestions or advice would be really appreciated.
I have been in the industry for quite a long time, and the conclusion I have reached over that long span is that Google will keep evolving, and the role of SEOs will keep growing, unless search engines are completely shut down.
Thanks for the amazing post anyways 🙂
The best place for a link will always be within content that is related to the anchor text and related to the target page. The value of this link can only be topped if Google can figure out which content is more relevant (they are trying, but I’m not satisfied with the current state yet), even if both sites are equally filled with semantically related words.
For example, when searching for “ice cream”, a page containing:
“…It was the first time I tasted this ice cream. It is the ideal dessert…” (i.e. an ice cream related review)
would be more relevant than a page containing:
“…Later we bought some ice cream and went back home. It was the best dessert I had this week…” (i.e. a personal post by an ice-cream lover)
So maybe one way people can think about semantic construction is to visualize their pages with “topic segments”, putting the phrases and expressions that are relevant to them into those topic segments. If search engines can find enough of these topic segments, they may be able to chart some algorithmic correlations.
A topic segment should not be construed as a bucket or a bag, however. A topic segment has structure and order. Some people continue to promote the bag-of-words-based Latent Dirichlet Allocation (LDA) model as a useful way to analyze page content but it’s not helpful in analyzing structured, ordered content.
A few examples of topic segments include: ordered lists (as you point out above), images (using alt= text), tables, microformats, and forms (using text labels). I doubt DIVs or SPANs could be used to define or demarcate topic segments.
I believe you’ve reviewed some patents (or applications) in the past that deal with segmenting pages (maybe coming out of Yahoo! and Microsoft). It might be interesting to revisit those reviews to see if they shed any light on this subject.
The thought that SEO is dead or that Google is going anywhere soon is nuts. In fact, with the push of Places into SERPs, SEO just got a nice kick to the forefront. A whole industry is going to pump now, building reviews, managing citations, and endless content to tickle Google’s fancy.
Bill, you are so right about SEO being an evolving thing. Social media is the latest buzz, and might be a flash in the pan or might be here to stay… who knows. What I do know is that people still currently search (mostly on Google) for things/services they want, and SEO plays a major part in that. Social media probably will change the SEO landscape, but it will just be another evolution rather than its death.
God I hope SEO isn’t dead! I’m just getting started trying to get my wife’s site higher for its search terms, and if SEO is dead, then everything I’ve been reading about how to do it is out the window. Things change too fast these days.
Yes, SEO is not dead but just evolving. That is definitely true. And I don’t think SEO will be gone altogether in the future because SEO is a necessity if you really want to be found and if you want to make sales.
SEO will never, I repeat never, be trumped by SMO or SMM. Neither of these alternatives delivers traffic that is even remotely as targeted. This I have experienced first hand, and have abandoned efforts to use them. People on social networks are browsers only. If they are looking to buy, they still go to the search engines. At least, that has been my experience.
SEO is not dead – it is however, ever changing. Years ago I learned the hard way with a penalty from Google for having traded reciprocal links on state pages on my real estate website. Since then, SEO has evolved, but it is about relationships as much as anything. Great content and interacting on the web with others can get you true, natural organic links that are still extremely important. Ranking well in the search engines for me and for my business is so much better than social media it isn’t even close.
While methods change, SEO is certainly not dead!
Anyone who thinks SEO is dead has failed to consider that SEO and SEM are unmatched in their capability for “Influencing Consumers When Their Purchase Intent Is At Its Peak”. I know how to do that with SEO and SEM, but not with social media. For reference, I borrowed the phrase above from the title of an article recently published in MediaPost’s Performance Insider.
“SEO is dead” is simply a phrase used by those who do not understand the intricacies of the entire art of optimization. While we have come a long way from simply throwing out a lot of blog comments, or submitting to directories, there are still a number of ways to get targeted anchor text back to your site, which I still consider the holy grail.
If you look behind the curtain at Google, you would probably find out that any SEO tactics that are spammy or disingenuous would likely not hold much water. If you do SEO correctly, and do your work with regards to on page optimization like your “ice cream” example, then you will probably end up way ahead of the rest of the pack that is still doing what is akin to a spamathon.
I actually enjoy it when people say that SEO is dead, because that is one more person who I am not competing with in the SERPS.
Like when people talked about the millennium bug in 2000 and how the world was going to end – the same goes for SEO. The game is always changing in SEO, and the introduction of more shopping results, instant search, and thumbnails appearing when hovering over results just means you need to work harder and keep on top of your game. Work hard, get rewarded.
I feel that if you have a site that is earning you good money, work on it in terms of quality content and backlinks, and you’re on your way to a long-term passive income.
As SEO is evolving, it means greater opportunities.
I truly admire your perspective, and Soy SEO’s as well.
In time, we can adapt to these changes.
Cheers to good SEO!
SEO will never die. People will always search for valued content pages that offer solutions to their problems, and that is what we offer.
Bill, I always enjoy reading your interpretations, but this one is one of the best, IMO. I’ve believed for some time that Google (and perhaps they’re not the only one) is much closer to semantic analysis capability than many people give them credit for.
In fact, I think some have labeled me as “unstable” because I’ve said that I think we’ll see links lose most of their weight, and PageRank will essentially disappear, as that capability develops.
The semantic impact of lists, by the way, is something I never thought of. Dynamite stuff!
🙂 SEO is ‘undead’ I like it…..
Do we think ‘on-page’ optimisation is dead though? Is it ALL about the links?
So, then, when it comes to the relationship and proximity of words on a page to each other, what good does commenting on blogs and leaving your web site link do when the blog is not relevant to your keywords at all? Like this – honestly, I found your site through a dofollow list and am trying to provide some link juice for a project I’m working on (for a client) which has nothing to do with SEO.
Now, as a developer and marketer, I’m interested in this topic and truly found the article informative and instructive. But I thought the issue of what such a link is worth, and whether one would consider it spam or blackhat or what have you to go out purposefully placing links on blogs (which I think everyone must do to some extent), is a natural topic this would bleed into.
So what good is this link going to do for my site, especially when I haven’t used traditional spam-ish methods like dropping a link in the comment with my keywords as anchor text, which I don’t think anyone here would appreciate?
I’ve subscribed to this thread because I’m really interested in this.
Interesting take on the way things are evolving – a very good read to get the mind going!
Thanks!
Dan, I don’t think on-page optimisation is dead at all, however the emphasis at this moment in time really does seem to be on the inbound links and I think that’s where most people are currently heading, paying less attention to actually optimising their own site.
Hi Evan,
Thanks.
Interesting to hear that you have seen semantic constructions impact rankings. I haven’t heard much from anyone who has written about testing them.
Hi Ernest,
Google does have some pretty sharp people working for them, testing out new approaches and ideas, and advancing the way that search happens. Keeping up with some of those changes means that you may have opportunities that might have been missed otherwise.
Hi Fix,
Thanks.
A good starting point would be the blogs and help pages from the search engines themselves. After that, it’s not a bad idea to visit a number of SEO blogs, find ones that you enjoy reading and seem to have some level or authority or expertise, and subscribe to their RSS feeds.
Hi Usman,
I remember being invited to speak on an “SEO 101” panel for a conference a few years back, and being asked to decide amongst the other panelists which aspect of SEO we would each like to discuss. I was a little surprised that everyone else wanted to cover topics that were the basics of SEO from 1999, and was told that the topics I wanted to cover were too advanced. Many of those “advanced” topics had been around for at least 3-4 years at that point, like Visual Gap Segmentation.
A lot has changed…
Hi Andreas,
The kind of semantic analysis you’re describing sounds like the work done by organizations like Powerset.
The reasonable surfer approach that was described in Google’s patent may not quite be looking at things on that level, but does cover a wide range of features that try to anticipate which link a visitor to a page might find more valuable.
Hi Michael,
I think you’ve described a good way to think about semantic closeness.
The visual gap segmentation approaches developed by Microsoft, Google, and Yahoo are similar in that they try to distinguish different parts of a page from each other on the basis of both the code behind the page (a DOM approach) and the actual visual gaps seen in a page, looking at things like whitespace, horizontal rules, and more.
The segmentation approach can be extremely helpful in circumstances like the front pages of sites that might contain information about different topics, like a newspaper or magazine site – where one section might be completely unrelated to another.
Great example of an image as a semantic construction. A search engine might consider the image file name, alt text, caption, and a window of text surrounding an image to learn more of what an image is about. It might then compare the image itself to other images on the same page to see if the image should be considered a dominant image on the page, or on a segment of a page.
Here’s a post about one of those Microsoft patents on Segmentation that I find pretty interesting:
Search Engines, Web Page Segmentation, and the Most Important Block
Hi Carl,
Thanks for the Google Places example. Google took a few steps to emphasize the importance of local search, including making local search results seem more tightly integrated into Web search results, and now there seems to be a lot more interest in ranking well for local search.
Hi Mike,
I agree with you. Social media has been a big part of the Web for years – before Facebook and Twitter, in blogs and forums and other sources.
When new resources like Twitter come around, and become very popular, chances are the Web is going to change and evolve, but most SEOs who have been doing SEO for a while should be comfortable with the constant changes and evolution of search.
Tomorrow, chances are good that another kind of social media will arrive that may impact SEO as well – maybe something more location based. But there’s still going to be a need for people like SEOs who spend time learning how such systems work, and understanding how they can fit into a marketing plan.
Hi brent,
Chances are that if you are learning from some good resources, then you’re building a pretty good foundation of SEO knowledge that you can build upon as changes do happen. If you can recognize some of the impact of changes to SEO and the Web before competitors do, then you may have an advantage that they don’t.
Hi Andrew,
I agree – I don’t think that SEO is going away anytime soon. It’s just that tomorrow’s SEO is going to look somewhat different than today’s.
Hi Mark,
I have seen a few instances where SEO and Social Media Marketing have been used very well together – especially on sites that emphasize community.
One that I’m thinking about is an organization that provides a service at a specific location, and they keep their potential customers informed of new events and changes very effectively through their Facebook page.
Another uses both Facebook and Twitter to keep people who found them through search updated regarding their very frequent updates to their site.
In both instances, the sites were initially found through search, and many revisits happened regularly because of social media.
Hi Ryan,
Relationship building on the Web goes back even to the days before search engines were one of the primary ways of finding information online. It’s something that can be very helpful to SEO efforts, and can also be a good part of a social media approach.
I agree, it would be a mistake to trade in SEO for social media – I think there’s going to be value in understanding how search engines can help deliver visitors to your websites for a long time.
Hi Randy,
If we think about some of the many different intents that people might have when they go online, there are some that SEO helps with pretty nicely.
1. Finding information on a specific topic that one knows something about already – searchers may have a good sense of formulating a query that may be helpful in finding more information.
2. Finding information about a topic that someone doesn’t know much about – search engines are striving to provide useful query refinements that might be helpful when the subject of a search isn’t well known, and the searcher is engaging in discovery.
3. Information foraging, where a searcher is collecting information on a topic from a broad range of sites, and might be looking for authoritative resources on that topic.
4. Refinding information already found before – personalized search, web histories, and navigational type search results can be helpful.
5. Comparing web sites on specific topics – whether to determine who has the best prices, or products, or presentation of information, or service.
6. Finding how to solve a specific informational or transactional task.
7. Discovering local resources – such as the closest pizza place or auto mechanic.
8. With things like social search and realtime search, finding answers to questions from people who you may already know.
Search engines and SEO may not always be the best at helping with all of these intents, but search is evolving to become a pretty good resource for most of them.
Hi Mike,
I usually find that most people who write about SEO being dead really don’t know much about SEO, and how it can help them. The problem is that those posts often paint a picture of SEO that includes web spam, keyword stuffing, submissions to thousands of search engines, and other tactics that really don’t define how effective SEO works. And they can work to spread that misinformation, sometimes to a fairly wide audience.
Hi Ricky,
At the time of the Y2K bug, I was working for the highest level trial court in the State of Delaware, helping to come up with plans to keep the Court from being negatively affected. With many months of planning, testing, and hands-on efforts to make updates to hardware, software, and processes within our databases and desktops, Y2K fizzled rather than exploded. It was a great experience overall – aside from 6 or 7 very old desktop computers that needed their BIOS updated (which I did on January 1, 2000), there were absolutely no problems, though there had been some potential for serious harm.
SEO involves a lot of the same skills – investigating and gathering information, planning, implementation, and testing.
The web changes, as does search, but understanding those changes makes it more likely that you won’t be surprised by them, and that can really make a difference.
Hi Netviq,
Thank you.
Hi Bill,
I agree. I love pages that are rich information resources – I love creating them, and making it easier for people to find them.
I can’t remember how many tutorial sites I visited in the earlier days of the Web that helped me learn about technology, computers, and the Web, but being able to share that kind of content and information with people who can really use it is something that I strive for.
Hi Doc,
Thank you very much for your kind words.
According to the original deal between Stanford and Google on PageRank, Google holds the exclusive license to use it until 2011. Not sure what will happen after then, but it’s likely that Google has been working on alternatives to PageRank – or at least they should have been. And there are signs that they are looking increasingly at things like actual traffic to pages, user behavior, and more.
I actually think the opposite approach, where someone might expect links to always carry some kind of weight, and for PageRank to continue to exist is the more unstable thought. 🙂
The semantic construction approach was something that floored me when I first read the patent about it. Since then, I’ve been seeing how it applies to other things found on a web page.
I mentioned images above, but revisiting it really quickly, there are ways that a search engine can try to gauge which image might be the most important one on a page, such as comparing sizes, location, the existence or non-existence of thumbnails and multiple links pointing to it, the quality of the image, etc.
There are a number of elements that can surround an image and provide information about it, such as the file name, alt text, caption, information about the segment of a page that it appears within, a look at a window of text that might surround the image, and more. These things may be semantically related, and if the image is a “dominant” one on a page, it may also be related to other aspects of the page, such as the page title, the heading of the section it appears within, and more.
Understanding the construction of an HTML element on a page, and possible semantic relationships on that page to other elements can impart meaning that might not be as well defined, otherwise.
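To sketch how signals like those might combine, here’s a hypothetical scoring function of my own – the features, numbers, and weights are all invented for illustration, and an actual search engine would rely on many more signals:

```python
# Score each image on a page using a few of the signals mentioned above.
# All file names, numbers, and weights here are invented for illustration.

images = [
    {"file": "saturn-rings.jpg", "area": 480000, "above_fold": True,
     "has_caption": True, "links_to_it": 3},
    {"file": "footer-badge.png", "area": 4000, "above_fold": False,
     "has_caption": False, "links_to_it": 0},
]

def dominance_score(image, largest_area):
    score = image["area"] / largest_area          # relative size on the page
    score += 0.5 if image["above_fold"] else 0.0  # prominent placement
    score += 0.3 if image["has_caption"] else 0.0 # editorial attention
    score += 0.1 * image["links_to_it"]           # links pointing at the image
    return score

largest_area = max(image["area"] for image in images)
dominant = max(images, key=lambda image: dominance_score(image, largest_area))
print("Likely dominant image:", dominant["file"])
```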
Hi Dan,
I think on-page optimization is still important, but that we also need to think differently about it.
For instance, if a web page has multiple topics presented on it, like a newspaper page, a search engine may be less likely than in the past to return the page in search results after picking one word from one segment and another word from a different and unrelated segment. Or to rank the page for a long tail query where some of the words in the query are from the main content of a page, and some appear in boilerplate found in a footer or sidebar.
Hi Eric,
A link from a blog comment might not carry the same weight as a link from the main content area of a blog post.
Let’s think about it in another way, though. A thoughtful and meaningful comment on a blog post might help to create a relationship where there might be future links that carry more weight, like a link in a blogroll, or a link in the main content area of a blog. Or visits from people who saw the comment and who might be willing to blog about something that you’ve written. If you think about blog comments as something that might create possibilities like that, rather than just as a way of gaining a link, then that might lead to something with more value.
Hi Mike,
Thanks. I’ve only really brushed the surface. 🙂
Hi Dave,
I agree – it seems that a lot of people are focusing upon links, often to the detriment of the content on their pages. Unfortunately, the quality of that content is something that can help attract links. A good link from a great resource can have more value than a very large number of links from lower quality pages, and sometimes the best way to gain that higher quality link is from creating great content yourself.
Great post Bill.
There was also a particular interview a couple of years back with (I think) Bruce Clay talking to a Googler, who more or less admitted that although the traditional PageRank system still transfers PR, Google decides which links they want to weight for rankings, and that’s pretty much a different thing entirely. It was before Matt Cutts’s statement above.
Wish I could find it again 🙂
“I actually think the opposite approach, where someone might expect links to always carry some kind of weight, and for PageRank to continue to exist is the more unstable thought. :)”
My thought too, Bill. Not sure if that means I haven’t gone over the edge, or implies that you’re as crazy as I am. I think I’ll go with the former.
Like many other realms, search is evolving, and at an amazing pace. Unfortunately, some find a comfortable spot, and make a conscious decision to stay there, even beyond the point that their own wisdom tells them it’s folly. I’ve done it before, myself.
I think you raise some interesting questions surrounding the semantic impact of images. Certainly it makes sense that any attributes would be seen in light of the surrounding text. From what I’ve seen, that is an aspect that hasn’t gotten any attention from a lot of folks, and could reveal a new vein of gold.
Hi Doc,
I think we’re both fine on the thought that something better, or different than PageRank is close around the corner. 🙂
It’s tempting and easy to keep on going with a set of processes that you’re comfortable with, and stay static with them, but I’ve had a number of past work experiences where it’s been necessary to constantly find ways to work smarter, easier, and more effectively, and a good number of “a ha” moments looking back on an old process that’s been improved upon, and wondering why the change didn’t happen sooner.
Looking at surrounding text for images makes as much sense as looking at surrounding text for links in hypertext link analysis – another example of how a semantic construction might use different factors when considering rankings. And I believe I’ve seen passages in patents and white papers that support both.
Hi Kev,
I’m not sure about the Bruce Clay interview, but there were some substantial hints in other older documents about how things like the position of a link on a page might influence the weight of that link.
For instance, Microsoft published a paper back in 2004 involving breaking a page up into blocks and weighing the content of those blocks differently, introducing a block-level PageRank in that paper, where different blocks would have different PageRanks. More about that here:
Block-level Link Analysis
Around the same time that paper was published, Google filed a patent application (Document segmentation based on visual gaps, filed December 30, 2004) regarding visual gap segmentation as well, where they would break a page down into different blocks and different topics. The description used an example of how they might take a page that contained multiple reviews for different sites, and segment them so that each review could be individually associated with the business that it reviewed. Towards the end of the patent filing, they expanded the scope of the patent significantly.
Google and Yahoo have also published patents describing how they might identify material on a page that could be considered boilerplate, such as information in headers, footers, and sidebars, potentially repeated on every page (or on a number of pages of a site), that might not be given as much weight as other parts of a page.
The implication behind Google’s segmentation and the boilerplate identification process is that links from some areas of a page might not carry as much weight as links from more important segments or sections.
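To illustrate the boilerplate identification part of that, here’s a small sketch of one naive way repeated blocks might be flagged – a toy example of my own, not the method from any of those patents:

```python
from collections import Counter

# Blocks of text by page. Blocks that repeat across most pages of a site
# (headers, footers, sidebars) are flagged so their words and links can
# be given less weight. The site, data, and threshold are invented.
pages = {
    "page1": ["Acme Widgets - Home of Widgets",
              "Our new widget spins twice as fast as last year's model",
              "Copyright 2010 Acme | Sitemap | Contact"],
    "page2": ["Acme Widgets - Home of Widgets",
              "Widget care and maintenance tips for the winter months",
              "Copyright 2010 Acme | Sitemap | Contact"],
    "page3": ["Acme Widgets - Home of Widgets",
              "How to choose between our deluxe and standard widgets",
              "Copyright 2010 Acme | Sitemap | Contact"],
}

block_counts = Counter(block for blocks in pages.values() for block in blocks)
threshold = 0.8  # flag blocks appearing on at least 80% of pages

for block, count in block_counts.items():
    label = "boilerplate" if count / len(pages) >= threshold else "content"
    print(f"{label:12} {block}")
```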
It is true SEO is constantly changing. However, the basics that worked 5 years ago are still working today, and the fundamentals of SEO didn’t really change that much. There are many new tricks and ideas that could be implemented into one’s SEO strategy… But even if you don’t care much about that, it’s still possible to get a medium-competitive niche into the top 3 – just with the basic stuff.
I’m an SEO beginner, but after reading a bit about it, it’s clear that you need a long-term strategy. Do your on-page optimization, create quality content that earns links, and build quality links from different sources.
I really do like these posts on on-page optimization.
Hi Joe,
A number of the basics that worked 5 years ago likely still work today, but there have been plenty of changes since then, and if you aren’t aware of many of them, you may be missing some opportunities. One example is the opportunity to show up in the main web search results via Universal Search with blog search, news search, image search, video search, and local search.
Ranking well in a medium competitive niche is fine, but having a local search result, a web page, and a video all in the top ten results for a particular query is pretty nice as well.
Some SEO firms may also be doing some things now that they were also doing five years ago, like attempting to calculate ideal keyword densities for specific queries for their pages, that are as much of a waste of time now as they were back then.
Hi Henrik,
Thanks. I agree with you. A long term strategy can be much better than just stringing along a few tactics together and calling it SEO.
Having a vision of who your audiences are, what you might provide for them on your web site, what kinds of language they use when they talk about what you offer, and figuring out how best to reach them, and how you might offer them something that stands out from what your competitors offer are all good starting points to begin with before you begin to undertake the kinds of tactics that you mention.
I have never felt that social networking would replace SEO but it definitely has had an impact on almost everything online. Luckily your site is my one stop shop for finding the latest updates and to learn how to adapt.
I wholeheartedly agree with most of the people here who are saying SEO isn’t and won’t be replaced by SMO. There’s just no way.
Bill – This was a great article. I think you hit the nail on the head when you say SEO is evolving. It’s my humble opinion that SEO will continue to gain importance in the online space. It’s so critical.
I think people are forgetting the role that each piece of the SEO puzzle plays. They are parts of an interoperating and interdependent whole, and you can’t really have one without the other.
Inbound links to the site are, as it’s been said, like “votes” out on the Web that have been cast for the site. Let’s not worry about link quality for the moment. The point is, they are votes, but votes for what subject, what topic, what keywords? That is the question that Google has to try and answer because the fundamental paradigm upon which the entire search results delivery to the end user is built is still keywords. No matter how they try to refine that delivery based on habits or context, it all still operates on keywords.
On-page SEO, on the other hand, helps the search engines decide just what the page is all about, and which keywords it is going to rank that page for. It helps it to know where to cast the votes.
Now, inbound links do this as well, through a number of means: anchor text first, then surrounding elements like page titles and other contextual elements, including what keywords the linking page is ranked for itself. This is why a thousand one-way links, all from pages that have to do with elephants, pointing to a page that’s all about mice, probably won’t help the mouse page that much. They certainly won’t help a fraction as much as if all those links came from other pages about mice or mouse-related, pardon the pun (really NOT intended, because it’s really dorky), long-tail keywords.
Context is and will always be part of the equation. It has to be, or it makes no damn sense. Therefore, on page contexts should be and no doubt ARE considered as well.
I find all of this extremely fascinating, but I’m curious how significant a factor this is. Not to say that I don’t think it’s wonderful that search engines can determine the semantics of lists and other types of HTML structures on a page, but I’m going to assume that simply writing out this information word-for-word on a page would be just as relevant to a search query as creating a detailed list. I can see how this may help optimize certain types of information or data sets, but I imagine this type of signal is more useful to search engines when information is NOT clearly laid out within the plain text of a page. In situations where a webmaster has little or no knowledge of SEO, it seems like this type of algorithm would be extremely valuable in determining the true focus of the content on a page. For the rest of us, couldn’t we simply re-write a lot of this using heading tags?
Hi Johnathan,
The point of looking at the semantic constructions on a page is to see if there might be a connection between some aspects of what is being presented and the way that it is presented. For instance, the headings (h1, h2, h3) on a page are semantically supposed to tell us something about the content being headed.
But there are other structures on a page as well that can imply relationships between the words that appear on those pages. Associating items in a list with each other, and with the heading or title for that list, distinguishes the content of that list from other content on a page. Rather than just a big bag of words on a page, the way the page is constructed implies semantic relationships that we might miss if we ignore structures like lists, and can give us insight into the contextual relevance of words on that page.
I enjoyed your article, especially on the keyword proximity. I’ve recently read an article about nofollow links actually helping SEO. I’m still not convinced and am wondering whether the links just help the overall keyword density. Do you have any opinion on this subject?
Hi Erika,
Thanks.
Google insists that it isn’t passing PageRank or anchor text relevance through links that are nofollowed, but I’m not ruling out the possibility that if a lot of people are actually following those links, those clicks and visits may provide some ranking value regardless of the nofollow on the link. I don’t think that keyword density has much to do with anything related to search engines these days, and it never has. How frequently a term might be used on a page, compared to how frequently it might be used on other pages in Google’s index that contain it, is another matter entirely.
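To illustrate that distinction: comparing how often a term appears on one page against how often it appears across other pages is the classic TF-IDF idea. A quick sketch, with made-up documents:

```python
import math

# Toy documents, invented for illustration only.
documents = [
    "ice cream ice cream recipes and ice cream flavors",
    "the store sells cream and the store also sells ice",
    "a short history of the neighborhood store",
]

def tf_idf(term, document, documents):
    words = document.split()
    term_frequency = words.count(term) / len(words)
    # How rare the term is across the whole collection.
    containing = sum(1 for doc in documents if term in doc.split())
    inverse_document_frequency = math.log(len(documents) / containing)
    return term_frequency * inverse_document_frequency

for index, document in enumerate(documents):
    print(f"doc {index}: tf-idf('cream') = {tf_idf('cream', document, documents):.3f}")
```

Keyword density only looks at the first half of that calculation, which is why, on its own, it tells you so little.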