I took a look back at the posts here from 2010, and tried to decide which ones stood out for me in some way. These are some of my favorites from last year:
This post was inspired by Benjamin Franklin’s self-help approach, and by a look back at when I first started building and promoting web pages as an in-house webmaster and SEO. In addition to the 13 areas that I chose to concentrate upon to become a better webmaster/SEO, there were a lot of good suggestions in the comments that follow the post.
Keyword research can be a chore, but it can be pretty interesting as well. This post is about some of the methods that I use to expand my choices of keywords as I do research.
I took the bait and responded to a popular post questioning the value of SEO. It wasn’t the only time I did so this year, and it probably won’t be the last.
Google started publishing a second generation of patents about phrase-based indexing. If you’re into SEO and you haven’t spent any time with phrase-based indexing, you may be missing something important.
Are the words at the top of a page more important for SEO than the ones at the bottom? What does it mean for a search to be semantic? Google provides an example of how the structure that some content appears in, such as items in a list, may throw some textbook search engine knowledge for a loop.
The original PageRank papers made it look like each link on a page would pass along the same amount of weight, an assumption that many seemed to follow in linking and link building. That assumption has been challenged over the past few years, with statements from representatives of the major search engines that not all links are equal. Then this patent came out and gave us a model for how links might be weighted differently.
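To make the contrast concrete, here is a rough sketch of the two models in textbook PageRank notation; the per-link weighting function $w(q, p)$ in the second equation is an illustrative stand-in of mine, not the formula from the patent. In the classic model, a page $q$ with $L(q)$ outgoing links splits its vote evenly:

\[ PR(p) = \frac{1-d}{N} + d \sum_{q \in B(p)} \frac{PR(q)}{L(q)} \]

where $B(p)$ is the set of pages linking to $p$, $N$ is the total number of pages, and $d$ is the damping factor. A weighted model replaces the uniform share $1/L(q)$ with a link-specific weight:

\[ PR(p) = \frac{1-d}{N} + d \sum_{q \in B(p)} w(q, p)\, PR(q), \qquad \sum_{p'} w(q, p') = 1 \text{ for each } q \]

so that a link’s share could depend on features like its position or prominence on the page, rather than every link on $q$ counting the same.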
Would Google enable us to see the way webpages looked and ranked in the past? The author of this patent built a search engine used on the Internet Archive’s Wayback Machine in 2003, and it’s possible that we’ll see a Google Internet Archive in the future.
One of the main limitations of a search engine is that sometimes there just aren’t good search results for some queries. Should Google tell us when we perform a search that the results we see could use some help? Is the process described in the post something that will help improve the quality of search results, or will it give content mills more information to create pages filled with barely adequate content and advertising? It’s interesting that two of the listed inventors are Google’s Chief Economist and its Head of Webspam.
The patent I wrote about in this post focuses upon how Google might determine that some websites are related to others. What implications might that have for the value of the links between those pages?
When I wrote this post, I had been reading a number of posts on other sites about how Google favors businesses with strong brands over other web businesses. I didn’t think those posts went far enough: they missed the idea that websites that can be associated with specific named entities (including brands) are being treated differently than sites that aren’t associated with entities.
Chances are good that Google is looking more closely at user behavior data, collecting profile information about users, queries, and webpages to classify websites. What this might mean for a website is that it may be boosted in some searches and lowered in rankings for others, based upon those classifications.
My offline traffic woes turned into this post about navigation on the Web.