After transferring to this new domain name, I recently found that many of the places that linked to my old address (which I have mentioned many times, most notably in one of my episode reviews of Another) were not up to date. One of those places was Technorati. When I tried to update the listing, I kept getting a message that Technorati's bot could not access my blog's feed. Today, I am going to talk about how I fixed the problem.
A while back, I changed from a Linux server to a Mac server. Really, though, the hardware had been a Mac since this blog's first year on Habari; I switched because the Mac server software started working again, so I no longer needed Linux. This year, however, I moved from brycec.dyndns.org to brycecampbell.me, so old links broke and my followers could not find me. While checking links that pointed to my server, I found that Technorati had the wrong address. When I tried correcting it, Technorati kept complaining that its bot could not access my blog's feed.
Okay, that is twice now that I have said Technorati's bot could not find my blog feed. When am I going to tell you the fix? Everyone's situation is different, so I wanted to explain how the problem came about for me, but now that that is out of the way, here is the fix. Technorati was able to grab a screenshot of my blog, and so was Amazon, so the complaints did not make much sense to me. On a guess, though, I edited my server's configuration to allow robots access to the server's DocumentRoot. Surprisingly, that did the trick, and Technorati found my validation token. It certainly seems weird that its bot needs access to the DocumentRoot, but it kind of makes sense, since that is where the token is served from.
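For anyone who wants to try the same thing, here is a rough sketch of what that configuration change looks like. This assumes Apache (what Mac servers of that era shipped with); the DocumentRoot path is only an example, so substitute whatever your own httpd.conf uses:

```
# Hypothetical Apache (2.2-style) example: let all clients,
# including bots like Technorati's, read files under the
# DocumentRoot. The path below is an assumption.
DocumentRoot "/Library/WebServer/Documents"

<Directory "/Library/WebServer/Documents">
    Options FollowSymLinks
    AllowOverride None
    # Grant access to everyone rather than denying by default
    Order allow,deny
    Allow from all
</Directory>
```

After a change like this, restart or reload Apache so the new directives take effect.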
Why do I need to give Technorati access to the DocumentRoot? I do not want robots looking through it. Well, Google offers a nice feature here: in the same file that keeps robots from traversing directories, robots.txt, you can add Allow fields alongside the Disallow fields. However, according to The Web Robots Pages, there is no such thing as an Allow field, and the robots.txt standard is not actively developed. My guess, and it is only a guess, is that Technorati does not recognize the Allow field that Google does, precisely because robots.txt development is inactive and Technorati's bot only recognizes the standard fields.
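To make the difference concrete, here is a hypothetical robots.txt showing both styles. The paths are made up for illustration; the point is that Allow is a Google extension, while Disallow is the only exclusion field in the original standard:

```
# Google's bot understands the non-standard Allow field,
# so this lets it into /public/ while blocking everything else:
User-agent: Googlebot
Allow: /public/
Disallow: /

# A bot that only reads the standard fields sees just this,
# and ignores any Allow lines entirely:
User-agent: *
Disallow: /private/
```

So if a bot like Technorati's only honors the standard fields, the Allow lines do nothing for it, and the only way to let it in is to not Disallow the paths it needs in the first place.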
To sum things up, Technorati must be allowed access to whatever the server specifies as the DocumentRoot, and its bot probably does not read non-standard robots.txt fields such as Allow, which Google does recognize.
Did Technorati find your claim token easily? If not, did this post help fix things? Do you have any other possible solutions? Feel free to comment.
Use an app on your phone (e.g. Scan for Android) to capture the image above. If successful, you should be taken to the web version of this article.