Belgium Still Doesn’t Understand The Internet
There hasn’t been much said over the past few months about the case of the Belgian newspapers suing Google. If you haven’t been following it, here’s what’s happening:
A group of Belgian newspapers has sued Google over its Google News service. The papers claim that both Google’s website cache and its snippets of articles violate the country’s copyright laws.
It was announced today that Google lost the case.
These newspapers fail to see the value of being included in Google News. After all, Google doesn’t show the whole article, so it’s only acting as a way of driving traffic to the newspapers’ websites. But that’s not even the part of this case that astounds me most.
The point I fail to understand is why this ever came to trial. Are the newspapers unfamiliar with the robots.txt standard?
It seems to me that if they didn’t want their content included in Google’s index, they could have just told Google not to crawl it, along the lines of the sketch below. Am I missing something, or does that just make too much sense?
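For illustration, here’s roughly what that would look like. The domain and paths are made up, and I’m assuming the papers only wanted out of Google’s crawl; a file like this, served from the root of each newspaper’s site, is all it would take:

    # robots.txt at http://www.example-paper.be/robots.txt (hypothetical domain)
    # Google's crawler reads this before fetching pages and skips anything disallowed.
    User-agent: Googlebot
    Disallow: /

    # Or, if they only objected to the cached copies and snippets,
    # each article page could carry a meta tag instead:
    # <meta name="robots" content="noarchive, nosnippet">

A blanket Disallow would have kept them out of the index entirely; the meta tag is the finer-grained option if the cache and the snippets were the real complaint.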
February 13th, 2007