Over the last year it felt like Google changes and algorithm updates were rolling out more frequently than my teenage son likes to change his underwear (sorry Carter… but I’m pretty sure none of your friends read my blog 😉).
With Panda hitting with full force in March 2011 and subsequent updates rolling out through the balance of the year, it finally felt like things were starting to stabilize again… until I read a very disturbing article on the Wall Street Journal website this morning.
The article summarized an interview with Amit Singhal, a top search executive at Google, so I’m going to assume this is not just speculation or rumour (although I wish it were).
Google changes that sound inevitable…
“Over the next few months, Google’s search engine will begin spitting out more than a list of blue Web links. It will also present more facts and direct answers to queries at the top of the search-results page.”
So in other words, if I type a question into Google and they think they have a ‘factual’ answer, rather than show me websites that have worked hard to provide the answer, they’ll just give me the answer at the top of the SERP… based on all of the data they have gleaned (or should I say ‘stolen’) from other people’s websites.
I think this statement really highlights my point…
“Under the shift, people who search for “Lake Tahoe” will see key “attributes” that the search engine knows about the lake, such as its location, altitude, average temperature or salt content. In contrast, those who search for “Lake Tahoe” today would get only links to the lake’s visitor bureau website, its dedicated page on Wikipedia.com, and a link to a relevant map.”
So if you’ve put your blood, sweat and tears into building information sites to attract visitors so you can then monetize them through other means, when these Google changes roll out you will no longer be needed.
I am sure Lake Tahoe’s Visitor Bureau will be happy to hear that Google has publicly stated they intend to steal their traffic for these types of queries.
Imagine the repercussions these Google changes will have for Wikipedia!
Their entire site is built around providing ‘factual data’… their traffic will drop like a rock.
But the question has to be asked…
Where did Google get all of this ‘factual data’?
Did they hire thousands of ‘experts’ to research and validate the data that will be displayed? No.
They compiled the data by scanning the websites they consider authorities… the same websites that will be cut off when they are no longer needed.
These “Google changes” are designed to increase Google’s stickiness and keep people on its site longer, but is that really what we want from a search engine?
I thought a search engine was supposed to give me an unbiased list of the sites most relevant to my query.
What do you think?