Do Twitter and Facebook help you climb up Google?

January 27, 2014

Of course you want to be found on the web. Of course you want to feature in a good place on the Google results page. How you achieve that is called SEO, Search Engine Optimisation. SEO is a subject we have already addressed on the LIVErtising blog, as it is a cornerstone of any online presence.

One question that keeps many people on their toes is whether you can leverage your Twitter and Facebook presence to boost your position on Google. In other words:

 Are Facebook and Twitter signals part of the ranking algorithm? How much do they matter?

There has been much speculation about this for quite some time now. The search algorithm Google uses to rank the web pages that may answer your question is a secret more closely guarded than the real age and weight of any Hollywood actress, let alone George Clooney’s actual income. But, as it is obvious that a major contributor to your position in search results pages is the weight of the pages linking to your own page, it appears a safe guess to bet on Twitter and Facebook as a source of “backlinks” pointing to your page.
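As a reminder of why the weight of linking pages matters, here is a toy sketch of the PageRank idea Google started from: each page spreads its score across its outgoing links, so a link from a heavy page is worth more than a link from a light one. The three-site link graph and the domain names below are invented for illustration, and Google’s live algorithm is obviously far more elaborate:

    # Toy PageRank: a page's score is fed by the pages linking to it,
    # and each page spreads its own score across its outgoing links.
    # The graph and domains are invented; dangling-node handling is
    # omitted to keep the sketch short.

    DAMPING = 0.85  # the damping factor from the original PageRank paper

    links = {  # page -> pages it links to
        "yourpage.example": [],
        "bigblog.example":  ["yourpage.example"],
        "social.example":   ["yourpage.example", "bigblog.example"],
    }

    pages = list(links)
    rank = {page: 1.0 / len(pages) for page in pages}

    for _ in range(50):  # iterate until the scores settle
        new_rank = {}
        for page in pages:
            # Share of rank flowing in from every page that links here
            inbound = sum(rank[src] / len(out)
                          for src, out in links.items() if page in out)
            new_rank[page] = (1 - DAMPING) / len(pages) + DAMPING * inbound
        rank = new_rank

    for page, score in sorted(rank.items(), key=lambda item: -item[1]):
        print(f"{page}: {score:.3f}")

Run it and “yourpage.example” comes out on top, simply because both other pages link to it.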

Now, the best source to get an answer, or at least a hint, is of course Google itself. Takeaway advice: visit Google’s channel on YouTube, or even subscribe to it. There, Matt Cutts, a software engineer at Google, regularly provides video answers to key questions. A few days ago, Matt answered the question we are discussing in this post. Here are a few key elements before you watch the whole video:

Facebook and Twitter pages are crawled by the Google bots and treated like any other page in the web index, so they can show up in search results like any other website. But Twitter retweets or Facebook likes, for instance, are not treated as special signals in the current state of the algorithm.

There are two reasons why this is so:

  1. In the past, the Google bots have at times been prevented from crawling those social networks, so the engineers are wary of building such signals into the algorithm only to discover they have become unreliable. My comment: this can only mean that those engineers have tried to include them, only to run into that difficulty, and that more stability may invite them to consider them again.
  2. A second source of unreliability is the fast and constant stream of updates to statuses and relationships on social networks, while crawling can only happen at discrete points in time, leaving a gap between the index and the live web. That is a state of affairs Google wants to avoid. I must say I do not really get this point, as it applies to the web as a whole: potentially any page can be updated at any time, creating the same mismatch. Refreshing your page regularly is even a way to invite the bots to visit it more often (see the sitemap sketch after this list). Can you help me understand this point, then?
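On the subject of freshness, there is at least one standard way to tell crawlers how often a page changes: the sitemaps protocol documented at sitemaps.org, with its lastmod and changefreq hints. Here is a minimal sitemap generator in Python, standard library only; the URLs and dates are placeholders, not real pages:

    # Minimal sitemap.xml generator (sitemaps.org protocol).
    # The URLs and dates below are placeholders for illustration.
    import xml.etree.ElementTree as ET

    NS = "http://www.sitemaps.org/schemas/sitemap/0.9"
    urlset = ET.Element("urlset", xmlns=NS)

    entries = [
        ("https://www.example.com/", "2014-01-27", "daily"),
        ("https://www.example.com/blog/seo", "2014-01-20", "weekly"),
    ]

    for loc, lastmod, changefreq in entries:
        url = ET.SubElement(urlset, "url")
        ET.SubElement(url, "loc").text = loc
        ET.SubElement(url, "lastmod").text = lastmod        # when the page last changed
        ET.SubElement(url, "changefreq").text = changefreq  # a hint to bots, not a command

    ET.ElementTree(urlset).write("sitemap.xml", encoding="utf-8",
                                 xml_declaration=True)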

Do not simply assume that because there is a signal on Facebook or Twitter, Google is able to access it. A lot of pages might be blocked from crawling, or links may carry a “nofollow” attribute, or something along those lines.
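Both blocking mechanisms are easy to picture in code. The sketch below, again plain Python, first asks a robots.txt file whether Googlebot may fetch a page at all, then scans a snippet of HTML for “nofollow” links; the robots.txt rules, the domain and the HTML are invented examples, not the actual files of any social network:

    # Two ways a social-network page can keep its links out of Google's graph:
    # 1. robots.txt can forbid crawling the page at all;
    # 2. links can carry rel="nofollow", telling bots not to pass weight.
    # The robots.txt and HTML below are invented for illustration.
    from urllib import robotparser
    from html.parser import HTMLParser

    robots_txt = """\
    User-agent: *
    Disallow: /private/
    """

    rp = robotparser.RobotFileParser()
    rp.parse(robots_txt.splitlines())
    print(rp.can_fetch("Googlebot", "https://social.example/private/status/42"))  # False
    print(rp.can_fetch("Googlebot", "https://social.example/public/status/7"))    # True

    class NofollowFinder(HTMLParser):
        def handle_starttag(self, tag, attrs):
            if tag == "a":
                attrs = dict(attrs)
                if "nofollow" in (attrs.get("rel") or ""):
                    print("nofollow link:", attrs.get("href"))

    NofollowFinder().feed('<a href="https://example.com" rel="nofollow">shared link</a>')

A retweet or a like sitting behind either of those barriers simply never reaches the index, whatever weight it might otherwise have carried.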

So, how come elements that get a lot of likes often also happen to be boosted in their ranking? This is correlation, not causation, says Matt Cutts: great content gets likes, and it also gets backlinks, and it is the backlinks that yield good search positions. Back to Google’s mantra: to rank high, take care of your content. Great content, great results. Additionally, isn’t it a safe bet to develop your presence on Google+?

Please comment below, after watching the video!
