Google’s Panda Update Months Later – What’s Changed?


Unless you’ve been living under a rock these past 5 months or so, you’ve probably heard of, or been affected by, the update codenamed “Panda”.

It all started at the end of February, when Google gave users the ability to “flag” poor sites with a Chrome browser extension.  They did this as a way to begin beta-testing a new update: they wanted to see whether the algorithm they had developed found the same sites to be poor that regular users did, and they found their algo was quite accurate.  Then, in April, using that user data and other signals, they began rolling out the new algorithm, which affected roughly 12% of search queries.

Since those updates, Panda, as it has been named, has been refined a couple of times, most recently in mid-June.  Some sites have seen improvements since that change, but most webmasters are still reporting that they haven’t recovered from the first updates.

Some sites I’ve been working with have seen recovery, but not until changes were made.  This article explains how I came to my conclusions.

So what did we do to start improving?

First, let me tell you that this may not work for you.  My research suggests that Panda is indeed site specific.  I have come to this conclusion because I work with a network of about 100 sites – all are very similar in content and form, yet only a handful were affected severely.  Most took small hits in traffic and visitors, while others actually improved.

So I started analyzing which pages were hit and compared them to pages which had retained their rankings.  From there I developed a model of an “ideal page” using metrics like keyword density.  I didn’t try to over-analyze the good pages: I know most of the pages on the sites in question are virtually identical (they are template-driven sites), so I ignored the things which are common across the sites.

Ultimately, for these pages, it did come down to keyword density – good pages fell within a definite range, and the pages Panda removed were either below or above that range.
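For what it’s worth, here’s a minimal sketch in Python of the kind of density check involved.  The word-level definition of density, the single-keyword simplification, and the 2–4% range are all illustrative assumptions, not the actual thresholds I found for these sites.

```python
import re
from collections import Counter

def keyword_density(text, keyword):
    """Fraction of words in `text` equal to `keyword` (single words only;
    a multi-word phrase would need phrase counting instead)."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    return Counter(words)[keyword.lower()] / len(words)

# Hypothetical "ideal" range -- the real numbers will vary per site.
IDEAL_LOW, IDEAL_HIGH = 0.02, 0.04

def classify_page(text, keyword):
    density = keyword_density(text, keyword)
    if density < IDEAL_LOW:
        return density, "below range -- work the keyword in more"
    if density > IDEAL_HIGH:
        return density, "above range -- likely reads as stuffed"
    return density, "within range"

# e.g. classify_page("Buy widgets here. Widgets are great...", "widgets")
```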

So I started testing small – rewriting certain pages to hit that ideal ratio.  It didn’t take long to determine that this was indeed the case for these sites, so we began rewriting all “low quality” pages to get them into the ideal range.

What are the results?

While it’s still early – we’ve only been at this for a few weeks – we are seeing improvements.  Within days of being rewritten, pages appear to start showing up again for the long-tail searches they used to rank for.  Traffic on most sites with rewritten pages has been improving.  The growth varies by site – smaller sites seem to “pop” more quickly, while larger sites take a little longer.

For example, a smaller site saw its traffic recover to pre-Panda levels within a couple of weeks, while a larger site is still down 20% from pre-Panda levels (but up 30% from its initial post-Panda traffic, which works out to an initial drop of nearly 40%) and is still climbing.

It’s as if Google is re-indexing the pages, re-checking each one individually against some quality measurement, and then applying other algorithms to it.  For example, if a page passes, it is then reassessed for its link worthiness (among other things), and the site needs to be recalculated to account for the new internal links.  This in turn kicks in other algos which say “OK, this site is worth more, let’s index more pages.”  Those pages are then measured; if they pass, their link worthiness is calculated, which forces another site recalculation, and so on.

I equate it to blowing up a balloon.  Each site as a whole starts as the empty balloon.  As you change pages to higher quality (or add higher-quality pages), it’s like taking a breath and blowing it into the balloon.  It’s now bigger, meaning more pages can be assessed.  When they pass, the balloon gets another breath.  This process continues until the balloon is fully inflated – the site has been fully re-assessed.
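If that mental model is right, recovery behaves like compounding passes, where each batch of passing pages enlarges the next batch.  Here’s a toy Python simulation of the idea; every number in it (the initial crawl budget, the budget earned per passing page) is invented to illustrate the shape of the process, not anything Google has published.

```python
def reassess(site_pages, initial_budget=10, budget_per_pass=2):
    """site_pages: list of booleans, True = the page meets the quality bar."""
    budget = initial_budget
    indexed = set()
    passes = 0
    while True:
        # Pages within the current budget that haven't been assessed yet.
        frontier = [i for i in range(min(budget, len(site_pages)))
                    if i not in indexed]
        if not frontier:
            break
        passes += 1
        newly_passed = sum(1 for i in frontier if site_pages[i])
        indexed.update(frontier)
        # Each passing page "inflates the balloon" -- a bigger budget next pass.
        budget += newly_passed * budget_per_pass
    return passes, len(indexed)

print(reassess([True] * 40))   # (3, 40)  -- small site, fully re-assessed fast
print(reassess([True] * 400))  # (6, 400) -- large site needs more passes
```

Run on a 40-page “site” the loop finishes in three passes; a 400-page site takes six, which matches what I’ve been seeing – smaller sites pop quickly while larger ones keep climbing for longer.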

So What Now?

My feeling is that Panda is the evolution of Google.  I say this not only because we’ve seen a couple of refinements of the algorithm already, but because, in Google’s own terms, it’s a learning algo.  That means it will refine itself as time goes on.

The good news is that, because it is such a comprehensive algorithm, it will only be run “occasionally”.  In other words, it’s not like your typical algo – it’s not always running in the background, adding and dropping pages.  The bad news is that, because it only runs occasionally, you could find your Google referrals take a massive drop “all of a sudden” and without warning, much as happened to the sites hit when Panda first rolled out.

The other good news, however, is that like any algorithm, it is based on math.  That means you can analyze what happened and adjust accordingly.  But in my experience, this analysis takes longer than it used to.

It used to be that you could do your own online research, find some common threads among the various webmaster forums, and start to piece together an overview of what happened.  This isn’t the case here.

That’s because, as I said above, Panda appears to be site specific – what one webmaster sees and does may not help other site owners.  What I did may not necessarily help others.  All I know for sure is that it is working (so far) for these sites.  Until the next update, that is…
