Making Sense Of Buzz
There can be no doubt that social marketing has been the hot topic for the last few years, but many marketers still have reservations about seriously investing in the space. Whilst many of the reasons for such reticence can be quickly brushed off (it’s just for kids, it’s a fad, etc…) some deserve more attention. One such reason, which continually crops up, is the issue of measuring the effectiveness of social, and understanding how to make real use of it. There have been some great attempts to overcome such reservations recently, including the (UK) IAB’s measurement framework and Nielsen’s Facebook work, but still such concerns persist.
It’s for this reason that a bunch of enthusiastic people set up MeasurementCamp, an informal, open-source event, which was recently relaunched in London. I was lucky enough to be asked to speak at the first of the new MeasurementCamp series, and thought that I would share the topics discussed.
Working at a global media agency, one of the challenges we face involves not only measuring and making effective use of social media, but doing so in a way that makes sense across multiple markets. This is particularly true in the area of buzz monitoring: regional differences often make it difficult to use a single tool for all measurement activities, so it was essential that we created a functioning, yet adaptable, process for analysing the data that comes out of buzz in order to make effective use of it. This is what I’ll share with you now.
The first thing to be said is that the problem with so many buzz monitoring campaigns is that they concentrate on numbers (# of comments, tweets, posts, etc…) yet miss out a crucial component: analysis. Numbers are essentially meaningless without context, and context is what this process aims to provide. It does this by breaking analysis down into five areas: noise, sentiment, topics, where & who.
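To make the five-area breakdown concrete, here is a minimal sketch of how a set of buzz mentions might be tallied along each dimension. The field names, sample mentions, venues and authors are all hypothetical illustrations, not the output of any particular monitoring tool:

```python
# Hypothetical sketch: slicing buzz mentions into the five analysis areas.
# All data and field names below are illustrative, not from a real tool.
from collections import Counter

mentions = [
    {"text": "Love the new model", "sentiment": "positive",
     "topic": "product", "venue": "niche forum", "author": "fan123"},
    {"text": "Smells awful", "sentiment": "negative",
     "topic": "product faults", "venue": "major blog", "author": "journoA"},
    {"text": "Great sponsorship deal", "sentiment": "positive",
     "topic": "sporting associations", "venue": "twitter", "author": "fan123"},
]

noise = len(mentions)                                  # noise: raw volume
sentiment = Counter(m["sentiment"] for m in mentions)  # sentiment: pos/neg split
topics = Counter(m["topic"] for m in mentions)         # topics: conversation themes
where = Counter(m["venue"] for m in mentions)          # where: niche vs mainstream
who = Counter(m["author"] for m in mentions)           # who: repeat voices

print(noise, sentiment, topics, where, who)
```

The point of the sketch is simply that the same underlying mentions yield five different views; the real work, as the sections below describe, lies in layering them (for example, sentiment on top of topics) rather than reporting the raw volume alone.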
Noise

This is the absolute measurement of the number of conversations about a brand and its competition. Unfortunately, this is where much buzz analysis stops, but it should really just be the jumping-off point. It provides a benchmark against which to measure future activity. As time goes on, it also becomes a KPI against which other media activities can be judged. For example, we know from data and experience that TV should drive buzz, which should drive search volumes, which should drive site traffic & conversions. If the buzz doesn’t come, then you know you need to go back and look at the whole communications strategy.
Sentiment

This is probably the most controversial of all the KPIs that buzz monitoring tools measure, as it is often hard for computerised algorithms to judge true meaning. We use Cymfony as our preferred global monitoring tool, which mixes human & automated analysis, on top of which, we feel, it’s essential to add another layer of contextual analysis (by native speakers if necessary)*. Once this has been done, sentiment can provide data on which brand health can be judged. It can also, when layered on top of topics, provide early insight into potential reputation issues. It’s the canary in the mine for Toyota moments.
Topics

Buzz is, by its very nature, made up of thousands, if not millions, of individual conversations. In order to make sense of these it’s essential to categorise them so that themes can be spotted. We do this by grouping conversations into topics, so that we can quickly see which areas of our brand’s universe people are discussing. Is it corporate issues? Celebrity/sporting associations? Product faults? Retail experience? By grouping these we can judge the effectiveness of above-the-line activity: if we keep telling people our brand makes them smell like daisies, and all they talk about is how it stinks to high heaven, there’s probably a problem with the marketing (and the product). It also provides invaluable data to feed back to search teams.
Where

When judging buzz, we don’t just need to know what people are saying, but also where they are saying it. If all of your conversations are taking place in tiny niche forums, then any positive coverage may not do you much good with a wider audience. Or, vice versa: whilst your brand may not attract much negative coverage, if that coverage consistently occurs on major media platforms, you might have an issue. By assessing whether your messages are moving out of niche, aficionado spaces, and in what form, you can shape your sponsorship decisions accordingly (in order to tie your brand to themes & topics with particular associations). Likewise, this can also shape your whole creative development process.
Who

Tied into where is who. Essentially, this allows you to judge which individuals and publications are covering you, and whether their doing so is having any impact (in terms of engagement, tweetbacks, etc.). By judging this on reach and engagement, we can often discover that a greater volume of conversations is misleading: if those conversations are taking place between individuals without influence, a competitor brand with favourable coverage from one mainstream journalist might actually be winning the battle. This data can be shared with both PR (in terms of building key relationships) and media planning (whereby ads can be targeted at spaces where these individuals congregate).
Whilst different buzz tools measure different things in different ways, we’ve found that this process allows us to pull both global & local insights out of such data. And only by doing so can we ever hope to show that noise can often be worth its ‘weight’ in gold.
*Forrester have just released an updated assessment of the major buzz monitoring tools, though it should be mentioned that it is very Anglo-Saxon in its focus.
Opinions expressed in this article are those of the guest author and not necessarily Search Engine Land. Staff authors are listed here.