What are the standards for reporting blog surveys?

Summary: Why the articles published by The Guardian and The Times on the Blog Relations PR Survey are inaccurate (IMO); what the journalistic standards for reporting a survey are; a response to the comments of Hugh Fraser, co-founder of Blog Relations; why it is important to discuss the limitations of blog surveys and how they are reported.

More: Blog Relations – a “content consultancy based in London” – published the results of its PR Survey on September 26. On September 27 I wrote about the fact that The Guardian (free registration required), reporting on the survey, missed two things:

  1. the survey is based on a nonprobabilistic sample, so extrapolating the results to PR pros, other than the ones who took the survey, is a risky business (to say the least)
  2. the number of people who took the survey was very small – only 50.

The result, in my opinion, was a misleading article: the percentages reported did not apply to PR pros in general, but only to those interviewed.
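To put that “only 50” in perspective, here’s a back-of-the-envelope calculation of my own (not part of the survey or of the articles): even if those 50 respondents had been a true random sample of PR pros – which they were not – any reported percentage would carry a margin of error of roughly 14 points at the 95% confidence level.

```python
import math

# Worst-case 95% margin of error for a simple random sample.
# p = 0.5 maximizes the standard error of an estimated proportion.
n = 50   # respondents in the Blog Relations survey
p = 0.5
margin = 1.96 * math.sqrt(p * (1 - p) / n)
print(f"95% margin of error for n={n}: +/-{margin:.1%}")
# -> +/-13.9% -- and that's the *best* case; a self-selected
# online sample doesn't even earn this much precision.
```

So a headline figure like “nearly 60 per cent” could plausibly sit anywhere between the mid-40s and the low 70s – and that’s under the charitable assumption of random sampling.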

Of course, that’s not the fault of the survey’s authors. They never intended to use a representative sample of PR pros; moreover, they stated the number of respondents clearly when they published the results, and any journalist reading them would conclude that the survey is not based on random sampling, which is the norm for public opinion polls.

So, I said,

Kudos to Blog Relations for publishing the survey.

Kudos to the Guardian for writing about weblogs as a business communications tool; I hope next time they’re going to be more accurate about reporting stats.

On September 29 Hugh Fraser, one of the two founders of Blog Relations, wrote about how rapidly the word about the survey spread throughout the blogosphere and the mainstream media. He noted, among other things:

“By the time I had my breakfast, The Media Guardian was on the phone. I had a chat with their reporter, Dominic Timms, and he turned around a news feature article with *accurate* quotes and information by lunchtime. (emphasis added)

[…] At some stage, the newspaper with the longest history in the world, The Times had picked up the story. They didn’t mention the source of the survey, which was a bit odd, but it was a nice surprise all the same.”

Here’s what Holden Frith wrote in The Times Online about the survey:

“Businesses have been slow to respond to the threats posed by weblogs and equally slow to capitalise on the opportunities they present, according to the results of a survey of public relations consultants released today.

Nearly 60 per cent of respondents said that companies have not yet woken up to the risks, and 64 per cent said that a disgruntled employee or customer could cause significant damage to a firm’s reputation by posting damaging remarks on blogs – the online message boards and diaries which have become so popular in the past year.

A number of companies, including Ryanair and Land Rover, have been the subject of sustained, negative blogging campaigns, which have attracted the attention of the mainstream media.

It’s not all bad news, though. More than 80 per cent of respondents thought that either “quite a few” or “many” companies could benefit from the trend for blogging. They suggested that blogs provide companies with alternative means of communicating with customers and learning about their requirements.”

Again, no mention that it’s not a survey based on probabilistic sampling, and no word about the small number of respondents.

Sure, both journalists – from The Guardian and The Times – were careful to write about the “60% of PR executives interviewed” or the “80 per cent of respondents”, but this doesn’t put the numbers in the right perspective: there were only 50 respondents, and they were self-selected. This is what is missing from the two articles.

Again – no reproach for the survey’s authors; they were never even contacted about the numbers included in The Times. As for how “accurate” the information included in The Guardian was, we’ll address that later.

At this point, I started to ask myself whether I have a good understanding of how journalists are supposed to report polls and surveys. I couldn’t find the guidelines for The Guardian or The Times, but I found other materials that are pretty clear about what the standards are (I’m going to quote only the paragraphs relevant to this discussion):

(PDF) The AP stylebook: polls and surveys (Updated April 2002):

Stories based on public opinion polls must include the basic information for an intelligent evaluation of the results. Such stories must be carefully worded to avoid exaggerating the meaning of poll results. Information that should be in every story based on a poll includes the answers to these questions:

2. How many people were interviewed? How were they selected? (Only a poll based on a scientific, random sample of a population – in which every member of the population has a known probability of inclusion – can be used as a reliable and accurate measure of that population’s opinions. Polls based on submissions to Web sites or calls to 900 numbers may be good entertainment but have no validity. They should be avoided because the opinions come from people who select themselves to participate. If such unscientific pseudo-polls are reported for entertainment value, they must never be portrayed as accurately reflecting public opinion and their failings must be highlighted.)

3. Who was interviewed? (A valid poll reflects only the opinions of the population that was sampled. A poll of business executives can only represent the views of business executives, not of all adults. Surveys conducted via the Internet — even if attempted in a random manner, not based on self-selection — face special sampling difficulties that limit how the results may be generalized, even to the population of Internet users. …)

6. What are the sampling error margins for the poll and for subgroups mentioned in the story?

BBC Editorial Guidelines – Section 10 – Politics & Public Policy:

Reporting opinion polls:

When we report polls which do not reveal voting intentions we should always give the name of the polling organisation, the sample size, the nature of the sample and as much information about the margin of error and fieldwork dates as feasible.

Surveys:

We must conduct surveys, such as those of small specific groups like MPs or health authorities, with care and must never report them as polls.

We must not mislead our audience about the status of the information. The remit of a survey should not be translated into percentages but reported in straight numbers.

OK, we’re not talking about a political opinion poll, but it’s still a poll; it’s not MPs, but we have a small number of respondents; and it’s not a BBC or AP article – but it’s still journalism.

Furthermore, it’s an interesting situation: the founders of Blog Relations, the authors of the survey, are journalists. They found themselves having the results of their (non-journalistic) research published in the media – not accurately, I’d say. But how did they feel about it? I asked them in a comment published on their blog:

Hugh, congratulations for the Blog Relations survey’s media coverage. I was wondering — are you OK with the fact that both The Guardian and The Times didn’t report about the limitations of your survey? Aren’t journalists supposed to indicate when the results of a survey can’t be projected outside the small sample of respondents? Shouldn’t they report the -small- number of respondents?

Here’s what Hugh responded:

Hi Constantin

“The Guardian wrote as follows: ‘the online survey, which the authors admit attracted responses from more blog-savvy professionals’. I think that describes the general character of our sample well (although it did include some PRs who did not know much about blogs).

I’ve already written that it was odd that The Times did not quote the source, so that anyone who was particularly interested could look it up and find out the details.

We will see bigger survey numbers from Edelman and MIT who are sampling the entire blogosphere – 17 million bloggers by some estimates. We limited our survey to one profession. I don’t know how many PRs there are in the world, but thankfully it isn’t anything like that number.

Each of the 50 PRs who took our survey was registered with us. We have verified their email addresses and know who they are. Some of them are quite well known in the profession. 22 left on-the-record comments, which we have quoted by name (and some of which The Guardian picked up). Still more left URLs, which we have listed. This means that it is a very transparent and high quality sample – not just anybody passing by a website who had a few minutes to spare.

So the short answer to your question is no, I’m not losing any sleep over it. I won’t be ringing up The Guardian to complain that they reported our survey.”

In another posting, and without linking to my comment, Hugh takes a shot at my questions:

“we endured some sniping from a couple of PRs with blogs about the fact that only 50 of their profession filled in our survey. One of them even seemed to think that we should complain to the media about the coverage we received, because they didn’t mention the “shortcomings” – which coming from a PR is an original idea.”

Well, Hugh, you got it wrong.

I didn’t ask you to call The Guardian or The Times and complain that they didn’t manage to report your survey properly. Who knows – maybe there was something in the articles about the survey’s limitations (not “shortcomings”) that was cut by the editors; maybe it wasn’t the reporters’ fault.

What I asked was how you feel about the way your findings were reported. You’re a journalist; you know what the standards are. Tell me, were they respected in this particular case? I was hoping you’d be at least a little bit worried that readers might get the wrong ideas from the articles quoting your survey.

But no — as you said, you’re “not losing any sleep over it.” Oh, well.

Now, let me try to respond to your comments.

The Guardian’s mention of your admission that the survey “attracted responses from more blog-savvy professionals” does describe the general character of the sample, but it says nothing about the limitations of your survey: a nonprobabilistic sample, a small number of respondents, no way to generalize the results.

You say that the survey had “a very transparent and high quality sample – not just anybody passing by a website who had a few minutes to spare”. That’s true; you were lucky. Still, it happened by chance, not because you had a method of selecting the participants.

“We limited our survey to one profession. I don’t know how many PRs there are in the world.”

No — you limited your survey to 50 PRs.

“We will see bigger survey numbers from Edelman and MIT who are sampling the entire blogosphere – 17 million bloggers by some estimates.”

Bigger numbers are not, by themselves, a guarantee of significant results. We’ll have to see how Edelman/Technorati frame the analysis of their survey. The initial announcement is not encouraging, but I still hope they’re going to publish a serious discussion of their methodology.
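Here’s a toy simulation of my own to illustrate the point (the 30% “true” share and the response rates are invented for illustration): when the people who hold an opinion are three times as likely to answer a self-selected survey, the estimate converges – to the wrong value – no matter how large the sample grows.

```python
import random

random.seed(42)
TRUE_SHARE = 0.30                           # invented: true share holding the opinion
RESPONSE_RATE = {True: 0.30, False: 0.10}   # invented: holders answer 3x as often

for target_n in (50, 5_000, 500_000):
    responses = []
    while len(responses) < target_n:
        holds_opinion = random.random() < TRUE_SHARE
        # Self-selection: whether someone responds depends on their opinion.
        if random.random() < RESPONSE_RATE[holds_opinion]:
            responses.append(holds_opinion)
    estimate = sum(responses) / len(responses)
    print(f"n = {target_n:>7}: estimated share = {estimate:.1%} (true share: 30.0%)")
# The estimate settles around 56%, not 30%: a larger self-selected
# sample only measures the same bias with more (false) precision.
```

A bigger sample shrinks the random noise; it does nothing about the selection bias.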

One last bit, Hugh; you say:

Thankfully, there are a lot fewer PRs than there are bloggers, so I think that puts our survey on a par with Edelman/Technorati from the point of view of statistical significance.

Well, it would be sad if that were true – if by “statistical significance” you understand that the survey’s results can be used as “a reliable and accurate measure” of bloggers’ (for your survey: PR pros’) opinions, to quote the AP stylebook.
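And the size of the underlying population is beside the point. Here’s a quick sketch of my own (the population figures are hypothetical): for a sample that is a tiny fraction of the population, the margin of error is driven almost entirely by the sample size; the finite population correction barely moves it.

```python
import math

def margin_of_error(n, population, p=0.5, z=1.96):
    """95% margin of error for a random sample of size n,
    with the finite population correction applied."""
    fpc = math.sqrt((population - n) / (population - 1))
    return z * math.sqrt(p * (1 - p) / n) * fpc

# Hypothetical population sizes, for illustration only.
print(f"50 out of 20,000 PRs:          +/-{margin_of_error(50, 20_000):.1%}")
print(f"50 out of 17,000,000 bloggers: +/-{margin_of_error(50, 17_000_000):.1%}")
# -> +/-13.8% vs +/-13.9%: sampling 50 people buys essentially the
# same uncertainty either way (random sampling assumed in both cases).
```

So “a lot fewer PRs than bloggers” does not put a 50-person sample “on a par” with anything; at these scales, the sample size is the whole story.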

Don’t get me wrong: I’m not saying that your survey is not valuable; it definitely has interesting findings. I agree that it’s difficult to do sound quantitative research, and that not all research has to be quantitative — one can get incredible insights from qualitative research. But that’s another story.

When people use terms like “survey” (a term that still carries a lot of weight), especially in the media, the assumption is that we’re talking about “scientific” surveys based on probabilistic sampling. When that’s not the case, it has to be said loud and clear.

A good number of surveys – like the one authored by Blog Relations – are used by people who are trying to convince their bosses and peers to get aboard the Cluetrain, and they need sound research. If we don’t discuss how these surveys are done, what their limitations are, how the results are reported, and what the results mean – if we’re just happy to have some new “survey” or “study” that validates what we already know – then it’s not research; it’s folklore.

9 Comments

  1. Pingback: hypocritical

  2. Constantin,
    Thank you for posting this rich and very well put blog entry – I will henceforth use it when I teach e-methodology. I just want to add that, in my opinion, this case is only one drop in the growing ocean where established mainstream media misuse survey/research/poll results. This is particularly interesting as many of them defend their existence as offering “investigation, critical analysis and the wider perspective of events”. At the same time they believe they will withstand grassroots journalism (e.g. bloggers) because the latter are not trained in “journalism”. Furthermore, even worse might be that the misuse works because the audiences do not know about/understand the problem here – another indication of the failure of the modern project and the dominant system of “education”. Mind here, this goes well beyond the poorly educated mass consumer; I’m also including, for example, middle and high-level managers, as well as politicians, with university degrees.

  3. Pingback: infOpinions? :: Public Relations

  4. Much as I enjoy Constantin’s thought, I find his dissection of BlogAlerts a tad disingenuous when the PR industry churns out reams of self-selected ‘stuff’ in the guise of ‘surveys’, in the full and often met expectation that the press will regurgitate the findings. It happens with boring regularity.

    It is technically possible to generalise from small samples, according to Deloitte, though I am uncomfortable with the logic that underpins their view. Isn’t there a wider problem? At this point in time, any sampling of bloggers is bound to be skewed. No-one has any real idea of the composition of the blogosphere – who’s doing what, how active the 80K blogs per day that are said to be added really are, etc.

    In some industries – particularly marketing and PR – there seems to be a lot of talking to each other, almost like public IM. This might suggest to some that we’re not looking at the blogosphere but at blogocircles.

    Either way, BlogAlerts are entitled to draw reasonable conclusions – which is what they appear to have done.

  5. Dennis, thank you for commenting.

    I don’t see why my “dissection” is “a tad disingenuous.” I’m not responsible for the flawed “research” produced by the PR industry, and I expect journalists to refuse to “regurgitate” that crap. If all we can do is say “well, that’s how the world works: the PR guys pretend to do research, the journalists pretend to do reporting”, then we can’t expect to get better research or better reporting. Of course, this doesn’t affect only mainstream media; most weblogs are happy to report any “study” or “survey” without questioning its validity – that is, if the results fit what “we” think about bloggers.

    You say that “it is technically possible to generalise from small samples according to Deloitte” — could you please point me to your source (book, study, article)?

    Of course, establishing a sampling frame for bloggers is one of the big problems; then we have all the other problems related to online surveys. I’m not saying there is an easy answer to these problems – but this doesn’t mean that all one can do is put out a survey, collect the results, and – hey – that’s it, take it or leave it, ’cause nobody knows how to sample bloggers.

    Of course Blog Relations – not BlogAlerts :) – are entitled to draw reasonable conclusions, if by reasonable we understand “limited to the PR pros who participated in the survey.”

    Again, my problem was not with the way they reported the results on their blog, but with their lack of reaction to (what I thought to be) inaccurate reporting of their findings.

  6. Apologies for getting the name wrong – very early here. We can argue the merits of statistical analysis until the cows come home. But if you stand back and take a practical, harsh look, you can argue that all studies, polls, samples, research – call it what you will – are ultimately tainted in some way or other. That’s a philosophical viewpoint for another discussion.

    Most of what I have seen over the many, many years that surveys have passed my desk tells me there is (almost) always an element of self-selection, especially where it relates to matters of influence. It has to, because it requires bias to support hypotheses.

    Provided everyone’s clear on that, then generally I see no harm in using the data, although one always has at least one eye on where the data might be skewed. This is especially important when using it for your own purposes.

    For instance, we are considering a campaign and are testing the likelihood that people would wish to be part of the project. We’ve had many hits to the landing page in question. We’ve seen a lot of cross-reading between linked sites – presumably to view additional insights. A few comments have been posted on all the pages we know have been read, most of them saying ‘yes’ to the idea. The comment rate is 0.75% of those that landed on the pages in question, after excluding any attempted spam hits.

    What can we assume from that? Anything? We have now added in a few topic heavyweights who we know will add stories and comment. They are passionate about the topic, are at the coal face of execution and are known by both sides of our target audience. Does that mean the people we’d like to influence will read and act upon the message? Especially as we want them to interact on what is a tough area.

    The honest answer is I don’t know. The one thing I can say is that having a large number of page reads tells me at least that number are interested in something that’s on the page. That’s more than could ever be calculated for print media. I can also say how the reading numbers for that page compare to other newly minted material on the landing site, so now I have some idea of relative significance. Do those numbers help us make a decision? In broad-brush terms, yes they do, and they continue to do so as the conversation expands.

    What I do believe, however – and here I’m sure we’re agreed – is that it is in the narrative that the greatest learning occurs, even in self-selected cases. So if our project IS successful, how will that be judged? The learning from the conversation? The vast number of hits and comments? Its Long-Tail appeal? No. It will be the actions taken by those we wish to influence.

    In the end, we can say as much as we like about what people do or do not know. We can make as many assumptions as we like. We can kid ourselves, if that is indeed what we’re doing. But it is in the action and impact that all things are ultimately measured.

    I’m not convinced that, at this point in time, blogging is anywhere near the maturity it needs to reach for it to have serious influence, except in a few highly defined areas. I make no assumptions. It’s a gut feel. It’s certainly at a point where if I hear about one more Blogging-101-for-business conference I’ll probably scream.

    Survey or not, I believe the value of this medium will come from those that find smart ways to use it and that means taking risks that for many will appear to be flying in the face of conventional wisdom. That’s what we’re doing right now in relation to our project. We have an eye on stats but we know it will need the very best content for it to have impact.

    That will come from experts at the coal face, industry people steeped in managing the issue and journalists who have built up expertise in understanding the industry concerned and who can cross the communications divide between one side and another. If it fails, we’ll have learned something. If it succeeds, we’ll have learned something different.
    I doubt we’ll think twice about the stats.

  7. Pingback: infOpinions? :: Public Relations » Blog Archive » PR Blog Surveys Abound :: As of yet, only one makes the grade
