Summary: Why the articles published by The Guardian and The Times on the Blog Relations PR Survey are inaccurate (IMO); what the journalistic standards for reporting a survey are; a response to the comments of Hugh Fraser, founder of Blog Relations; why it is important to discuss the limitations of blog surveys and how they are reported.
More: Blog Relations – a “content consultancy based in London” – published the results of its PR Survey on September 26. I wrote (on September 27) about the fact that The Guardian (free registration required), reporting on the survey, missed that:
- the survey is based on a nonprobabilistic sample, so extrapolating the results to PR pros, other than the ones who took the survey, is a risky business (to say the least)
- the number of people who took the survey was very small – only 50.
The result was that the article was misleading, in my opinion, because the percentages reported did not apply to PR pros in general, but only to those interviewed.
Of course, that’s not the fault of the survey’s authors. They never intended to use a representative sample of PR pros; moreover, they clearly stated the number of respondents when they published the results, and any journalist reading them would conclude that the survey is not based on random sampling, which is the norm for public opinion polls.
So, I said,
Kudos to Blog Relations for publishing the survey.
Kudos to the Guardian for writing about weblogs as a business communications tool; I hope next time they’re going to be more accurate about reporting stats.
On September 29 Hugh Fraser, one of the two founders of Blog Relations, wrote about how rapidly the word about the survey spread throughout the blogosphere and the mainstream media. He noted, among other things:
“By the time I had my breakfast, The Media Guardian was on the phone. I had a chat with their reporter, Dominic Timms, and he turned around a news feature article with accurate quotes and information by lunchtime. (emphasis added)
[…] At some stage, the newspaper with the longest history in the world, The Times had picked up the story. They didn’t mention the source of the survey, which was a bit odd, but it was a nice surprise all the same.”
Here’s what Holden Frith wrote in The Times Online about the survey:
“Businesses have been slow to respond to the threats posed by weblogs and equally slow to capitalise on the opportunities they present, according to the results of a survey of public relations consultants released today.
Nearly 60 per cent of respondents said that companies have not yet woken up to the risks, and 64 per cent said that a disgruntled employee or customer could cause significant damage to a firm’s reputation by posting damaging remarks on blogs – the online message boards and diaries which have become so popular in the past year.
A number of companies, including Ryanair and Land Rover, have been the subject of sustained, negative blogging campaigns, which have attracted the attention of the mainstream media.
It’s not all bad news, though. More than 80 per cent of respondents thought that either “quite a few” or “many” companies could benefit from the trend for blogging. They suggested that blogs provide companies with alternative means of communicating with customers and learning about their requirements.”
Again, no mention that it’s not a survey based on probabilistic sampling, and no word about the small number of respondents.
Sure, both journalists –from The Guardian and The Times– were careful to write about the “60% of PR executives interviewed” or the “80 per cent of respondents”, but this doesn’t put the numbers in the right perspective: there were only 50 respondents, and they were self-selected. This is what is missing from the two articles.
Again – no reproach for the survey’s authors; they were never contacted by The Times about the numbers it included. As for how “accurate” the information included in The Guardian was, we’ll address that later.
At this point, I started to ask myself if I have a good understanding of how journalists are supposed to report polls and surveys. I couldn’t find the guidelines for The Guardian or The Times, but I found other materials that are pretty clear about what the standards are (I’m going to quote only the paragraphs relevant for this discussion):
(PDF) The AP stylebook: polls and surveys (Updated April 2002):
Stories based on public opinion polls must include the basic information for an intelligent evaluation of the results. Such stories must be carefully worded to avoid exaggerating the meaning of poll results. Information that should be in every story based on a poll includes the answers to these questions:
2. How many people were interviewed? How were they selected? (Only a poll based on a scientific, random sample of a population – in which every member of the population has a known probability of inclusion – can be used as a reliable and accurate measure of that population’s opinions. Polls based on submissions to Web sites or calls to 900 numbers may be good entertainment but have no validity. They should be avoided because the opinions come from people who select themselves to participate. If such unscientific pseudo-polls are reported for entertainment value, they must never be portrayed as accurately reflecting public opinion and their failings must be highlighted.)
3. Who was interviewed? (A valid poll reflects only the opinions of the population that was sampled. A poll of business executives can only represent the views of business executives, not of all adults. Surveys conducted via the Internet — even if attempted in a random manner, not based on self-selection — face special sampling difficulties that limit how the results may be generalized, even to the population of Internet users. …)
6. What are the sampling error margins for the poll and for subgroups mentioned in the story?
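The AP’s question 6 is worth making concrete. Here’s a back-of-the-envelope sketch – my own illustration, not from the stylebook – of the sampling error that would apply even if the 50 respondents had been a true random sample:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion from a simple random sample.

    Uses the worst case p = 0.5; z = 1.96 is the 95% normal quantile.
    Note this only applies to probabilistic samples -- a self-selected
    sample has no defensible margin of error at all.
    """
    return z * math.sqrt(p * (1 - p) / n)

# Even a perfectly random sample of 50 carries a margin of about +/-14 points:
print(f"n=50:   +/-{margin_of_error(50):.1%}")    # about +/-13.9%
print(f"n=1000: +/-{margin_of_error(1000):.1%}")  # about +/-3.1%
```

In other words, with 50 respondents, a reported “60 per cent” could plausibly be anywhere from the mid-40s to the mid-70s – and that’s the best case, with random sampling, which this survey didn’t have.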
BBC Editorial Guidelines – Section 10 – Politics & Public Policy:
When we report polls which do not reveal voting intentions we should always give the name of the polling organisation, the sample size, the nature of the sample and as much information about the margin of error and fieldwork dates as feasible.
We must conduct surveys, such as those of small specific groups like MPs or health authorities with care and must never report them as polls.
We must not mislead our audience about the status of the information. The remit of a survey should not be translated into percentages but reported in straight numbers.
OK, we’re not talking about a political opinion poll, but it’s still a poll; it’s not MPs, but we have a small number of respondents; and it’s not a BBC or AP article – but it’s still journalism.
Furthermore, it’s an interesting situation: the founders of Blog Relations, the authors of the survey, are journalists. They were in the situation of having the results of their (non-journalistic) research published in the media — not accurately, I’d say. But how do they feel about it, I asked them in a comment published on their blog:
Hugh, congratulations for the Blog Relations survey’s media coverage. I was wondering — are you OK with the fact that both The Guardian and The Times didn’t report about the limitations of your survey? Aren’t journalists supposed to indicate when the results of a survey can’t be projected outside the small sample of respondents? Shouldn’t they report the -small- number of respondents?
Here’s what Hugh responded:
“The Guardian wrote as follows: “the online survey, which the authors admit attracted responses from more blog-savvy professionals”. I think that describes the general character of our sample well (although it did include some PRs who did not know much about blogs).
I’ve already written that it was odd that The Times did not quote the source, so that anyone who was particularly interested could look it up and find out the details.
We will see bigger survey numbers from Edelman and MIT who are sampling the entire blogosphere – 17 million bloggers by some estimates. We limited our survey to one profession. I don’t know how many PRs there are in the world, but thankfully it isn’t anything like that number.
Each of the 50 PRs who took our survey was registered with us. We have verified their email addresses and know who they are. Some of them are quite well known in the profession. 22 left on-the-record comments which we have quoted by name (and some of which The Guardian picked up). Still more left URLs, which we have listed. This means that it is a very transparent and high quality sample – not just anybody passing by a website who had a few minutes to spare.
So the short answer to your question is no, I’m not losing any sleep over it. I won’t be ringing up The Guardian to complain that they reported our survey.”
In another posting, and without linking to my comment, Hugh takes a shot at my questions:
“we endured some sniping from a couple of PRs with blogs about the fact that only 50 of their profession filled in our survey. One of them even seemed to think that we should complain to the media about the coverage we received, because they didn’t mention the “shortcomings” – which coming from a PR is an original idea.”
Well, Hugh, you got it wrong.
I didn’t ask you to call The Guardian or The Times and complain that they didn’t manage to report your survey properly. Who knows, maybe there was something in the articles about the survey’s limitations (not “shortcomings”), but it was cut by the editors; maybe it wasn’t the reporters’ fault.
What I asked was how you feel about the way your findings were reported. You’re a journalist, you know what the standards are – tell me, were they respected in this particular case? I was hoping you’d be at least a little bit worried that readers might get the wrong idea from the articles quoting your survey.
But no — as you said, you’re “not losing any sleep over it.” Oh, well.
Now, let me try to respond to your comments.
The Guardian’s mention of your admission that the survey “attracted responses from more blog-savvy professionals” does describe the general character of the sample, but it says nothing about the limitations of your survey: a nonprobabilistic sample, a small number of respondents, no way to generalize the results.
You say that the survey had “a very transparent and high quality sample – not just anybody passing by a website who had a few minutes to spare”. That’s true; you were lucky. Still, it happened by chance, not because you had a method of selecting the participants.
“We limited our survey to one profession. I don’t know how many PRs there are in the world.”
No — you limited your survey to 50 PRs.
“We will see bigger survey numbers from Edelman and MIT who are sampling the entire blogosphere – 17 million bloggers by some estimates.”
Bigger numbers are not, by themselves, a guarantee for significant results. We’ll have to see how Edelman/Technorati are framing the analysis of their survey. The initial announcement is not encouraging, but I still hope they’re going to publish a serious discussion of their methodology.
One last bit, Hugh; you say:
Thankfully, there are a lot fewer PRs than there are bloggers, so I think that puts our survey on a par with Edelman/Technorati from the point of view of statistical significance.
Well, it would be sad if that were true – if by “statistical significance” you mean that the survey’s results can be used as “a reliable and accurate measure” of bloggers’ (for your survey: PR pros’) opinions, to quote the AP stylebook.
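And the size of the profession doesn’t rescue the argument. Here’s a small sketch – my own calculation, using purely hypothetical population figures – of the finite population correction, which is the only way the total number of PRs or bloggers even enters the math:

```python
import math

def moe_finite(n, N, p=0.5, z=1.96):
    """95% margin of error with the finite population correction.

    N is the population size. The correction factor sqrt((N - n) / (N - 1))
    shows how little N matters once it is much larger than the sample n.
    (And again: this presumes random sampling, which neither survey had.)
    """
    fpc = math.sqrt((N - n) / (N - 1))
    return z * math.sqrt(p * (1 - p) / n) * fpc

# Hypothetical population sizes, purely for illustration:
print(f"50 PRs out of 20,000:          +/-{moe_finite(50, 20_000):.1%}")
print(f"50 bloggers out of 17,000,000: +/-{moe_finite(50, 17_000_000):.1%}")
```

Both come out at roughly ±14 points: sampling error is driven almost entirely by the sample size, not by what fraction of the population you reached. Fifty respondents is fifty respondents, whether drawn from thousands of PRs or millions of bloggers.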
Don’t get me wrong: I’m not saying that your survey is not valuable; it definitely has interesting findings. I agree that it’s difficult to do sound quantitative research, and that not all research has to be quantitative — one can get incredible insights from qualitative research. But that’s another story.
When people use terms like “survey” (a term that still carries a lot of weight), especially in the media, the assumption is that we’re talking about “scientific” surveys, based on probabilistic sampling. When that’s not the case, it has to be said loud and clear.
A good number of surveys – like the one authored by Blog Relations – are used by people who are trying to convince their bosses and peers to get aboard the Cluetrain, and they need sound research. If we’re not discussing how these surveys are done, what their limitations are, how the results are reported, and what the results mean, and we’re just happy to have some new “survey” or “study” that validates what we already know, then it’s not research – it’s folklore.