Planet Qual should really keep an eye open for the amount of so-called qualitative analysis happening on the dark side. Sorry, that should read the quantitative side. And it was the internet wot done it.
Now that much survey work has moved online, the quantitative brethren are rejoicing that all those pesky data capture costs have been eliminated. And the focus is switching to analysis. Not only of pre-coded questions, but of the vast quantities of open-ended questions to which respondents are happy to type hundreds of words in response.
Are these analysts making a rod for their own backs? Of course, but the great thing about quant is that they really understand sampling. So it isn't that difficult to take samples of those open-ended responses and start to generate findings that look a lot more qualitative and a lot less like counting beans.
Now, quallies can huff and puff all they want about issues of respondent identity, and the loss of information in moving from face-to-face to a keyboard-generated format. From the clients' perspective, however, this added depth comes with all the confidence of big numbers from a survey budget that they have already paid for. The cost of qualitative data via the Internet is in freefall. It is going to become virtually free.
Mixing numbers and words
For me, the really interesting aspect is when quantitative agencies begin to apply mathematical techniques to qualitative analysis. They'll have to. Because the medium that allows them to collect screeds of text also allows them to capture images and audio sent to them by the widely touted Research 2.0 audiences. Not to mention webcam streaming and more complex multimedia objects, such as respondents' own blogs and websites.
Sampling can only take you so far. So you can expect keyword searching to show which words and phrases are most frequent within the primary data, and which occur close to other words and phrases. Soon you may be able to check the frequency of a given phrase's occurrences against a bank of research in the same marketplace or region - much as classical scholars can conduct a lexical study of a given word in a particular text in seconds, by comparing the use of that word in every other known document in the ancient world.
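To make the idea concrete, here is a minimal sketch of that sort of frequency and co-occurrence counting, assuming the open-ended answers have already been exported as plain text. The example responses and stopword list are invented for illustration.

```python
# A minimal sketch of frequency and co-occurrence counting over open-ended
# responses. The `responses` list stands in for whatever export your
# survey platform actually produces.
import re
from collections import Counter
from itertools import combinations

responses = [
    "The packaging feels premium but the price is too high",
    "Love the brand, hate the price",
    "Premium look, premium price",
]

STOPWORDS = {"the", "is", "but", "too", "a", "and"}

def tokens(text):
    """Lower-case word tokens with stopwords stripped."""
    return [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS]

word_counts = Counter()
pair_counts = Counter()  # which words turn up in the same response

for resp in responses:
    words = tokens(resp)
    word_counts.update(words)
    pair_counts.update(combinations(sorted(set(words)), 2))

print(word_counts.most_common(5))   # most frequent words
print(pair_counts.most_common(5))   # words that co-occur most often
```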
You can expect mechanised recognition techniques. My image editing software already has a function which automatically tags faces when loading hundreds of images. Is it beyond the bounds of possibility that mechanical recognition and classification of images will spread to logos and packaging?
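For what it's worth, the face-tagging trick is already easy to reproduce. Here is a rough sketch using OpenCV's bundled face detector, assuming a folder of respondent photos; the folder name is a placeholder, and a production tagger would use something rather more sophisticated than a Haar cascade.

```python
# A sketch of batch face tagging with OpenCV's bundled Haar cascade -
# roughly what consumer photo software is doing behind the scenes.
import os
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def tag_faces(folder):
    """Return {filename: number_of_faces_detected} for every image in folder."""
    tags = {}
    for name in os.listdir(folder):
        img = cv2.imread(os.path.join(folder, name))
        if img is None:
            continue  # skip files that aren't readable images
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        tags[name] = len(faces)
    return tags

print(tag_faces("respondent_photos"))  # hypothetical folder of group photos
```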
This is going to be a real threat. Look at the challenge that qualitative research has faced in recent years from a similar expansion of types and volume of data. Bricolage is a marvellous word. It enables you to control the datasets you choose to analyse, but one of its hidden aspects is the requirement to manage how much data you generate in the first place.
Ask every respondent in four groups to run off a roll of (digital) film and bring the photos along, and you'll generate nearly a thousand images: four groups of eight or so respondents, at thirty-odd frames a roll, is getting on for a thousand. I bet most of those don't get included in the debrief presentation.
Head-sized research
We haven't taken anywhere near seriously enough the constraint that so much qualitative research is head-sized: it can only be carried in the heads of one, or at most two, researchers. What can't be remembered and arranged within a head gets lost, even with the most sophisticated analysis grids in the world.
So we're going to have to make a choice: we either stick to head-sized research or engage in a little expansion activity of our own. The safer response is to begin to use quantitative techniques to extend the capacity and effectiveness of our analysis.
We have to start to measure probable occurrences, to relearn how to identify outliers and, maybe, to do a little judicious sampling ourselves. We already are - we're just not admitting to it.
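Here is a sketch of what that judicious sampling and outlier-spotting might look like in practice, assuming the open-ended responses sit in a simple list; the sample size and cut-off are illustrative, not recommendations.

```python
# Judicious sampling plus crude outlier flagging on open-ended responses,
# using only the standard library. Thresholds are purely illustrative.
import random
import statistics

def sample_and_flag(responses, sample_size=50, z_cutoff=2.0):
    """Draw a random sample to read closely; flag unusually long or short answers."""
    sample = random.sample(responses, min(sample_size, len(responses)))
    lengths = [len(r.split()) for r in sample]
    mean = statistics.mean(lengths)
    stdev = statistics.pstdev(lengths)
    outliers = [
        r for r, n in zip(sample, lengths)
        if stdev and abs(n - mean) / stdev > z_cutoff
    ]
    return sample, outliers

# e.g. sample, odd_ones = sample_and_flag(all_open_ended_answers)
```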
Given time, this ought to allow us to work with larger datasets. And it may even give us the confidence to start to do qualitative-style analysis on quantitative data - to use the numbers creatively and impressionistically to begin to fill gaps. It really isn't all bad news. But if we don't adapt, qualitative research risks retreating into a specialist niche, or becoming a haven for small clients without the foresight to build their own online panels - something increasing numbers of our clients are undoubtedly doing.
I've just analysed a client's website which is almost devoid of brand-related messaging. A short trawl of its distributors' websites shows that the vast majority of messages about this particular brand are coming from them. Eight distributors have paid for a Google listing featuring the brand name as a keyword. The number of other sites also stocking the brand runs into the hundreds. I have no choice but to use some kind of extended framework to analyse this. Yet it will be a qualitative solution.
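The extended framework need not be elaborate. Below is a rough sketch of the counting scaffolding I have in mind, assuming a hand-compiled list of distributor sites; the URLs and brand name are placeholders, not the client's real details.

```python
# A rough tally of brand mentions across distributor sites - the sort of
# scaffolding that lets a qualitative read of the messaging start from a count.
import re
import requests

BRAND = "examplebrand"          # placeholder, not the real brand
DISTRIBUTOR_SITES = [
    "https://distributor-one.example.com",
    "https://distributor-two.example.com",
]

def brand_mentions(url):
    """Count case-insensitive occurrences of the brand name on one page."""
    try:
        html = requests.get(url, timeout=10).text
    except requests.RequestException:
        return 0  # unreachable site counts as zero mentions
    return len(re.findall(BRAND, html, flags=re.IGNORECASE))

for url in DISTRIBUTOR_SITES:
    print(url, brand_mentions(url))
```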
Fuzzy logic
I'm also about to put together a discussion guide for a group discussion architecture based on Bayesian decision theory, named after the Rev Thomas Bayes, the 18th-century mathematician. He set out to prove divine providence, but was appalled to find he had invented what we might now loosely call fuzzy logic - moving towards certainty using partial beliefs. It's a quantitative technique, which I'll be using qualitatively.
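For the curious, here is a toy illustration of the kind of updating involved - one application of Bayes' rule per piece of evidence. The prior and likelihoods are invented numbers, nothing to do with the actual discussion guide.

```python
# Toy Bayesian updating: start with a partial belief in a hypothesis and
# nudge it towards (or away from) certainty as each piece of evidence arrives.
# All numbers here are invented for illustration.

def update(prior, likelihood_if_true, likelihood_if_false):
    """One application of Bayes' rule: P(hypothesis | evidence)."""
    numerator = likelihood_if_true * prior
    return numerator / (numerator + likelihood_if_false * (1 - prior))

belief = 0.5  # start agnostic: "respondents will accept the new positioning"
evidence = [
    (0.8, 0.3),  # a supportive comment: likely if the hypothesis is true
    (0.7, 0.4),
    (0.2, 0.6),  # a sceptical comment pulls the belief back down
]

for like_true, like_false in evidence:
    belief = update(belief, like_true, like_false)
    print(round(belief, 3))
```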
Keep your ears open as the qualitative and quantitative tectonic plates grind against each other. They're on the move again. But I'm not ready to issue earthquake warnings. Not yet.