Ever had that déjà vu moment in a group discussion, when you look at one of your research participants and think, “hang on a minute, I’ve seen you before?” Yes? Then you’re not alone. As you might expect from a qualitative researcher, my question to you is: “How does that make you feel?” Now, be honest. Perhaps you feel reassured that they’ll know the ropes and be ‘easy’ to moderate. Or maybe it concerns you that they are too ‘professional’ to be considered truly representative?

The issue is not straightforward, which is why research technology company Liveminds recently commissioned SketchBook Consulting to survey 100 qualitative researchers for its Research on Recruitment 2017 study. The results are somewhat worrying. First, the study found that most (58%) qualitative researchers consider ‘professional participants’ a problem (with a further 25% unsure). Some extreme examples were shared (like the bald man in the shampoo group), which raises the question of how often participants really are who they say they are.

Despite the promises of well-intentioned recruiters to screen out those honest enough to admit they have done research in the last six months, more than half of the researchers said they had seen the same participants across different studies in the last year. What are the odds? That implies too few people are doing too much research. And at a deeper level, it raises the question of whether the traditional approach, via recruiters’ relatively small databases, is still the best way to find the right people in the digitally connected era of social media.

The biggest problem with professional participants is their overfamiliarity with the research process (76% agree this is an issue). Many also believe they are more likely to lie in groups (64%) and to the recruiter (73%). This problem could be self-perpetuating: someone who is very keen to take part in research will be more likely, and more able, to say whatever is needed to qualify. But the issue of professional participants is also about the impact that prior experience has on subsequent behaviour. One researcher said a participant even interrupted them mid-flow to ask, “shouldn’t we do a personification exercise?”, a suggestion that went down badly with both researcher and client.

The way repeat participants have been moderated in the past, and their familiarity with the types of questions being asked, can influence what they say in subsequent groups. I spoke to behavioural psychologist Patrick Fagan, who told me: “We are social animals who learn implicitly without knowing it. Whether your answers were well received in a previous conversation will influence how you answer in future ones. If you are familiar with a pattern of questions your brain tackles them differently.” In the case of creative or strategic development, if we have lots of experienced, ad-literate participants expertly decoding the work we show them, does that compromise our conclusions about how that advertising would work ‘in the real world’?

How can we tackle the issue of professional participants? Currently, the self-imposed best-practice rule is to ask potential participants when they last took part in research. If the answer is within the last six months, they should, in theory, be rejected. Most researchers seem to define professional participants by that frequency of participation. Is ‘the last six months’ too frequent, though? And how can we be sure the rule is really being enforced, when there is so much anecdotal (and now quantitative) evidence of the same participants appearing on multiple recruiter databases? Their presence is undetectable, because data protection rules prevent any cross-referencing to see whether the same people exist across multiple databases.

If you could reliably reduce the risk of professional participants, would you want to? On several projects I’ve been using Behavioural Recruitment (through Liveminds), which addresses the issue by using Facebook’s unparalleled data on what people have done, rather than what they say they’ve done. Potential participants are only invited to begin the recruitment process if they have demonstrated behaviour, interests and demographics that match the required specification for each project. The greater reach afforded by access to Facebook’s 2.2 billion users around the world also means that participants are typically fresh to research, so you get the views of consumers new to research rather than conditioned participants.

When using Behavioural Recruitment, we’ve been consistently impressed by the quality of participants. Liveminds (which also offers online qualitative research software) compared the word count in online qual projects recruited using Behavioural Recruitment against those recruited using traditional methods. It found that projects recruited by Behavioural Recruitment generated 47% more words on average.

For a low-cost carrier client, it helped us find potential customers who lived near the relevant airports, via geo-targeted Facebook ads. For The Box Plus Network (the UK’s biggest broadcaster of music video), we found 16-24-year-olds passionate about its niche channels (like 4Music and Kerrang), based on Facebook’s records of the videos they’d viewed and the music channels they engaged with. Similarly, for Sony, we found people in Germany and Poland who were passionate about design media, based on the articles and videos they had watched.

It’s 2018. If authenticity is what matters most in recruitment, surely ‘fresh is best’? It is time for recruitment to take advantage of ad targeting via social platforms to find fresher, more representative qualitative research participants.