The Association for Qualitative Research
The Hub of Qualitative Thinking

Biometrics in the spotlight

What does Facial Recognition, which appears to be turning into something of a mainstream application for any forward-looking researcher, mean for qual?

Rightly or wrongly, Facial Recognition has been touted in the media as a potential replacement for the old-fashioned focus group. So it feels appropriate that, as qualitative practitioners, we have a good understanding of what it can and cannot achieve, and what it means for qualitative research in the future.

Mapping facial codes

Realeyes is a rapidly expanding company based in central London, set up by Oxford computer science graduates who claim to have taken biometric intelligence software to the next level. They have digitally mapped, to a greater degree of sensitivity than ever before, six of Ekman's universally recognised facial codes: sadness, surprise, anger, disgust, fear and happiness. As soon as the viewer has watched a "video ad" being tested, the programme produces a colour-coded graph showing where the different emotions peak and fall.
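To make the output concrete, here is a minimal sketch of what such a per-emotion trace might look like in code. This is purely illustrative: Realeyes' actual pipeline is proprietary, and the emotion labels, function names and scores below are invented for the example.

```python
# Hypothetical sketch: given a per-second score trace for each of Ekman's six
# emotions, find the moment where each emotion peaks. Not Realeyes' actual code.

EMOTIONS = ["sadness", "surprise", "anger", "disgust", "fear", "happiness"]

def emotion_peaks(trace):
    """trace: dict mapping an emotion name to a list of scores,
    one per second of the ad. Returns, for each emotion, the
    second at which its score is highest."""
    return {emotion: max(range(len(scores)), key=scores.__getitem__)
            for emotion, scores in trace.items()}

# Invented example: a 5-second clip where happiness builds to the end.
trace = {
    "happiness": [0.1, 0.2, 0.3, 0.5, 0.9],
    "surprise":  [0.4, 0.6, 0.2, 0.1, 0.1],
}
print(emotion_peaks(trace))  # {'happiness': 4, 'surprise': 1}
```

Plotted over the ad's timeline, traces like these are what produce the colour-coded graph of emotional peaks and falls described above.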

For certain types of campaign, it is clear how this could be very useful. When researching something humorous, for example, it is important to check that it doesn't score too highly on "surprise" (which can also be read as confusion) or "anger" when it hits the punchline.

It is also very helpful in ensuring the communications have the right impact at the right moment in the ad. Even if positive emotions dip at certain points, an emotional lift at the finish could still mean the ad has achieved the desired effect, and the all-important likelihood of sharing or watching again will be increased.

Unsuspected sensitivity

Most scepticism centres on how accurate facial recognition really is. Obviously, when passively watching a screen we very rarely make exaggerated facial expressions, and only very occasionally do we find something so amusing that we actually laugh out loud. Without going too deeply into the science, the software is more sensitive than you might expect.

It can register subtle movements that could easily be missed by the human eye, especially one not trained to watch and listen carefully. Nor is it unduly sensitive to light and other external environmental factors that you might expect to make it less effective.

Nonetheless, it cannot give accurate results from a sample of just a few viewers; it is not that sensitive or reliable, and a sample of at least 50 is required for any sort of robust reading.
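That sample-size caveat can be sketched in a few lines: individual viewers' traces are noisy, so an aggregate reading is only worth reporting once enough viewers have been collected. Again this is a hypothetical illustration, assuming the 50-viewer threshold quoted above; the function and variable names are invented.

```python
# Hypothetical sketch: average per-second emotion scores across viewers,
# refusing to report a reading from too small a sample.

MIN_SAMPLE = 50  # the minimum quoted above for a robust reading

def mean_trace(viewer_traces):
    """viewer_traces: a list of equal-length score lists, one per viewer.
    Returns the per-second mean across viewers, or None if the
    sample is below the robustness threshold."""
    if len(viewer_traces) < MIN_SAMPLE:
        return None  # too few viewers: a single face tells you little
    length = len(viewer_traces[0])
    return [sum(t[i] for t in viewer_traces) / len(viewer_traces)
            for i in range(length)]
```

The point of the guard is the article's own: a handful of faces is unreliable, but at panel scale the noise averages out.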

Achieving those numbers is not a problem, however, and can even be seen as an advantage of the tool, since it offers the scalability and sample sizes that are simply unachievable qualitatively. All that is required are large enough panels of participants with webcams and a decent Internet connection: basically the majority, if not all, of the countries where we regularly conduct research. The technology also works on smartphones and tablets, making it very accessible to use.

Scalability doesn't mean longer turnaround times, either: the data can be watched as it is collated with only a 15-minute delay, and reports of final results can be available within 24 hours of collection. To make this sound even more appealing, it is also very affordable… Just think: for the cost of an additional focus group (per market), it is possible to collate the responses of over 200 participants.

No Holy Grail

However, unsurprisingly, this isn't the Holy Grail of all our research needs, and it obviously has its limitations. Results can be matched back to panel profile data such as gender, age, socio-economic class and location, but importantly this can't provide the entire context we need to judge a campaign accurately. It can't tell us the viewers' relationship with the sector or the brand in question, and we still need to gain an understanding of what rational messages viewers take away from the ad.

We also have to remember that, good as it is, it in no way responds to all the subtleties of human behaviour and body language, of which the face is only one part. Inevitably, it also works best on finished ads rather than a quickly mocked-up animatic.

Overall, this clearly has most potential to enrich quantitative results. As an addition to an ad tracker or a link test, it can add another layer of richness, along with some sense of the emotional impact that before was arguably only accessible qualitatively. As a replacement for in-depth qualitative analysis, however, it is far too crude.

That said, it isn't something we should ignore, and it can also add weight to our qualitative analysis. How often have we sat in research groups and listened to respondents accurately play back the communication objectives while looking decidedly unengaged?

We all know that, back in the real world, where they won't be paid £50 plus free beer and crisps to listen attentively, they will already be fast-forwarding or making that all-important cup of tea rather than digesting our brand messages.

Powerful tool

On the other hand, every now and again a really entertaining and creative idea comes along that leaves people animated and actively discussing it, yet could be too controversial for a brand to progress with, and may well not pass a standard link test.

In these situations, how much more powerful would a tool like this be than a few short films of viewers watching and responding to the ad and a couple of quotes? In the real world, sadly, we know that very few people are expressive enough to make the strong visual impact that a positive facial recognition result could demonstrate.

Facial recognition alone is not a game changer, but as an addition to what we currently do well, it is something that, for certain campaigns, could help ensure that really good advertising doesn't get lost somewhere between creation and the boardroom.


Helen Rider
Copyright © Association for Qualitative Research, 2013