

AI-Based Fault Tolerant Qualitative Research

May 17, 2023 by Rich Stimbra

By David M. Schneer, Ph.D./CEO

4-Minute Read

Imagine if you could conduct an interview and observe when the respondent’s words do not match their body language.

Now, imagine that, at the end of the interview, you could feed the video into an AI-based software program that would also highlight verbal-physical inconsistencies.

What would that mean?

It would be like the notion of fault-tolerance in the computing sector. Webster defines the term as “relating to or being a computer or program with a self-contained backup system that allows continued operation when major components fail.”[1] Or, in finance parlance, it echoes “belt and braces,” a phrase used in early 19th-century Britain to describe an ultra-conservative approach to a deal.[2]

Image Credit: Mark Sardella

Of course, the term could apply to any system, even a mechanical one. But now we think it can also apply to qualitative research.

Well, you don’t have to imagine this scenario anymore. At Merrill Research, we have conducted the first fault-tolerant qualitative research study using a highly trained nonverbal intelligence expert (me) backed up by a highly tuned AI-based analytical tool (LightBulb AI).

And so, we put on our belts and suspenders for this study.

And what did we learn?

Of course, given that our research is custom, I cannot offer specific details about the project, but this much I can tell you. I observed concrete instances where respondents’ words did not match their facial expressions, and the AI-based software was tracking right along with me, second by second. At the end of the interview, we had a complete longitudinal record of the respondent’s facial reactions, complete with verbatim transcripts. As such, we were able to pinpoint which questions elicited the most engagement, joy, and surprise, or, on the contrary, contempt, disgust, fear, and anger. Neutral facial expressions were also measured.

This is how we did it.

Respondents in this study agreed to be digitally recorded, whether virtually or in person. We conducted 18 60-minute in-depth interviews: six via Zoom video and 12 in person, six in a facility on the East Coast and six in a facility on the West Coast. These respondents were technical professionals. All video and in-person interviews were conducted by me. Professional videographers were hired to record in 4K video. In both venues, the videographers shot from the observation rooms to mitigate intrusion.

Once digitally recorded, the raw video was fed into an AI-based software program designed to detect the slightest emotion: joy (happiness), surprise, contempt, fear, anger, sadness, disgust, and neutral.

The software also derived an overall engagement measure.

While all these emotions were measured, some (fear, sadness, anger, and disgust) barely registered, if at all. This made sense, given the context of the conversation.

Would the respondents react emotionally like other consumers, or would they live up to their reputation as being linear, rational, and objective?

Spoiler alert: our research showed clearly that these individuals emoted just like the rest of us, although that did not come through in their verbal comments.

Both I and the AI-based software were looking for the seven universal human micro-expressions, as defined below.

Emotion | Definition
Joy/Happiness | As measured by a symmetrical smile with crow’s-feet engagement. Usually accompanied by surprise.
Surprise | As measured by a quick rise of the eyebrows and eyelids with the mouth agape. An indication that one has been caught off guard, either positively or negatively. Often accompanied by joy/happiness.
Contempt | As measured by an asymmetrical smile. Those showing contempt may feel negatively toward the stimulus, or contempt can be a feeling of superiority.
Disgust | As measured by a “crinkled” nose and mouth agape. Intense dislike.
Anger | As measured by eyebrows together and down with lip compression.
Fear | As measured by a quick but subtle rise of the eyebrows and eyelids.
Sadness | As measured by the eyebrows moving up and in with a pout or pressed lips.
Neutral | The absence of the above-mentioned emotions.

Next, we combined these emotions into three main categories: engagement, likability, and dislike.

Engagement comprised focus or interest and was measured on a 100-point scale. Any time engagement dipped below 80%, it was flagged for further evaluation.

Next, we combined happiness and surprise into a 100-point likability scale; the higher the score, the better. Any time likability spiked above 40%, it was flagged for further review.

We measured dislike as defined by contempt, disgust, fear, and sadness. Here, the lower the score the better, and any time dislike rose above 40%, it was flagged for further evaluation.
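The category scores and thresholds described above can be sketched as a simple flagging pass over per-moment emotion scores. This is an illustrative stand-in, not the LightBulb AI tool itself; the data shape and the use of the maximum to combine emotions into a category are assumptions.

```python
# Illustrative sketch (not the actual AI tool): combine per-moment emotion
# scores (0-100 percentages) into the three categories described above, then
# flag moments that cross the stated thresholds.

def flag_moments(frames):
    """frames: list of dicts with a 'time' key and per-emotion scores.

    Returns (time, reason) pairs worth a closer look:
      - engagement below 80
      - likability (happiness or surprise) above 40
      - dislike (contempt, disgust, fear, or sadness) above 40
    """
    flags = []
    for f in frames:
        likability = max(f.get("joy", 0), f.get("surprise", 0))
        dislike = max(f.get("contempt", 0), f.get("disgust", 0),
                      f.get("fear", 0), f.get("sadness", 0))
        if f.get("engagement", 100) < 80:
            flags.append((f["time"], "engagement dip"))
        if likability > 40:
            flags.append((f["time"], "likability spike"))
        if dislike > 40:
            flags.append((f["time"], "dislike spike"))
    return flags
```

In practice each flagged timestamp would then be reviewed alongside the verbatim transcript for that moment.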

And finally, much like me, the AI software was trained to identify contradictions between speech and expression.

This is what the output looked like (again, redacted for privacy):

So, what did we learn?

  1. For the most part, respondents exhibited a very high level of engagement throughout the interview. This made sense, given that the questions we asked were provocative.
  2. We observed a low level of dislikes (negative emotions). Again, this makes sense since our topics, while provocative, were not incendiary or controversial.
  3. On those questions where I noticed inconsistencies, the AI software corroborated my observations. We were able to conclude that indeed some of the answers we were getting were contradictory to what respondents’ emotions were showing.
  4. We matched minute-by-minute, longitudinal verbatim quotations with observed emotional states. This allowed us to pinpoint the cause of the emotion and add quotations with context.
  5. The AI-based software was far more accurate than I was. I expected this: I had to ask the questions, probe, and interpret the answers all at once. Not so for the AI software.
  6. The AI software was highly accurate in identifying contradictions.
  7. We did not see any so-called AI “hallucinations,” but we did see some aberrations. We registered false negatives caused not by software mistakes but by study design. For example, we noticed considerable drops in engagement when respondents looked away from the camera.
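The matching of minute-by-minute verbatim quotations with observed emotional states described in the list above amounts to a join on time. A rough sketch, with hypothetical data shapes, might look like this:

```python
# Hypothetical sketch: join a time-stamped transcript with a per-second
# emotion timeline so each quotation carries the dominant emotion observed
# while it was spoken.

from collections import Counter

def annotate_quotes(transcript, emotion_timeline):
    """transcript: list of (start_sec, end_sec, text) tuples.
    emotion_timeline: dict mapping second -> dominant emotion label.
    Returns (text, most_common_emotion_during_quote) pairs."""
    annotated = []
    for start, end, text in transcript:
        window = [emotion_timeline.get(s, "neutral") for s in range(start, end)]
        dominant = Counter(window).most_common(1)[0][0] if window else "neutral"
        annotated.append((text, dominant))
    return annotated
```

Pinpointing the cause of an emotion then becomes a matter of reading the quotation attached to it.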

We believe that this is the future of qualitative research—highly trained moderators backed up by AI make for mighty powerful insights with confidence.

Need help with designing AI-based qualitative research? Let Merrill Research guide you through your next research study!



[1] https://www.merriam-webster.com/dictionary/fault-tolerant

[2] https://www.phrases.org.uk/meanings/belt-and-braces.html

Filed Under: Body Language, David Schneer, Research Tagged With: body language, David Schneer, Market Research

Law & Order SVU Body Language Fact Check

April 26, 2023 by Rich Stimbra

By David M. Schneer, Ph.D./CEO

3-Minute Read

DUN-DUN!!

Oh man, once I hear that sound[1], I’m like Pavlov’s dogs running to the couch. It’s Law & Order SVU time, and I’m a rabid fan. The rich and complicated characters are experiential. Like we sometimes do in our research studies, I did a quick word association for some of my favorite characters:

ICE-T? Sarcastic.

Elliot? Skeptical.

Detective Rollins? Captivating.

Lieutenant Benson? Balanced.

Barba? Pugnacious.

Dr. Huang? Uber-Psychiatrist.

I rarely miss an episode. So, I was surprised when, the other day, I stumbled upon an episode I had not previously seen: Season 6, Episode 18, “Pure,” which originally aired on March 8, 2005.

Suddenly the plot turned to body language. I sat up. My Spidey senses were tingling.

This episode featured former SNL comedian Martin Short, who plays a psychic, Sebastian Ballentine. The psychic claims to know what happened to a missing 18-year-old girl. Stabler smells a rat and thinks Ballentine is dropping clues because he’s the culprit. But Dr. Huang explains to Elliot that the psychic is likely reading facial cues to obtain his information.

Then a picture of the major facial areas from my FACS manual[2] flashed across the screen.

That got me off the couch.

After watching this episode, I decided to fact-check the research and writing staff at Law & Order. Did they get the FACS facts right?

Let’s see.

Below are 10 verbatim lines (in quotes) taken directly from the script of this episode. The scale? True, Mostly True, Mostly False, or False.

  1. “We give away volumes with our faces.” TRUE. There are seven universal emotions that have very clear facial signs: anger, sadness, fear, surprise, disgust, contempt, and happiness. Neutral is the lack of any emotion.
  2. “There are 43 distinct facial movements.” MOSTLY TRUE, but a cursory survey of the literature shows that the number of facial muscles varies by source. The Interviewer’s Guide sent along with the FACS manual mentions a 1977 study by Young and Decarie that resulted in “42 facial gestalts,”[3] but this refers to two different ways of measuring visible facial movements: the minimal-units-of-behavior method and the list-of-possible-facial-gestalts approach. Dr. Ekman was interviewed in a 2003 New York Times article that referenced 43 facial muscles.[4] Interestingly, this article appeared two years before the episode aired, and it is conceivable that the Law & Order researchers/writers tapped into it. An article in the CBC references 43 distinct facial muscles.[5] Smithsonian Magazine mentions 42.[6] The HowStuffWorks website claims 43.[7] The Medical University of South Carolina hedges its bets and records “over 40 muscles.”[8] And finally, Atlanta Plastic Surgery weighs in with 43.[9] So, why the discrepancies? One has to do with the process of quantifying facial muscles. Explains Duquesne’s Dr. Anne Burrows, an expert in the subject area, “The problem with quantifying facial musculature is that they’re not like other muscles. They’re fairly flat, difficult to separate from surrounding connective tissue, and they all attach to one another. They are very unlike muscles of the limbs, for example.”[10] The other issue? The number of facial muscles in humans can vary (cf. Waller et al., 2008). So, what is the actual number? I’m going with 43, and the Law & Order staff did their homework on a complicated subject.
  3. “A psychologist named Paul Ekman catalogued 3,000 possible combinations…making up the entire spectrum of human emotion.” MOSTLY TRUE. Technically, he identified 10,000 facial muscle movements but only 3,000 were related to emotion.
  4. “Why [was FACS invented]? To see if someone’s lying.” MOSTLY TRUE. Yes, FACS was created to help with deception detection, but the rubric is also used in the movie industry by computer animators to create characters with real emotions. Law enforcement also uses it.
  5. “FACS breaks facial movement down into action units.” TRUE. These are alphanumeric designations such as AU-1, AU-4, etc.
  6. “AU-1 is raising the frontalis par medialis. It’s the inner eyebrow. It’s a sign of distress.” TRUE. The muscle behind the Inner Brow Raiser is the frontalis (pars medialis), which can be visible with the emotions of sadness, surprise, and fear.
  7. “FACS teaches you how to pick up on fleeting micro-expressions that most people don’t even see.” MOSTLY TRUE. FACS also helps animators make characters come to life.
  8. “Turns you into a human lie detector.” MOSTLY FALSE. Multitudes of studies collected by numerous behavioral scientists over the years have demonstrated conclusively that humans are poor lie detectors. No single facial action unit is a reliable indicator of deceit. However, the astute observer will look for a cluster of micro-expressions surrounding a stimulus, a possible indicator of deception.
  9. “It’s being taught at the FBI and CIA.” TRUE. FACS has been used by local police, the military, the TSA, the CIA, and the FBI.
  10. “The CD-ROM is sold over the Internet.” MOSTLY TRUE. Back when this episode aired, it was available via CD-ROM. Today, you can obtain the FACS manual online (www.paulekman.com).

So, how did Law & Order SVU do? I’d give them an A. They took a complicated subject and boiled it down to its essence for a quick and accurate portrayal of FACS. It is good to watch TV get it right, at least from this perspective; it also confirms my training on the subject.

We teach micro-expression detection and look for micro-expressions in our research studies. We can tell who is engaged or comfortable and who isn’t. For those who are not engaged or are uncomfortable, we can intensify our efforts to draw them out with stimuli such as refreshments, new content, or a change in topic.

Contact us today to see how we can help you or your organization become proficient at finding out what people are really thinking when they communicate with you.

Most Communication is Nonverbal. Are You Fluent?

REFERENCES

[1] https://www.today.com/popculture/tv/law-order-dun-dun-sound-story-famous-noise-rcna17526

[2] FACS Investigator’s Guide, 1978. Paul Ekman, Wallace V. Friesen, Joseph C. Hager, Salt Lake City, UT p. 3.

[3] Young, G., & Decarie, T. G. An ethology-based catalogue of facial/vocal behaviors in infancy. Educational Testing Service, Princeton, N.J., 1974.

[4] https://www.nytimes.com/2003/08/05/health/conversation-with-paul-ekman-43-facial-muscles-that-reveal-even-most-fleeting.html

[5] https://www.cbc.ca/natureofthings/features/the-seven-universal-emotions-we-wear-on-our-face

[6] https://www.smithsonianmag.com/smart-news/human-faces-might-only-express-four-basic-emotions-180949598/

[7] https://science.howstuffworks.com/life/inside-the-mind/emotions/muscles-smile.htm

[8] https://muschealth.org/medical-services/ent/fprs/facial-paralysis

[9] https://www.atlplastic.com/blog/2018/07/11/how-facial-muscles-contribute-to-facial-wrinkles/

[10] https://www.sciencedaily.com/releases/2008/06/080616205044.htm

Filed Under: Body Language, David Schneer, Research Tagged With: body language, David Schneer, Market Research

Practical Applications for Nonverbal Intelligence and the Indications of Facial Half-Framing

April 12, 2023 by David Schneer

By David M. Schneer, Ph.D./CEO

3-Minute Read

I’m often asked how you can tell when someone is engaged in the conversation. Certainly, the head tilt is one. And a smile is another. But what if you’re talking with someone and suddenly, they frame the side of their face with their thumb and forefinger without any support? What does that mean?

Well, that’s actually a good sign. Facial half-framing, as I unofficially call it, is a sign of engagement and positive sentiment.

This is what it looks like.

However, if the person rests their head in the web of their thumb and forefinger, that is a different story. That can mean tiredness, boredom, or disagreement.

We teach these indications and look for them in our research studies. We can tell who is engaged and who isn’t with body language signs like the facial half-frame. For those who are not engaged, we can intensify our efforts to draw them out with stimuli such as refreshments, new content, or a change in topic.

Contact us today to see how we can help you or your organization become proficient at finding out what people are really thinking when they communicate with you.

Most Communication is Nonverbal. Are You Fluent?

Filed Under: Body Language, David Schneer, Research Tagged With: body language, David Schneer, Market Research

Lean In (Literally)

March 28, 2023 by David Schneer

Practical Applications for Nonverbal Intelligence and the Indications of Leaning In.

By David M. Schneer, Ph.D./CEO

3-Minute Read

Anecdotally, I have heard from more than a few recruiters who tell me that some people typically lack two things during the interview process: eye contact and positive body positions.

Of course, maintaining good (direct) eye contact during the interview process is critical. But what about the body?

One of the easiest things you can do (and look for, if you are recruiting or interviewing) is leaning in. It has long been established that a person who leans in toward their subject is interested[1], typically with positive engagement. Often, leaning in can signal curiosity. It looks like this.

So, if you are interviewing someone and they are consistently leaning forward, that is a good sign and something you should look for.

And if you are a job candidate, one of the best things you can do is lean in. Literally. It tells the astute observer that you are interested and curious.

When I moderate focus groups or in-depth, in-person interviews, I am looking for the participant(s) to be leaning in. If not, I need to understand why or change my tactics for better engagement. How can you do this? One way is to introduce a stimulus of some sort. That always perks people up.

Make sure your participants are comfortable and you’ll have a better chance of engagement.

Contact us today to see how we can help you or your organization become proficient at finding out what people are really thinking when they communicate with you.

Most Communication is Nonverbal. Are You Fluent?


[1] “Forward leaning may be interpreted as a signal of interest (Coker and Burgoon, 1987).”

— The Routledge Dictionary of Nonverbal Communication by David B. Givens, John White: https://a.co/d1sMJe6

Filed Under: Body Language, David Schneer Tagged With: body language, David Schneer

Square-Jawed (or not)?

July 31, 2022 by Rich Stimbra

By David M. Schneer, Ph.D./CEO

3-Minute Read

Practical Applications for Nonverbal Intelligence and the Emotional Indicators of Sudden Jaw Movements.

Most of us are familiar with the term jaw-dropping surprise—the mouth agape with teeth showing. This behavior is a reliable indicator that someone has truly been caught off guard, has lost their way, or may even be terrified.[1]

But what about a jaw shift? What does it look like, and what does it indicate? Suppose you’re talking to someone and it suddenly looks as if they were punched in the side of the jaw: it shifts sideways and then snaps back. Sometimes people will repetitively shift their jaws from side to side as a soothing behavior designed to mitigate stress. But other times it has different implications. See below.

Former FBI profiler and body language expert Joe Navarro writes: “Jaw displacement or repetitive jaw shifting (from side to side) is an effective pacifier. This is also simply a compulsive behavior in some people, so note when and how often it occurs and look for other confirming behaviors that something is amiss. Most people do this infrequently, and thus when you do see it, it is very accurate in communicating that something is bothering them.”[2]

A Telltale Sign of Stress

Sudden jaw movements are often very visible, even via video conferencing. When you observe it, do not ignore it. At Merrill Research, we see indications of sudden jaw movements in qualitative studies when someone is struggling to grasp a new product concept, for example. And when we see this behavior, it is always an opportunity to probe and learn more. If you see this behavior, then you might follow up with a question asking the person if they have any concerns or if there is any confusion. Dig deep enough, and you will find out that there is.

Contact us today to see how we can help you or your organization become proficient at finding out what people are really thinking when they communicate with you. Most Communication is Nonverbal. Are You Fluent?


[1] The Routledge Dictionary of Nonverbal Communication, s.v. “Jaw Droop,” Routledge, 605 Third Avenue, New York, NY 10158.

[2] The Dictionary of Body Language, Joe Navarro: https://books.apple.com/us/book/the-dictionary-of-body-language/id1281489160.

Filed Under: Body Language, David Schneer, Research, The Merrill Institute Tagged With: body language

The Body Language of the Voice. Decoding the Sound of Silence

June 9, 2020 by Rich Stimbra

A Guide to Conducting Research in the Age of Covid-19: Part 8

By David M. Schneer, Ph.D./CEO

3.1 Minute Read

In our last blog, “The Body Language of Fingers: Deciphering the Discourse of Digits,” we provided 10 tips on understanding the fingers. In previous blogs, we also covered reading the body language of facial micro-expressions, the torso, the arms, the hands, and the fingers—all gestures that can be seen on video conferences.

But what if you’re on an audio-only call? After all, circumstances may prevent some people from being in front of a camera (like driving) and others abhor being in front of one. Or, what if your video suddenly drops, and you’re left with only blank screens and voices? What can you learn from a person’s voice alone? Plenty.

As a body language master and qualitative moderator, I can tell you that in-person interviewing yields significantly more information than remote methods for those who can decipher the cues, but phone and video remain powerful alternatives for quickly collecting many types of data—especially now. The voice can speak without words, and if you listen for the following, you can discern a lot:

  1. Tone[1] of voice (timbre or pitch, as in a “warm” or “soft” voice): Can indicate anger, frustration, surprise, or happiness, depending on context.
  2. Voice pitch[2] (high or low frequency of the voice): When a person’s voice becomes high-pitched, it is typically a sign of stress or, in the right context, deception.
  3. Audible sounds: Sounds such as gasps can signal fear; exhaling can signal stress. This is why former FBI agent, profiler, and body language expert Joe Navarro advises poker players to avoid wearing headsets[3]: that way, they can hear exhales or gasps, which could indicate a poor or marginal hand.
  4. Pregnant pauses: Can indicate uncertainty, discomfort, surprise, anger or deception, given the right circumstances.
  5. Uptalk: Ending sentences as though they are questions (a favorite of Valley girls) can signal a lack of confidence, uncertainty, or immaturity.
  6. Excessively loud voice: Indicates that someone is controlling, confident or agitated. By the way, people tend to tune out those with megaphone voices.
  7. Excessively soft voice: Indicates that someone may be fearful or shy.
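As a toy illustration of cue 2 above, a rise in voice pitch can be detected by estimating each audio frame’s fundamental frequency and comparing it to the speaker’s baseline. Real tooling would use a dedicated speech library; this pure-Python autocorrelation version, with an assumed spike ratio, only sketches the idea.

```python
# Toy sketch: estimate pitch per frame by autocorrelation, then flag frames
# whose pitch jumps well above the speaker's baseline (a possible stress cue).

import math

def estimate_pitch(frame, sample_rate):
    """Crude autocorrelation pitch estimate for one frame of audio samples."""
    best_lag, best_corr = 0, 0.0
    # Search lags corresponding to roughly 60-400 Hz, the speech range.
    for lag in range(sample_rate // 400, sample_rate // 60):
        corr = sum(frame[i] * frame[i + lag] for i in range(len(frame) - lag))
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    return sample_rate / best_lag if best_lag else 0.0

def flag_pitch_spikes(frames, sample_rate, ratio=1.25):
    """Flag indices of frames whose pitch exceeds the first frame's by `ratio`."""
    baseline = estimate_pitch(frames[0], sample_rate)
    return [i for i, f in enumerate(frames)
            if estimate_pitch(f, sample_rate) > baseline * ratio]
```

A flagged frame is only a prompt to listen more closely in context, not proof of stress or deception.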


Download our more in-depth comparison of the advantages and disadvantages of remote interviewing.

For additional information on COVID-19 visit the Centers for Disease Control (CDC) Coronavirus information page.

Most Communication is Nonverbal. ARE YOU FLUENT?


[1] Excerpt From: Joe Navarro. “The Dictionary of Body Language.” Apple Books. https://amzn.to/2NPrxlS 

[2] Ibid.

[3] Navarro, Joe. 200 Poker Tells. Kindle Edition. https://www.amazon.com/gp/product/B004KZPK24/

Filed Under: News Tagged With: body language, David Schneer, Market Research, voice



© 2023 Merrill Research. All Rights Reserved.