Researchers Built an ‘Online Lie Detector.’ Honestly, That Could Be a Problem - WIRED

Posted: 21 Mar 2019 04:10 AM PDT

The internet is full of lies. That maxim has become an operating assumption for any remotely skeptical person interacting anywhere online, from Facebook and Twitter to phishing-plagued inboxes to spammy comment sections to online dating and disinformation-plagued media. Now one group of researchers has suggested the first hint of a solution: They claim to have built a prototype for an "online polygraph" that uses machine learning to detect deception from text alone. But what they've actually demonstrated, according to a few machine learning academics, is the inherent danger of overblown machine learning claims.

In last month's issue of the journal Computers in Human Behavior, Florida State University and Stanford researchers proposed a system that uses automated algorithms to separate truth from lies, which they describe as the first step toward "an online polygraph system—or a prototype detection system for computer-mediated deception when face-to-face interaction is not available." They say that in a series of experiments, they were able to train a machine learning model to separate liars from truth-tellers by watching a one-on-one conversation between two people typing online, using only the content and speed of their typing—and none of the other physical cues that polygraph machines claim can sort lies from truth.

"We used a statistical modeling and machine learning approach to parse out the cues of conversations, and based on those cues we made different analyses" of whether participants were lying, says Shuyuan Ho, a professor at FSU's School of Information. "The results were amazingly promising, and that's the foundation of the online polygraph."

But when WIRED showed the study to a few academics and machine learning experts, they responded with deep skepticism. Not only does the study fail to establish the basis for any kind of reliable lie-detection algorithm, they argue, but its claims are potentially dangerous: a faulty text-based "online polygraph," they warn, could have far worse social and ethical consequences if adopted than leaving those determinations to human judgment.

"It's an eye-catching result. But when we're dealing with humans, we have to be extra careful, especially when the implications of whether someone's lying could lead to conviction, censorship, the loss of a job," says Jevin West, a professor at the Information School at the University of Washington and a noted critic of machine learning hype. "When people think the technology has these abilities, the implications are bigger than a study."

Real or Spiel

The Stanford/FSU study had 40 participants repeatedly play a game the researchers called "Real or Spiel" over Google Hangouts. In each game, a pair of participants, their real identities hidden, took turns asking and answering each other's questions in a kind of roleplaying exercise. At the start of each game, a participant was told whether they were a "sinner," who lied in response to every question, or a "saint," who always told the truth. The researchers then took the resulting textual data, including the exact timing of each response, and used a portion of it as training data for a machine learning model designed to sort sinners from saints, holding out the rest to test that model.
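To make that setup concrete, here is a minimal, hypothetical sketch of the train/test procedure in Python. The paper's actual model and feature set aren't specified in this article, so the classifier choice (logistic regression), the feature columns, and the synthetic data below are illustrative assumptions, not the authors' method.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 40  # participants, matching the study's sample size

# Toy feature matrix, one row per participant: [mean response delay (s),
# words per answer, certainty-word rate, uncertainty-word rate].
# All values here are synthetic stand-ins for real transcript features.
X = rng.normal(loc=[3.0, 12.0, 0.05, 0.05],
               scale=[1.0, 4.0, 0.02, 0.02], size=(n, 4))
y = rng.integers(0, 2, size=n)  # 1 = "sinner" (liar), 0 = "saint"

# Hold out a portion of the data for evaluation, mirroring the study's split.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

On random synthetic data like this, held-out accuracy hovers around chance; the study's reported numbers depend entirely on their real transcripts and model.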

They found that by tuning their machine learning model, they could identify deceivers with as much as 82.5 percent accuracy. Humans who looked at the same data, by contrast, performed barely better than chance, according to Ho. The algorithm spotted liars based on cues such as answering faster than truth-tellers, displaying more "negative emotions" and signs of "anxiety" in their messages, writing a greater volume of words, and using expressions of certainty like "always" and "never." Truth-tellers, by contrast, used more words of causal explanation, like "because," as well as words of uncertainty, like "perhaps" and "guess."
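Cues like these are straightforward to compute from a transcript. Below is a hypothetical sketch of that kind of feature extraction; the word lists and the per-message rate normalization are assumptions for illustration, since the article doesn't give the study's actual lexicon.

```python
import re

# Assumed cue-word lists; the study's actual lexicon isn't given here.
CERTAINTY = {"always", "never"}
UNCERTAINTY = {"perhaps", "maybe", "guess"}
CAUSAL = {"because", "since"}

def lexical_cues(message: str) -> dict:
    """Compute per-message rates of the cue words described above."""
    tokens = re.findall(r"[a-z']+", message.lower())
    total = max(len(tokens), 1)  # guard against empty messages
    return {
        "certainty_rate": sum(t in CERTAINTY for t in tokens) / total,
        "uncertainty_rate": sum(t in UNCERTAINTY for t in tokens) / total,
        "causal_rate": sum(t in CAUSAL for t in tokens) / total,
        "word_count": len(tokens),
    }

print(lexical_cues("I never lied, and I always answer quickly"))
```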

"That's very different from the way people really speak in daily life."

Kate Crawford, AI Now Institute

The algorithm's resulting ability to outperform humans' innate lie detectors might seem like a remarkable result. But the study's critics point out that it was achieved in a highly controlled, narrowly defined game—not the freewheeling world of practiced, motivated, less consistent, unpredictable liars in real-world scenarios. "This is a bad study," says Cathy O'Neil, a data science consultant and author of the 2016 book Weapons of Math Destruction. "Telling people to lie in a study is a very different setup from having someone lie about something they've been lying about for months or years. Even if they can determine who's lying in a study, that has no bearing on whether they'd be able to determine if someone was a more studied liar."
