Researchers Aim to Spot a Liar Using Online Polygraph - Insurance Journal
Posted: 19 Mar 2019 12:00 AM PDT

Can you spot a liar? It's tricky enough in face-to-face conversations, where facial expressions, gestures and tone of voice offer physical cues that add context. Spotting a liar gets even tougher in blind computer conversations. Florida State University researcher Shuyuan Ho wants to shed those blinders by creating a revolutionary online polygraph.

"The future of my research is an online polygraph that could be used many different ways," said Ho, an associate professor in the College of Communication and Information. "You could use it for online dating, Facebook, Twitter — the applications are endless. I think the future is unlimited for an online polygraph system."

Ho envisions a future where technology can identify liars and truth-tellers based on the words they write in electronic messages. Her latest research dove into the murky depths of internet deception, where trolling, identity theft and phishing for credit card numbers snag an increasing number of online users.

The research study, published in the journal Computers in Human Behavior, detailed the findings of an online game she created to measure truthful and deceptive communications between two people. Ho parsed the words in those conversations, hoping to extract context from millions of bits of data across many messages — described as language-action cues — just as people get context from physical cues that indicate whether someone is telling the truth or lying.

She says the results were startling. The experiments revealed that a person could spot lies in messages about 50 percent of the time, while a machine-learning approach could identify deception with an accuracy rate ranging from 85 to 100 percent.

Ho is excited about the potential for this research. "I want to get the world's attention on this research so we can hopefully make it into a commercial product that could be attached to all kinds of online social forums," Ho said. "This basic research offers great potential to develop an online polygraph system that helps protect our online communication."

Ho supervises the iSensor Lab on FSU's campus, where researchers conduct experiments to better understand deception in online communications. To facilitate those conversations, she created an online game designed to identify language cues that unmask deceivers and truth-tellers. The game randomly assigned players the roles of "The Saint" and "The Sinner." As sinners and saints interacted via computers, researchers in the iSensor Lab captured those conversations and used machine-learning technology to scrutinize patterns of words and writing.

Some fascinating language tendencies emerged from that analysis. The lying sinners were found to be less expressive, but they used more decorative words per message. They displayed more negative emotions and appeared more anxious when they communicated with truth-tellers. Deceivers also took less time to respond and used more words of insight, such as "think" and "know," and they tended to use more words of certainty, including "always" or "never."

Conversely, truth-tellers used more words of speculation, such as "perhaps" and "guess," and they took longer to respond to inquiries. These saints provided more reasoned ideas by using words of causation — "because" — and they expressed more reflective thinking with words like "should" and "could."

"Truth-tellers tend to say 'no' a lot. Why?" Ho asked.
"They like to provide emphasis when they explain their reasons. If you ask them, 'Is this true?' They tend to say 'no' because there is another true reason." Researchers also calculated time lags between every sentence, and even parts of a sentence, by placing time stamps on the words. That precise breakdown clearly showed how much a person paused during interactions — another language-action cue. Those pauses might have been so slight that they would not necessarily be noticeable to a person, but machine-learning technology could spot it. Ho said she hopes her research will eventually give people better protection when they're online. "I think we all have good common sense about the people we meet face to face, but how much common sense do we have with the strangers we encounter online where you can meet a lot of people very fast," Ho said. "This research is so important because it can provide another reference point offering more protection. All of society can benefit." This research was funded by the National Science Foundation, Florida Center for Cybersecurity Collaborative Seed Grant and Florida State University Council for Research and Creativity Planning Grant. Source: Florida State University Was this article valuable? Here are more articles you may enjoy. |
Researchers Built an ‘Online Lie Detector.’ Honestly, That Could Be a Problem - WIRED
Posted: 21 Mar 2019 12:00 AM PDT

The internet is full of lies. That maxim has become an operating assumption for any remotely skeptical person interacting anywhere online, from Facebook and Twitter to phishing-plagued inboxes to spammy comment sections to online dating and disinformation-plagued media. Now one group of researchers has suggested the first hint of a solution: They claim to have built a prototype for an "online polygraph" that uses machine learning to detect deception from text alone. But what they've actually demonstrated, according to a few machine learning academics, is the inherent danger of overblown machine learning claims.

In last month's issue of the journal Computers in Human Behavior, Florida State University and Stanford researchers proposed a system that uses automated algorithms to separate truths and lies, what they refer to as the first step toward "an online polygraph system—or a prototype detection system for computer-mediated deception when face-to-face interaction is not available." They say that in a series of experiments, they were able to train a machine learning model to separate liars and truth-tellers by watching a one-on-one conversation between two people typing online, while using only the content and speed of their typing—and none of the other physical clues that polygraph machines claim can sort lies from truth.

"We used a statistical modeling and machine learning approach to parse out the cues of conversations, and based on those cues we made different analyses" of whether participants were lying, says Shuyuan Ho, a professor at FSU's School of Information. "The results were amazingly promising, and that's the foundation of the online polygraph."

But when WIRED showed the study to a few academics and machine learning experts, they responded with deep skepticism. Not only does the study not necessarily serve as the basis of any kind of reliable truth-telling algorithm, it makes potentially dangerous claims: A text-based "online polygraph" that's faulty, they warn, could have far worse social and ethical implications if adopted than leaving those determinations up to human judgment.

"It's an eye-catching result. But when we're dealing with humans, we have to be extra careful, especially when the implications of whether someone's lying could lead to conviction, censorship, the loss of a job," says Jevin West, a professor at the Information School at the University of Washington and a noted critic of machine learning hype. "When people think the technology has these abilities, the implications are bigger than a study."

Real or Spiel

The Stanford/FSU study had 40 participants repeatedly play a game that the researchers called "Real or Spiel" via Google Hangouts. In the game, pairs of those individuals, with their real identities hidden, would answer questions from the other in a kind of roleplaying game. A participant would be told at the start of each game whether they were a "sinner" who lied in response to every question, or a "saint" who always told the truth. The researchers then took the resulting textual data, including the exact timing of each response, and used a portion of it as the training data for a machine learning model designed to sort sinners from saints, while using the rest of their data to test that model. They found that by tuning their machine learning model, they could identify deceivers with as much as 82.5 percent accuracy.
Humans who looked at the data, by contrast, barely performed better than guessing, according to Ho. The algorithm could spot liars based on cues such as responding faster than truth-tellers, displaying more "negative emotions," showing more signs of "anxiety" in their communications, using a greater volume of words, and using expressions of certainty like "always" and "never." Truth-tellers, meanwhile, used more words of causal explanation like "because," as well as words of uncertainty, like "perhaps" and "guess."
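To make the train-and-test procedure described above concrete, here is a minimal, hypothetical sketch of that kind of pipeline in Python with scikit-learn: fit a classifier on one portion of labeled "saint"/"sinner" examples and report accuracy on the held-out remainder. The synthetic feature matrix, the labels and the choice of logistic regression are placeholders for illustration; the actual features, model and data behind the reported 82.5 percent figure are not public in this article.

```python
# Hypothetical sketch of a split-train-evaluate pipeline like the one the
# article describes. All data here is synthetic; nothing is taken from the study.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Placeholder data: one row per participant, columns standing in for cue
# features such as certainty_rate, words_per_message, mean_response_lag.
X = rng.normal(size=(40, 6))        # 40 participants, as in the study
y = rng.integers(0, 2, size=40)     # 1 = "sinner" (liar), 0 = "saint"

# Hold out a quarter of the participants for evaluation.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = LogisticRegression().fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

With only 40 participants, a single random split like this yields a noisy accuracy estimate, which is one reason the critics quoted below question how far numbers from such a small, artificial setup can generalize.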
The algorithm's resulting ability to outperform humans' innate lie detector might seem like a remarkable result. But the study's critics point out that it was achieved in a highly controlled, narrowly defined game—not the freewheeling world of practiced, motivated, less consistent, unpredictable liars in real-world scenarios.

"This is a bad study," says Cathy O'Neil, a data science consultant and author of the 2016 book Weapons of Math Destruction. "Telling people to lie in a study is a very different setup from having someone lie about something they've been lying about for months or years. Even if they can determine who's lying in a study, that has no bearing on whether they'd be able to determine if someone was a more studied liar."

She compares the setup to telling people to be left-handed for the purposes of a study—their signatures would be very different from those of real-world left-handed people. "Most people can get pretty good at a lie if they care enough," O'Neil says. "The point is, the lab [scenario] is utterly artificial."

FSU's Ho counters that the study is merely a first step toward text-based lie detection, and that further studies would be needed before it could be applied. She points to caveats in the paper that clearly acknowledge the narrow context of its experiments. But even the claim that this could create a path toward a reliable online polygraph makes experts anxious.

Frowning Criminals, Performing Liars

Two different critics pointed to an analogous study that they say captures the fallacy of making broad claims about machine learning's abilities based on a narrow test scenario. Chinese researchers in 2016 announced that they'd created a machine learning model that could detect criminality merely by looking at someone's face. But that study was based on photos of convicted criminals that had been used as identification by police, while the non-convict photos in the same study were more likely to have been chosen by the person themselves or by their employer. The simple difference: The convicts were much less likely to be smiling. "They'd created a smile detector," the University of Washington's West says.

In the lie detection study, there's almost certainly a similarly artificial difference between the study's groups that doesn't apply in the real world, says Kate Crawford, the co-founder of the AI Now Institute at New York University. Just as the criminality study was actually detecting smiles, the lie detection study is likely carrying out "performance detection," Crawford argues. "You're looking at linguistic patterns of people playing a game, and that's very different from the way people really speak in daily life," she says.

In her interview with WIRED, FSU's Ho did acknowledge the artifice of the study. But in the same conversation, she also suggested that it could serve as a prototype for an online lie detector system that could be used in applications like online dating platforms, as an element in an intelligence agency polygraph test, or even by banks trying to assess the honesty of a person communicating with an automated chatbot. "If a bank implements it, they can very quickly know more about the person they're doing business with," she said.

Crawford sees those suggestions as, at best, a continuation of an already problematic history of polygraph tests, which have been shown for years to produce scientifically dubious results that are prone both to false positives and to being gamed by trained test takers.
Now, the FSU and Stanford researchers are reviving that faulty technology, but with even fewer data sources than a traditional polygraph test. "Sure, banks might want a really cheap way to make decisions about who to give loans to or not," Crawford says. "But do we want to be invoking this kind of problematic history based on experiments that are themselves questionable in terms of their methodology?"

The researchers may argue that their test is only a reference point, or that they're not recommending it be used for real-world decisions. But Crawford says they nonetheless don't seem to be truly weighing how a faulty lie detector could be applied—and its consequences. "They're not thinking through the full social implications," she says. "Realistically, they need a lot more attention to the negative externalities of a tool like this."