Domestic abuse: Lie-detector tests planned for offenders - BBC News
Posted: 03 Mar 2020 12:00 AM PST

Domestic violence offenders in England and Wales could face compulsory lie-detector tests when released from prison under proposed new laws. Those deemed at high risk of re-offending will be given regular polygraph tests to find out if they have breached release conditions.

The long-awaited Domestic Abuse Bill will also specify that controlling a victim's finances can count as abuse. Alleged abusers will also be banned from cross-examining victims in court.

Lie-detector tests - which work by measuring changes in heart rate, blood pressure, respiratory rate and sweat - are not 100% accurate. But the Home Office said it was already using the tests to monitor high-risk sex offenders and had found them to be 89% accurate. The government also plans to use lie-detector tests on convicted terrorists freed on licence.

If the Domestic Abuse Bill passes, a three-year pilot will be carried out on domestic abusers who are deemed at high risk of causing serious harm. If successful, the scheme will be rolled out nationwide.

Slow progress

Around 300 offenders will take a lie-detector test three months after their release and every six months after that, according to the Home Office. Those who fail the test will not be returned to prison - but they may be jailed if they refuse to take the test or attempt to "trick" it, the Home Office added. They can also be returned to prison if the tests show "their risk has escalated to a level whereby they can no longer be safely managed in the community". Information gathered from failed lie-detector tests is routinely shared with the police, who use it to carry out further investigations.

Campaigners say action to help the nearly two million victims of domestic abuse in the UK each year, two thirds of whom are women, is long overdue. The Conservatives first proposed tougher measures in their 2017 election manifesto, but legislative progress has been slow. The Domestic Abuse Bill was among several proposed laws which fell by the wayside last autumn after Boris Johnson suspended Parliament and MPs subsequently voted for an early general election. The government is now bringing back the legislation, saying MPs will be presented with an "enhanced" package of measures that will "protect victims and punish perpetrators" of this "horrendous" crime.

'Hidden victims'

There will also be a ban on perpetrators cross-examining their victims during family court proceedings and a legal duty on councils to find safe accommodation for domestic abuse victims and their children. The charity Women's Aid said this could be a "life-saving" move, but only if it was accompanied by guaranteed funding for specialist women's services - including for "marginalised" groups in society - which it estimates will cost about £173m a year.

While welcoming many of the initiatives, children's charities warned that some families with children risked "falling through the cracks in support".

"The bill risks dividing victims into 'haves and have nots'," said Barnardo's chief executive Javed Khan. "Children are the hidden victims of domestic abuse, suffering trauma that can last a lifetime.

"I'm disappointed that while the Domestic Abuse Bill may improve access to refuges, it will not help the majority of victims and children who remain in the family home."
The NSPCC's senior policy officer Emily Hilton said it was "extremely disappointing that the bill in its current form fails to protect children from the devastating impact of living with domestic abuse, leaving thousands at continued risk because the help they deserve is not in place".

'Economic abuse'

The Home Office said the UK's new domestic abuse commissioner, Nicole Jacobs, would consider what support the government can provide to children who have been affected by domestic abuse.

The legislation will also enshrine a new definition of domestic abuse in law that recognises economic abuse - when a perpetrator controls a victim's finances - as a specific type of the crime. Court protection orders banning perpetrators from contacting a victim, or requiring them to take part in alcohol or drug treatment programmes, may also be introduced.

Support for migrant domestic abuse victims will also be reviewed, while ministers will consider what more can be done to stop the so-called "rough sex" defence being used by perpetrators in court.

The majority of the measures in the Domestic Abuse Bill will apply only to England and Wales, but it will create a specific new criminal offence in Northern Ireland of controlling or coercive behaviour, already on the statute book in the rest of the UK. Certain provisions in the bill also apply to court proceedings in Northern Ireland and Scotland.
Lie detectors have always been suspect. AI has made the problem worse. - MIT Technology Review

Posted: 13 Mar 2020 12:00 AM PDT

MMU put out a press release in 2003 touting the technology as a new invention that would make the polygraph obsolete. "I was a bit shocked," Rothwell said, "because I felt it was too early."

The US government was making numerous forays into deception-detection technology in the first years after 9/11, with the Department of Homeland Security (DHS), Department of Defense (DoD), and National Science Foundation all spending millions of dollars on such research. These agencies funded the creation of a kiosk called AVATAR at the University of Arizona. AVATAR, which analyzed facial expressions, body language, and people's voices to assign subjects a "credibility score," was tested in US airports. In Israel, meanwhile, DHS helped fund a startup called WeCU ("we see you"), which sold a screening kiosk that would "trigger physiological responses among those who are concealing something," according to a 2010 article in Fast Company. (The company has since shuttered.)

Bandar began trying to commercialize the technology. Together with two of his students, Jim O'Shea and Keeley Crockett, he incorporated Silent Talker as a company and began to seek clients, including both police departments and private corporations, for its "psychological profiling" technology. Silent Talker was one of the first AI lie detectors to hit the market. According to the company, last year technology "derived from Silent Talker" was used as part of iBorderCtrl, a European Union–funded research initiative that tested the system on volunteers at borders in Greece, Hungary, and Latvia. Bandar says the company is now in talks to sell the technology to law firms, banks, and insurance companies, bringing tests into workplace interviews and fraud screenings.

Bandar and O'Shea spent years adapting the core algorithm for use in various settings. They tried marketing it to police departments in the Manchester and Liverpool metropolitan areas. "We are talking to very senior people informally," the company told UK publication The Engineer in 2003, noting that their aim was "to trial this in real interviews." A 2013 white paper O'Shea published on his website suggested that Silent Talker "could be used to protect our forces on overseas deployment from Green-on-Blue ('Insider') attacks." (The term "green-on-blue" is commonly used to refer to attacks by uniformed Afghan soldiers against their erstwhile allies.)

The team also published experimental results showing how Silent Talker could be used to detect comprehension as well as deception. In a 2012 study, the first to show the Silent Talker system used in the field, the team worked with a health-care NGO in Tanzania to record the facial expressions of 80 women as they took online courses on HIV treatment and condom use. The idea was to determine whether patients understood the treatment they would be getting—as the introduction to the study notes, "the assessment of participants' comprehension during the informed consent process still remains a critical area of concern." When the team cross-referenced the AI's guesses about whether the women understood the lectures with their scores on brief post-lecture exams, they found it was 80% accurate in predicting who would pass and who would fail. The Tanzania experiment was what led to Silent Talker's inclusion in iBorderCtrl.
In 2015, Athos Antoniades, one of the organizers of the nascent consortium, emailed O'Shea, asking if the Silent Talker team wanted to join a group of companies and police forces bidding for an EU grant. In previous years, growing vehicle traffic into the EU had overwhelmed agents at the union's border countries, and as a result the EU was offering €4.5 million ($5 million) to any institution that could "deliver more efficient and secure land border crossings ... and so contribute to the prevention of crime and terrorism." Antoniades thought Silent Talker could play a crucial part.

When the project finally announced a public pilot in October 2018, the European Commission was quick to tout the "success story" of the system's "unique approach" to deception detection in a press release, explaining that the technology "analyses the micro-gestures of travelers to figure out if the interviewee is lying." The algorithm trained in Manchester would, the press release continued, "deliver more efficient and secure land border crossings" and "contribute to the prevention of crime and terrorism."

The program's underlying algorithm, O'Shea told me, could be used in a variety of other settings—advertising, insurance claim analysis, job applicant screening, and employee assessment. His overwhelming belief in its wisdom was hard for me to share, but even as he and I spoke over the phone, Silent Talker was already screening volunteers at EU border crossings; the company had recently launched as a business in January 2019. So I decided to go to Manchester to see for myself.

Silent Talker's offices sit about a mile away from Manchester Metropolitan University, where O'Shea is now a senior lecturer. He has taken over the day-to-day development of the technology from Bandar. The company is based out of a blink-and-you'll-miss-it brick office park in a residential neighborhood, down the street from a kebab restaurant and across from a soccer pitch. Inside, Silent Talker's office is a single room with a few computers, desks with briefcases on them, and explanatory posters about the technology from the early 2000s.

When I visited the company's office in September, I sat down with O'Shea and Bandar in a conference room down the hall. O'Shea was stern but slightly rumpled, bald except for a few tufts of hair and a Van Dyke beard. He started the conversation by insisting that we not talk about the iBorderCtrl project, later calling its critics "misinformed." He spoke about the power of the system's AI framework in long, digressive tangents, occasionally quoting the computing pioneer Alan Turing or the philosopher of language John Searle. "Machines and humans both have intentionality—beliefs, desires, and intentions about objects and states of affairs in the world," he said, defending the system's reliance on an algorithm. "Therefore, complicated applications require you to give mutual weight to the ideas and intentions of both."

O'Shea demonstrated the system by having it analyze a video of a man answering questions about whether he stole $50 from a box. The program superimposed a yellow square around the man's face and two smaller squares around his eyes. As he spoke, a needle in the corner of the screen moved from green to red when he gave false answers, and back to a moderate orange when he wasn't speaking. When the interview was over, the software generated a graph plotting the probability of deception against time. In theory, this showed when he started and stopped lying.
The system can run on a traditional laptop, O'Shea says, and users pay around $10 per minute of video analyzed. O'Shea told me that the software does some preliminary local processing of the video, sends encrypted data to a server where it is further analyzed, and then sends the results back: the user sees a graph of the probability of deception overlaid across the bottom of the video.

According to O'Shea, the system monitors around 40 physical "channels" on a participant's body—everything from the speed at which one blinks to the angle of one's head. It brings to each new face a "theory" about deception that it has developed by viewing a training data set of liars and truth tellers. Measuring a subject's facial movements and posture changes many times per second, the system looks for movement patterns that match those shared by the liars in the training data. These patterns aren't as simple as eyes flicking toward the ceiling or a head tilting toward the left. They're more like patterns of patterns, multifaceted relationships between different motions, too complex for a human to track—a typical trait of machine-learning systems. The AI's job is to determine what kinds of patterns of movements can be associated with deception.

"Psychologists often say you should have some sort of model for how a system is working," O'Shea told me, "but we don't have a functioning model, and we don't need one. We let the AI figure it out." However, he also says the justification for the "channels" on the face comes from academic literature on the psychology of deception. In a 2018 paper on Silent Talker, its creators say their software "assumes that certain mental states associated with deceptive behavior will drive an interviewee's [non-verbal behavior] when deceiving." Among these behaviors are "cognitive load," or the extra mental energy it supposedly takes to lie, and "duping delight," or the pleasure an individual supposedly gets from telling a successful lie. But Ewout Meijer, a professor of psychology at Maastricht University in the Netherlands, says that the grounds for believing such behaviors are universal are unstable at best.

The idea that one can find telltale behavioral "leakages" in the face has roots in the work of Paul Ekman, an American psychologist who in the 1980s espoused a now-famous theory of "micro-expressions," or involuntary facial movements too small to control. Ekman's research made him a best-selling author and inspired the TV crime drama Lie to Me. He consulted for myriad US government agencies, including DHS and DARPA. Citing national security, he has kept research data secret. This has led to contentious debate about whether micro-expressions even carry any meaning.

Silent Talker's AI tracks all kinds of facial movement, not Ekman-specific micro-expressions. "We decomposed these high level cues into our own set of micro gestures and trained AI components to recombine them into meaningful indicative patterns," a company spokesperson wrote in an email. O'Shea says this enables the system to spot deceptive behavior even when a subject is just looking around or shifting in a chair. "A lot depends on whether you have a technological question or a psychological question," Meijer says, cautioning that O'Shea and his team may be looking to technology for answers to psychological questions about the nature of deception.
"An AI system may outperform people in detecting [facial expressions], but even if that were the case, that still doesn't tell you whether you can infer from them if somebody is deceptive … deception is a psychological construct." Not only is there no consensus about which expressions correlate with deception, Meijer adds; there is not even a consensus about whether they do. In an email, the company said that such critiques are "not relevant" to Silent Talker and that "the statistics used are not appropriate." Furthermore, Meijer points out that the algorithm will still be useless at border crossings or in job interviews unless it's been trained on a data set as diverse as the one it will be evaluating in real life. Research shows that facial recognition algorithms are worse at recognizing minorities when they have been trained on sets of predominantly white faces, something O'Shea himself admits. A Silent Talker spokesperson wrote in an email, "We conducted multiple experiments with smaller varying sample sizes. These add up to hundreds. Some of these are academic and have been publish [sic], some are commercial and are confidential." However, all the published research substantiating Silent Talker's accuracy comes from small and partial data sets: in the 2018 paper, for instance, a training population of 32 people contained twice as many men as women and only 10 participants of "Asian/Arabic" descent, with no black or Hispanic subjects. While the software presently has different "settings" for analyzing men and women, O'Shea said he wasn't certain whether it needed settings for ethnic background or age. After the pilot of iBorderCtrl was announced in 2018, activists and politicians decried the program as an unprecedented, Orwellian expansion of the surveillance state. Sophie in 't Veld, a Dutch member of the European Parliament and leader of the center-left Democrats 66 party, said in a letter to the European Commission that the Silent Talker system could violate "the fundamental rights of many border-crossing travelers" and that organizations like Privacy International condemned it as "part of a broader trend towards using opaque, and often deficient, automated systems to judge, assess, and classify people." The opposition seemed to catch the iBorderCtrl consortium by surprise: though initially the European Commission claimed that iBorderCtrl would "develop a system to speed up border crossings," a spokesperson now says the program was a purely theoretical "research project." Antoniades told a Dutch newspaper in late 2018 that the deception-detection system "may ultimately not make it into the design," but, as of this writing, Silent Talker was still touting its participation in iBorderCtrl on its website. Silent Talker is "a new version of the old fraud," opines Vera Wilde, an American academic and privacy activist who lives in Berlin, and who helped start a campaign against iBorderCtrl. "In some ways, it's the same fraud, but with worse science." In a polygraph test, an examiner looks for physiological events thought to be correlated with deception; in an AI system, examiners let the computer figure out the correlations for itself. "When O'Shea says he doesn't have a theory, he's wrong," she continues. "He does have a theory. It's just a bad theory." However often critics like Wilde debunk it, the dream of a perfect lie detector just won't die, especially when glossed over with the sheen of AI. 
After DHS spent millions of dollars funding deception research at universities in the 2000s, it tried to create its own version of a behavior-analysis technology. This system, called Future Attribute Screening Technology (FAST), aimed to use AI to look for criminal tendencies in a subject's eye and body movements. (An early version required interviewees to stand on a Wii Fit balance board to measure changes in posture.) Three researchers who spoke off the record to discuss classified projects said that the program never got off the ground—there was too much disagreement within the department over whether to use Ekman's micro-expressions as a guideline for behavior analysis. The department wound down the program in 2011.

Despite the failure of FAST, DHS still shows interest in lie-detection techniques. Last year, for instance, it awarded a $110,000 contract to a human resources company to train its officers in "detecting deception and eliciting response" through "behavioral analysis." Other parts of the government, meanwhile, are still throwing their weight behind AI solutions. The Army Research Laboratory (ARL) currently has a contract with Rutgers University to create an AI program for detecting lies in the parlor game Mafia, as part of a larger attempt to create "something like a Google Glass that warns us of a couple of pickpockets in the crowded bazaar," according to Purush Iyer, the ARL division chief in charge of the project. Nemesysco, an Israeli company that sells AI voice-analysis software, told me that its technology is used by police departments in New York and sheriffs in the Midwest to interview suspects, as well as by debt-collection call centers to measure the emotions of debtors on phone calls.

The immediate and potentially dangerous future of AI lie detection is not with governments but in the private market. Politicians who support initiatives like iBorderCtrl ultimately have to answer to voters, and most AI lie detectors could be barred from court under the same legal precedent that governs the polygraph. Private corporations, however, face fewer constraints in using such technology to screen job applicants and potential clients. Silent Talker is one of several companies that claim to offer a more objective way to detect anomalous or deceptive behavior, giving clients a "risk analysis" method that goes beyond credit scores and social-media profiles.

A Montana-based company called Neuro-ID conducts AI analysis of mouse movements and keystrokes to help banks and insurance companies assess fraud risk, assigning loan applicants a "confidence score" of 1 to 100. In a video the company showed me, when a customer making an online loan application takes extra time to fill out the field for household income, moving the mouse around while doing so, the system factors that into its credibility score. It's based on research by the company's founding scientists that claims to show a correlation between mouse movements and emotional arousal: one paper asserts that "being deceptive may increase the normalized distance of movement, decrease the speed of movement, increase the response time, and result in more left clicks."
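The mouse-telemetry features named in that quoted paper (normalized distance of movement, speed, response time, left-click count) are straightforward to compute from an event log, as in the toy sketch below. The event format, the field values, and the linear combination standing in for a learned "confidence score" are all hypothetical; Neuro-ID has not published its actual features or model.

```python
# Toy sketch of mouse-telemetry features of the kind the cited paper
# describes. Event format and the "risk"/"confidence" formula are invented.
import math

# Each event: (timestamp in seconds, x, y, clicked_left)
events = [
    (0.00, 100, 200, False),
    (0.40, 180, 240, False),
    (1.10, 260, 300, True),
    (2.90, 250, 310, False),   # hypothetical hesitation on the income field
    (4.75, 400, 500, True),
]

def path_length(evts):
    """Total distance the cursor travelled across consecutive events."""
    return sum(
        math.dist(evts[i][1:3], evts[i + 1][1:3])
        for i in range(len(evts) - 1)
    )

straight_line = math.dist(events[0][1:3], events[-1][1:3])
normalized_distance = path_length(events) / straight_line   # >1 means detours
response_time = events[-1][0] - events[0][0]                # seconds on the field
mean_speed = path_length(events) / response_time            # pixels per second
left_clicks = sum(1 for e in events if e[3])

# A made-up linear combination standing in for a learned "confidence score".
risk = 20 * (normalized_distance - 1) + 2 * response_time + left_clicks
confidence_score = max(1, min(100, round(100 - 5 * risk)))
print(normalized_distance, response_time, mean_speed, left_clicks, confidence_score)
```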
The company's own tests, though, reveal that the software generates a high number of false positives: in one case study where Neuro-ID processed 20,000 applications for an e-commerce website, fewer than half the applicants who got the lowest scores (5 to 10) turned out to be fraudulent, and only 10% of those who received scores from 20 to 30 represented a fraud risk. By the company's own admission, the software flags applicants who may turn out to be innocent and lets the company use that information to follow up how it pleases. "There's no such thing as behavior-based analysis that's 100% accurate," a spokesperson told me. "What we recommend is that you use this in combination with other information about applicants to make better decisions and catch [fraudulent clients] more efficiently."
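Those precision figures are easier to read as counts. The bucket sizes in the toy calculation below are invented (the article gives only the 20,000-application total and the rough precision of each score band), but they show how many legitimate applicants end up flagged for follow-up.

```python
# Hypothetical bucket sizes consistent with the precision figures quoted above
# (under 50% fraud in the 5-10 score band, about 10% in 20-30). The counts
# themselves are invented for illustration.
buckets = {
    "score 5-10": {"flagged": 400, "actually_fraud": 180},    # 45% precision
    "score 20-30": {"flagged": 1000, "actually_fraud": 100},  # 10% precision
}

for name, b in buckets.items():
    false_positives = b["flagged"] - b["actually_fraud"]
    precision = b["actually_fraud"] / b["flagged"]
    print(f"{name}: precision {precision:.0%}, "
          f"{false_positives} legitimate applicants flagged")
```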