
EU Ready to Use AI Lie Detector to Check Travelers at Border Points

The European Union is set to try out new AI lie-detection technology at its border checkpoints any moment now. The program, called iBorderCtrl, will run for six months at four border crossing points that Hungary, Latvia, and Greece share with countries outside the EU.

iBorderCtrl is an EU-funded project that uses AI to facilitate faster border crossings for travelers. The system asks users to fill out an online application and upload documents, such as their passport, before they reach the checkpoint; a virtual border control agent then asks travelers questions. Questions include, “What’s in your suitcase?” and “If you open the suitcase and show me what is inside, will it confirm that your answers were true?” according to New Scientist. The system reportedly records travelers’ faces, using AI to analyze 38 micro-gestures and score each one. The virtual agent is reportedly customized according to the traveler’s gender, ethnicity, and language.
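As a rough illustration of how a per-gesture scoring scheme like this might aggregate into a pass/flag decision, here is a minimal sketch in Python. The gesture names, scores, and decision threshold below are invented for illustration; iBorderCtrl’s actual model and scoring rules are not public.

```python
# Hypothetical sketch of a micro-gesture risk scorer.
# Gesture names, per-gesture scores, and the threshold are assumptions
# made for illustration, not iBorderCtrl's real values.

from dataclasses import dataclass

@dataclass
class GestureScore:
    name: str     # e.g., "eye_blink_rate", one of the 38 analyzed gestures
    score: float  # per-gesture deception likelihood in [0, 1]

def assess_traveler(scores: list[GestureScore], threshold: float = 0.5) -> str:
    """Average the per-gesture scores; pass travelers below the threshold."""
    if not scores:
        raise ValueError("no gesture scores recorded")
    risk = sum(g.score for g in scores) / len(scores)
    if risk < threshold:
        return "PASS: issue QR code for border crossing"
    return "FLAG: refer traveler to a human border agent"

# Example usage with two hypothetically named gestures:
print(assess_traveler([GestureScore("eye_blink_rate", 0.2),
                       GestureScore("gaze_aversion", 0.4)]))
```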

Travelers who pass the test will receive a QR code that lets them through the border. If they don’t, the virtual agent will reportedly become more serious, and the traveler will be handed off to a human agent who will assess their report. But, according to New Scientist, this pilot program won’t, in its current state, prevent anyone from crossing the border, because the program is still very much in the experimental phase.

In fact, the automated lie-detection system was modeled after an earlier system created by members of iBorderCtrl’s team, and that system was only tested on 30 people. In the test, half of the participants told the truth while the other half lied to the virtual agent. It had about a 76 percent accuracy rate, a figure that doesn’t account for the difference between being told to lie and lying in earnest. “If you ask people to lie, they will do it differently and show very different behavioral cues than if they truly lie, knowing that they may go to jail or face serious consequences if caught,” Maja Pantic, a professor of affective and behavioral computing at Imperial College London, told New Scientist. “This is a known problem in psychology.”

Systems dependent on machine learning, especially those involving facial recognition technology, remain error-prone to date. That’s not entirely surprising: studies have shown that many facial recognition algorithms have significant error rates and bias. These systems have also raised flags with civil liberties groups like the ACLU’s Border Litigation Project, which worry that AI lie detection might lead to more widespread surveillance.
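To put that figure in perspective, here is a quick back-of-the-envelope calculation, assuming the reported 76 percent is simple classification accuracy over all 30 participants (the study’s exact metric and error breakdown are not given here):

```python
# Back-of-the-envelope check of the reported pilot accuracy.
# Assumes "76 percent accuracy" means correct classifications / participants.

participants = 30        # 15 told the truth, 15 were asked to lie
accuracy = 0.76

correct = round(participants * accuracy)  # ~23 correct classifications
errors = participants - correct           # ~7 misclassified travelers

print(f"{correct} of {participants} classified correctly, {errors} errors")
print(f"Improvement over a coin flip: {accuracy - 0.5:.0%}")
```

On a sample this small, roughly seven misclassifications separate the system from a coin flip, which is part of why the pilot is not being used to actually block crossings.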
