Starting November 1, the iBorderCtrl system will operate at four border crossing points in Hungary, Latvia and Greece on borders with countries outside the EU. It aims to speed up border crossings for travelers while weeding out potential criminals or illegal crossings.
Developed with €5 million in EU funding by partners across Europe, the pilot project will be operated by border agents in each of the trial countries and led by the Hungarian National Police.
Those using the system will first have to upload documents such as their passport, along with an online application form, before being assessed by the virtual, retina-scanning border agent.
The traveler will simply stare into a camera and answer the questions one would expect a diligent human border agent to ask, according to New Scientist:
“What’s in your suitcase?” and “If you open the suitcase and show me what is inside, will it confirm that your answers were true?”
But unlike a human border guard, the AI system analyzes minute micro-gestures in the traveler’s facial expressions, searching for any sign that they might be lying.
If the system is satisfied that the traveler is being truthful, iBorderCtrl will reward them with a QR code that allows them safe passage into the EU.
If it is not satisfied, however, travelers will have to go through additional biometric screening, such as having their fingerprints taken, facial matching, or palm-vein reading. A human agent then makes the final assessment.
Like all AI technologies in their infancy, the system is still highly experimental. With a current success rate of 76 percent, it won’t actually prevent anyone from crossing the border during its six-month trial. But the system’s developers are “quite confident” that accuracy can be boosted to 85 percent with fresh data.
Greater concern, however, comes from civil liberties groups, which have previously warned about the gross inaccuracies found in machine-learning-based systems, especially those that use facial recognition software.
In July, the head of London’s Metropolitan Police stood by trials of automated facial recognition (AFR) technology in parts of the city, despite reports that the AFR system had a 98 percent false positive rate, resulting in only two accurate matches.
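The arithmetic behind those AFR figures can be sketched in a few lines. The article reports only the 98 percent false positive rate and the two accurate matches; the total number of alerts used below (104) is an inferred assumption that makes those two reported numbers consistent, not a figure stated in the piece.

```python
# Hypothetical reconstruction of the reported Met Police AFR figures.
true_matches = 2          # accurate matches reported by the article
false_positives = 102     # assumed count, chosen so the rate comes to ~98%
total_alerts = true_matches + false_positives  # 104 alerts (inferred)

false_positive_rate = false_positives / total_alerts
print(f"{false_positive_rate:.0%}")  # → 98%
```

The point the civil liberties groups make is visible in the ratio: even a system that is rarely wrong per face scanned can produce alerts that are almost all wrong, because genuine matches are so rare in the crowd being scanned.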
The system had been labelled an “Orwellian surveillance tool” by the civil liberties group Big Brother Watch.
TSA AI: “Have you ever criticized or been critical of Israel?”
76% is attainable using astrology. 85% is attainable using a variety of training methods. So the AI reaching 85% is not impressive.
The 15% might contain 0.01% actual problems, which can be overcome with a one-week training regimen on facial movements. In other words, the AI’s progress is easily countered by those who seek to do so. The benefit would be removal of bias, not accuracy. So far, at least, but this stuff is going to advance in leaps and bounds in ways we can’t yet see: the elimination of passports and ID being one, and analysis of probable movement another.