"What we're looking for is changes in human physiology," said Doug Derrick, a member of the University of Arizona team developing the technology, reported CNN. "We've had great success in reliably detecting these anomalies - things that people can't really detect."
The AVATAR kiosk is being tested at the Dennis DeConcini Port of Entry in Nogales, Ariz., on low-risk travelers who have been preapproved by human screeners as part of CBP's voluntary Trusted Traveler program. Approved travelers must pass a five-minute interview with the kiosk, which displays an animated face that asks yes-or-no questions in English or Spanish and uses sensors to detect whether a person is lying. A microphone monitors vocal quality, pitch and frequency. An infrared camera monitors eye direction and pupil dilation. And a high-definition camera monitors facial expressions. The kiosk's software uses this input to identify cues that indicate lying.
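AVATAR's actual software is not public, but the multi-sensor approach described above can be sketched in outline. The Python fragment below is a hypothetical illustration only, not the kiosk's real code: it assumes each sensor stream has already been reduced to a normalized anomaly score per question, and all of the field names, weights and thresholds are invented for the example.

# Hypothetical sketch of multi-sensor cue fusion, loosely modeled on the
# sensors described for the AVATAR kiosk. Field names, weights, and the
# threshold are illustrative assumptions, not the system's parameters.

from dataclasses import dataclass

@dataclass
class SensorReadings:
    """Per-question anomaly scores, each normalized to the range 0..1."""
    vocal_pitch_shift: float   # microphone: deviation in pitch/frequency
    pupil_dilation: float      # infrared camera: pupil response
    gaze_aversion: float       # infrared camera: eye-direction changes
    facial_tension: float      # high-definition camera: expression cues

# Assumed relative weights for each cue (illustrative only).
WEIGHTS = {
    "vocal_pitch_shift": 0.35,
    "pupil_dilation": 0.30,
    "gaze_aversion": 0.15,
    "facial_tension": 0.20,
}

FLAG_THRESHOLD = 0.6  # above this, refer the traveler to a human screener

def deception_score(r: SensorReadings) -> float:
    """Combine the normalized sensor anomalies into one weighted score."""
    return (WEIGHTS["vocal_pitch_shift"] * r.vocal_pitch_shift
            + WEIGHTS["pupil_dilation"] * r.pupil_dilation
            + WEIGHTS["gaze_aversion"] * r.gaze_aversion
            + WEIGHTS["facial_tension"] * r.facial_tension)

def needs_followup(r: SensorReadings) -> bool:
    """True if the kiosk should hand the traveler off for a human interview."""
    return deception_score(r) >= FLAG_THRESHOLD

In any real system the fusion step would be far more sophisticated, but the basic flow matches what the article describes: sensors produce signals, software scores them, and a flagged score routes the traveler to a human.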
"People have a hard time detecting small changes in the frequency of the human voice, that a computer is much better at. People are accurate about 54 percent of the time at detecting deception. ... We have got our machine as high as 90 percent in the lab," Derrick said.
The AVATAR kiosk's results are returned to a human screener, and if the machine detects a lie, a more thorough interview follows. The kiosk is in the early stages of testing, CBP spokesman Bill Brooks said, and if it proves successful, AVATAR kiosks could spread to other ports of entry.
Originally, the kiosk didn't have a screen displaying an animated face, but port officials discovered that travelers were not responding to questions in a natural way, which interfered with the lie detection. So the kiosk was given a face, and port officials gave it a name. "We call him Elvis, or Pat," Derrick said.
The AVATAR kiosk is among many new applications being explored in security technology.
The controversial TrapWire surveillance system, recently brought to media attention by the watchdog organization WikiLeaks, uses artificial intelligence to identify potential terrorist threats, according to the TrapWire website.
Following the financial bailouts and the Enron and WorldCom scandals, systems based on artificial intelligence are being developed to identify financial crimes such as money laundering and insider trading, New Scientist reported in July.
And increasingly, police departments around the country are using crime-predicting software that analyzes data to identify where they should concentrate their limited resources.