Biometrics uses cutting-edge technologies to identify terrorists and criminals. But the practice of distinguishing humans by intrinsic physical or behavioral traits goes back thousands of years.
 
There’s evidence that fingerprints were pressed into clay tablets during Babylonian business transactions around 500 BC. Fourteenth-century Chinese merchants used children’s palm prints and footprints to distinguish them. And in early Egyptian history, traders were differentiated by their physical characteristics.
 
By the mid-1800s, the industrial revolution had sparked rapid city growth, and a standard way of identifying the general public, and criminals in particular, became necessary. Some police forces adopted the Bertillon system (a.k.a. anthropometry), invented in France, which recorded arm length, height and other body measurements on index cards. With no standards in place, however, errors were frequent. A single measurement, the fingerprint, became the method of choice in the late 1800s, when Edward Henry, inspector general of police in Bengal, India, created the Henry System, a classification scheme that’s still used today.
 
With the widespread use of computers in the late 20th century, new possibilities for digital biometrics emerged. Although using the iris for identification was suggested as early as the 1930s, the first iris recognition algorithm wasn’t patented until 1994, and it became commercially available the following year.
 
At the 2001 Super Bowl in Tampa, Fla., security cameras captured images of the roughly 100,000 fans, and face recognition software checked each image electronically against mug shots from the Tampa police. Federal coordination began in 2003, when the National Science and Technology Council established an official subcommittee on biometrics; a year later, the Department of Defense implemented the Automated Biometric Identification System (ABIS) to help track and identify national security threats.
Lauren Katims Nadeau  |  Contributing Writer