When Brian Coates meets with police officers to tell them about a new computerized system he is using to produce images of criminal suspects, he quickly makes the point that most of them can't accurately describe their own partners' distinctive features. The point, says the Pittsburgh Post-Gazette, is that we don't recognize faces, even those of the people we live with, by recalling their eyes, noses, ears or eyebrows. Instead, we see and know faces as wholes. That's the approach embodied in the computerized composite system Coates is field-testing on the east coast of England.
Known as EvoFIT, it was developed at the University of Central Lancashire in north-west England by a team led by psychology professor Charlie Frowd. EvoFIT tries to mimic the brain's holistic face recognition: it shows witnesses a series of whole faces morphed by the computer, asks them to select the ones that most closely match their recollection, and then generates a new set of faces from those choices, repeating the cycle until the likeness converges. That is a sharp departure from older computerized systems such as E-FIT, which asked witnesses to pick out individual noses, eyes, ears and other features and place them on a facial template.

In early lab tests, composites from both E-FIT and EvoFIT were correctly named about 20 percent of the time by people who knew the target, but those tests let the composite be built immediately after the witness had seen a face. When witnesses waited two days between seeing a photo and helping prepare a composite, E-FIT's naming rate dropped to 5 percent. In real-world field tests, EvoFIT composites are being named about 25 percent of the time, and lab results suggest the system may eventually reach rates of almost 60 percent.
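The select-and-breed cycle described above is, at heart, an evolutionary algorithm. The sketch below is purely illustrative and is not EvoFIT's implementation: the real system evolves faces in a statistical face-space built from photographs, whereas here a "face" is just six numbers, the `similarity` function stands in for a witness's judgment of a remembered face (`TARGET`), and all parameter values are invented for the demo.

```python
import random

random.seed(0)  # deterministic for illustration

FACE_DIM = 6    # hypothetical number of parameters describing a whole face
POP_SIZE = 8    # number of faces shown on each "screen"
TARGET = [0.2, 0.8, 0.5, 0.1, 0.9, 0.4]  # stand-in for the remembered face

def random_face():
    return [random.random() for _ in range(FACE_DIM)]

def similarity(face):
    """Stand-in for the witness's judgment: higher means closer to memory."""
    return -sum((a - b) ** 2 for a, b in zip(face, TARGET))

def breed(parent_a, parent_b, mutation=0.05):
    """Blend two chosen faces and add a little random variation."""
    return [(a + b) / 2 + random.uniform(-mutation, mutation)
            for a, b in zip(parent_a, parent_b)]

def evolve(generations=20):
    population = [random_face() for _ in range(POP_SIZE)]
    first_best = max(population, key=similarity)
    for _ in range(generations):
        # The "witness" picks the two faces that look most like the suspect...
        picked = sorted(population, key=similarity, reverse=True)[:2]
        # ...and the next screen of faces is bred from those choices.
        population = picked + [breed(*picked) for _ in range(POP_SIZE - 2)]
    return first_best, max(population, key=similarity)

if __name__ == "__main__":
    start, final = evolve()
    print("improved:", similarity(final) >= similarity(start))
```

Because the two chosen faces are always carried into the next generation, the best likeness can only hold steady or improve from screen to screen. That is the appeal of the whole-face approach: the witness converges on a resemblance through repeated comparisons without ever having to describe a single feature.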