Video Transcript: How Reliable Are Latent Fingerprint Examiners?

Research Conducted by the Miami-Dade Police Department.

Speaking in this video: Brian Cerchiai, CLPE, Latent Fingerprint Examiner, Miami-Dade Police Department

The goal of the research was to determine whether latent fingerprint examiners can correctly make identifications and exclusions using prints not visible to the naked eye. In this case, we had 13 volunteers leave over 2,000 prints on different objects that were round, flat, or smooth, and we developed the prints with black powder and tape lifts.

We used the ACE process, which stands for Analysis, Comparison, and Evaluation. We gave latent examiners - 109 latent examiners - unknown fingerprints or palm prints (latents) to look at and compare to three known sources. So essentially: compare this latent to one of these 30 fingers or one of these six palms.

[Slide text] 109 examiners compared the unknown latent prints to known sources. Can they match the prints correctly?

So as participants looked at the latent prints and compared them to the subjects, we asked whether they could identify any of those three subjects as the source of a latent print. If so, they would call that an identification. When we asked them to exclude, we were asking them to tell us that none of those three standards was the source of that latent print.

That is how the ACE plus verification (ACE-V) process works: a second examiner looks at that comparison, performs their own analysis, comparison, and evaluation, and gives their own decision.

We found that under normal conditions, where one examiner made an identification and a second examiner verified it, no erroneous identification got past that second latent examiner. So the process had a false positive rate of zero.

[Slide text] With verification, 0% false positive.

When we looked at ACE comparisons alone, where a single latent examiner analyzed, compared, and evaluated a print and came to a decision without verification, there was a false positive rate - that is, erroneous identifications, where the examiner identified the wrong source.

[Slide text] Without verification, 3% false positive.

Without verification, there was a three percent error rate for that type of identification. We also tracked a false negative rate: cases where, given those three standards, examiners erroneously excluded the true source. You're told to check the latent against those three people, and you conclude that the latent print did not come from any of them, even though it did. That would be a false negative, and that false negative rate was 7.5 percent.

[Slide text] Without verification, 7.5% false negative.

During the third phase of this test, we were testing for repeatability and reproducibility. After six months, we sent participants back their own answers, and we also gave them answers from other participants. All of those answers were presented as if the participants were verifying somebody else's work.

[Slide text] To test the error rate further, an independent examiner verified comparisons conducted by other examiners.

Under normal conditions, we'd give them the source and the latent number and ask them to agree, disagree, or mark the comparison inconclusive. Under biased conditions, we'd give them an identification someone had made along with the answer of a verifier - so the comparison had supposedly already been verified, and we were asking for a second verification. When we sent those erroneous identifications out to other examiners under the regular verification process, not one latent examiner repeated the identification; they caught all of those errors. That brought the reported error rate down to zero.

[Slide text] The independent examiner caught all of the errors, dropping false positive error rate to 0%.

We maintained our regular caseload; this study was done in the gaps in between, after hours. The hardest part was not being dedicated researchers, which is why it took us quite a long time to get this done. Now that it's finally out and we are doing things like this - giving presentations this year - we really hope to expand on this research. The results of this study are fairly consistent with those of other studies.

[Slide text] This research project was funded by the National Institute of Justice, award no. 2010-DN-BX-K268. The goal of this research was to evaluate the reliability of the Analysis, Comparison, and Evaluation (ACE) and Analysis, Comparison, Evaluation, and Verification (ACE-V) methodologies in latent fingerprint examinations.

Produced by the Office of Justice Programs, Office of Communications. For more information, contact the Office of Public Affairs at 202-307-0703.

Photography provided by Matthew Douglass and Janice Gaitan, Miami-Dade Police Department.

Date Created: September 30, 2015