In the wake of the Brandon Mayfield case (2004), which raised serious questions about the accuracy of fingerprint identification by the FBI, the National Academy of Sciences was asked to perform a scientific assessment of the accuracy and reliability of latent fingerprint identification in criminal cases. Initial results were published in:
Proceedings of the National Academy of Sciences (PNAS), vol. 108, no. 19, pp. 7733–7738, doi: 10.1073/pnas.1018707108
Accuracy and reliability of forensic latent fingerprint decisions
Bradford T. Ulery (a), R. Austin Hicklin (a), JoAnn Buscaglia (b), and Maria Antonia Roberts (c)
Edited by Stephen E. Fienberg, Carnegie Mellon University, Pittsburgh, PA, and approved March 31, 2011 (received for review December 16, 2010)
ABSTRACT
The interpretation of forensic fingerprint evidence relies on the expertise of latent print examiners. The National Research Council of the National Academies and the legal and forensic sciences communities have called for research to measure the accuracy and reliability of latent print examiners’ decisions, a challenging and complex problem in need of systematic analysis. Our research is focused on the development of empirical approaches to studying this problem. Here, we report on the first large-scale study of the accuracy and reliability of latent print examiners’ decisions, in which 169 latent print examiners each compared approximately 100 pairs of latent and exemplar fingerprints from a pool of 744 pairs. The fingerprints were selected to include a range of attributes and quality encountered in forensic casework, and to be comparable to searches of an automated fingerprint identification system containing more than 58 million subjects. This study evaluated examiners on key decision points in the fingerprint examination process; procedures used operationally include additional safeguards designed to minimize errors. Five examiners made false positive errors for an overall false positive rate of 0.1%. Eighty-five percent of examiners made at least one false negative error for an overall false negative rate of 7.5%. Independent examination of the same comparisons by different participants (analogous to blind verification) was found to detect all false positive errors and the majority of false negative errors in this study. Examiners frequently differed on whether fingerprints were suitable for reaching a conclusion.
http://www.pnas.org/content/108/19/7733.full
Authors
Bradford T. Ulery (a) Noblis, 3150 Fairview Park Drive, Falls Church, VA 22042;
R. Austin Hicklin (a) Noblis, 3150 Fairview Park Drive, Falls Church, VA 22042;
JoAnn Buscaglia (b) Counterterrorism and Forensic Science Research Unit, Federal Bureau of Investigation Laboratory Division, 2501 Investigation Parkway, Quantico, VA 22135; and
Maria Antonia Roberts (c) Latent Print Support Unit, Federal Bureau of Investigation Laboratory Division, 2501 Investigation Parkway, Quantico, VA 22135
Whether a 0.1 percent false positive rate is “small” is a subjective value judgment. Would you drive across a bridge that had a 1 in 1,000 (0.1 percent) chance of collapsing and killing you as you drove across it? Probably not.
In addition, the 0.1 percent false positive rate is based on a small sample of fewer than 1,000 test cases: 744 pairs of latent and exemplar fingerprints. Federal fingerprint databases such as the ones used in the Brandon Mayfield case contain millions of people and may eventually include all US citizens (over 300 million people). How does this “small” rate extrapolate when a fingerprint is compared against every fingerprint in the US or the world?
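A rough back-of-the-envelope sketch shows why this question matters. Assuming, purely for illustration, that each comparison is independent and carries the same 0.1 percent false positive chance (the study measured examiner decisions on selected pairs, not automated database searches, so this is a thought experiment rather than the study's own claim), the chance of at least one false positive grows quickly with the number of comparisons:

```python
# Naive extrapolation of a per-comparison false positive rate to many
# comparisons, assuming each comparison is independent. Illustrative only:
# the Ulery et al. study measured examiner decisions on curated pairs,
# not the behavior of large automated database searches.

def p_at_least_one_false_positive(rate, n_comparisons):
    """Probability of at least one false positive in n independent comparisons."""
    return 1.0 - (1.0 - rate) ** n_comparisons

rate = 0.001  # the study's overall 0.1 percent false positive rate
for n in (1, 100, 1_000, 10_000):
    p = p_at_least_one_false_positive(rate, n)
    print(f"{n:>6} comparisons -> P(>=1 false positive) = {p:.4f}")
```

Under these (oversimplified) assumptions, by 1,000 comparisons the chance of at least one false positive already exceeds 60 percent. Real database searches return candidate lists that human examiners then review, so the numbers above are not error rates for actual casework, but they illustrate why a per-comparison rate cannot simply be read as the error rate of a search against millions of records.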
One might wonder why such an assessment was not done a long time ago.
This is a report on the Brandon Mayfield case:
https://oig.justice.gov/special/s0601/exec.pdf
The National Research Council also published a detailed report Strengthening Forensic Science in the United States: A Path Forward in 2009 addressing the scientific issues raised by the Mayfield case and other questions about the scientific validity of forensic science methods.
Fingerprint identification: advances since the 2009 National Research Council report by Christophe Champod (Philos Trans R Soc Lond B Biol Sci. 2015 Aug 5; 370(1674): 20140259, doi: 10.1098/rstb.2014.0259) has a summary of work on the issue since the 2009 National Research Council report.
The bottom line is that fingerprint identification is much more accurate than random chance but hardly infallible, as was once widely believed.
(C) 2017 John F. McGowan, Ph.D.
Credits
The fingerprint image is from the United States National Institute of Standards and Technology (NIST) by way of Wikimedia Commons and is in the public domain.
About the Author
John F. McGowan, Ph.D. solves problems using mathematics and mathematical software, including developing gesture recognition for touch devices, video compression and speech recognition technologies. He has extensive experience developing software in C, C++, MATLAB, Python, Visual Basic and many other programming languages. He has been a Visiting Scholar at HP Labs developing computer vision algorithms and software for mobile devices. He has worked as a contractor at NASA Ames Research Center involved in the research and development of image and video processing algorithms and technology. He has published articles on the origin and evolution of life, the exploration of Mars (anticipating the discovery of methane on Mars), and cheap access to space. He has a Ph.D. in physics from the University of Illinois at Urbana-Champaign and a B.S. in physics from the California Institute of Technology (Caltech).