Thursday, December 19th, 2024

NIST, Michigan State researchers develop algorithm to automate fingerprint identification process

Scientists at the National Institute of Standards and Technology (NIST) and Michigan State University (MSU) have developed an algorithm that automatically determines how much useful information a latent crime scene fingerprint contains.

When a latent print is recovered from a crime scene, the first step for forensic investigators is to determine how much useful information it contains.

“This first step is standard practice in the forensic community,” Anil Jain, co-author of the study and MSU computer scientist, said. “This is the step we automated.”

Should a print contain sufficient useful information, it is submitted to a database of known fingerprints called the automated fingerprint identification system (AFIS), which then returns a list of potential matches. That initial decision on fingerprint quality is therefore a critical step in building criminal evidence.

Prior to the algorithm’s development, fingerprint analysis was an inherently subjective process that occasionally led to identification errors for investigators. According to NIST, automating that step will allow fingerprint examiners to process evidence more reliably and efficiently, helping to reduce backlogs and solve crimes more quickly.

For their study, MSU and NIST researchers brought together 31 fingerprint experts to analyze 100 latent prints and grade them on a scale of one to five. Those scores were then used to train the algorithm to determine how much useful information a latent print contains.
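That training step is, in effect, a supervised regression problem: map features extracted from each latent print to the grades assigned by the examiners. The sketch below illustrates the general idea only; it is not the authors' implementation, and the feature vectors, score labels, and random-forest model are all placeholder assumptions.

```python
# Minimal sketch (not the study's actual code): train a regressor to
# predict expert quality grades (1-5) for latent prints.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Placeholder: 100 latent prints, each summarized by a feature vector
# (e.g., ridge clarity, minutiae count). Random values are used here
# purely so the sketch runs end to end.
features = rng.random((100, 8))

# Stand-in for the consensus of the 31 examiners' 1-5 grades per print.
expert_scores = rng.integers(1, 6, size=100).astype(float)

# Fit a model that maps print features to a predicted quality score.
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(features, expert_scores)

# Cross-validated error gives a rough sense of agreement with examiners.
mae = -cross_val_score(model, features, expert_scores,
                       scoring="neg_mean_absolute_error", cv=5).mean()
print(f"Mean absolute error vs. expert scores: {mae:.2f}")
```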

After the machine learning process was complete, the researchers tested the algorithm by having it score an entirely new set of prints. From there, the scored prints were fed into AFIS software connected to a database of more than 250,000 clear, rolled fingerprints.
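Conceptually, this test checks whether the prints the algorithm rates as usable are the ones an AFIS search can actually match against the gallery of rolled prints. The rough sketch below assumes a hypothetical afis_search function and an arbitrary quality threshold; neither comes from the study.

```python
# Minimal sketch (assumptions marked): check whether predicted quality
# scores track AFIS search success. `afis_search` is a hypothetical
# stand-in for a matcher queried against a gallery of rolled prints.
from typing import List

def afis_search(latent_id: str, gallery_size: int = 250_000) -> List[str]:
    """Hypothetical AFIS query: returns a ranked list of candidate IDs."""
    # The real workflow queries AFIS software over ~250,000 rolled prints;
    # here a deterministic fake ranking is generated for illustration.
    return [f"subject_{(hash(latent_id) + i) % gallery_size}" for i in range(20)]

def hit_at_rank_k(latent_id: str, true_subject: str, k: int = 20) -> bool:
    """True if the correct subject appears among the top-k candidates."""
    return true_subject in afis_search(latent_id)[:k]

# Toy test set: (latent print ID, ground-truth subject, predicted quality score)
test_prints = [
    ("latent_001", "subject_42", 4.6),
    ("latent_002", "subject_7",  1.8),
]

# Prints scored above a quality threshold are submitted to AFIS, mirroring
# the examiner's "sufficient information" decision; the 3.0 cutoff is an
# assumption for illustration only.
for latent_id, true_subject, score in test_prints:
    if score >= 3.0:
        print(latent_id, "submitted; hit:", hit_at_rank_k(latent_id, true_subject))
    else:
        print(latent_id, "withheld (insufficient predicted quality)")
```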

Based on the results from AFIS, the researchers found that the scoring algorithm performed slightly better than the human examiners used in the study.

“We’ve run our algorithm against a database of 250,000 prints, but we need to run it against millions,” Elham Tabassi, co-author of the study and NIST computer engineer, said. “An algorithm like this has to be extremely reliable, because lives and liberty are at stake.”