Saturday, April 30, 2011

Part III of Fingerprinting Under the Microscope: Error Rates and Predictive Value

The Noblis-FBI experiment presented 169 relatively experienced and proficient latent fingerprint examiners (LPEs), who knew they were being tested, with unusually challenging pairs of latent and exemplar prints. The LPEs worked through a total of 17,121 presentations of 744 image pairs (roughly 100 pairs per examiner). How did they do? Table S5 of the study gives the answers, but it is a little hard to read. I have extracted parts of it to produce two simpler tables. Table 1 summarizes the results of the LPEs’ efforts for those pairs of prints that they initially deemed of value for individualization (VIn). Table 2 does the same for the pairs that they initially deemed of value for exclusion only (VExO). I ignore the cases in which an LPE judged the latent print unsuitable for comparison and terminated the process at that point.



Table 1. Outcomes for Pairs Judged To Be of Value for Individualization (VIn)
                 Nonmate     Mate      All
Exclusion           3622      450     4072
Identification         6     3663     3669
Inconclusive         455     1856     2311
All                 4083     5969    10052

Table 2. Outcomes for Pairs Judged To Be of Value for Exclusion Only (VExO)
                 Nonmate     Mate      All
Exclusion            325      161      486
Identification         0       40       40
Inconclusive         577     2019     2596
All                  902     2220     3122


Table 3 adds the numbers in Tables 1 and 2 to describe the outcomes for all pairs judged to be of any value for comparisons.





Table 3. Outcomes for Pairs Judged To Be of Value (VIn or VExO)
                 Nonmate     Mate      All
Exclusion           3947      611     4558
Identification         6     3703     3709
Inconclusive        1032     3875     4907
All                 4985     8189    13174


Many books describe the interpretation of simpler two-by-two tables of binary decisions (like exclusion and identification) for two states of nature (such as nonmates and mates). For discussion in a legal context, see David H. Kaye et al., The New Wigmore on Evidence: Expert Evidence (2d ed. 2011). The row for inconclusives complicates the analysis slightly, as indicated below.

1. False Positives and Sensitivity

A false positive is an opinion that the pair of prints originated from the same finger of the same individual (an inclusion or identification) when, in fact, the exemplar and the latent came from different sources (nonmated pairs).

Only 10,052 (59%) of the presentations were deemed of value for individualization (VIn). Of these, 4,083 were nonmates, and 5,969 were mates. Five examiners (5/169 = 3%) made false identifications. Their answers to a questionnaire did not indicate anything unusual in their backgrounds. Three of them said they were certified (one did not respond to the background survey).

One of the five examiners made two false identifications, bringing the total to six. The false positive rate for pairs of prints deemed VIn was therefore

FPRVIn = P(identification | nonmate & VIn) = 6/4083 = 0.1%.

In clinical medicine, “sensitivity” denotes the probability that a diagnostic or screening test (such as a blood test for a disease) will give a positive result when the disease is present. If the test includes a quantitation that gives an “inconclusive” reading when the blood sample is too small, this would reflect an inherent limitation of the test rather than a lack of sensitivity to the disease when applied to an adequate sample. Analogously, the sensitivity of the examiners is the proportion of mated VIn pairs (among those for which the LPEs reached a conclusion) that were judged to be identifications. By this reasoning,

Sensitivity = P(identification | mate & VIn & conclusion) = 3663/4113 = 89.1%.

If the examiners’ inability to reach a conclusion after declaring a pair of prints as of value for individualization were treated as detracting from sensitivity, then their sensitivity in this experiment was only

Sensitivity = P(identification | mate & VIn) = 3663/5969 = 61.4%.
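These rates can be recomputed directly from the Table 1 counts. The following is a minimal Python sketch using only the numbers already given above; the variable names are mine, chosen for readability.

```python
# Recompute the VIn rates from the Table 1 counts; columns are (nonmate, mate).
exclusion      = (3622, 450)
identification = (6, 3663)
inconclusive   = (455, 1856)

nonmates = exclusion[0] + identification[0] + inconclusive[0]  # 4083
mates    = exclusion[1] + identification[1] + inconclusive[1]  # 5969

# False positive rate among nonmated VIn pairs: 6/4083, about 0.1%
fpr = identification[0] / nonmates

# Sensitivity counting only pairs that reached a definite conclusion: 3663/4113
sens_conclusive = identification[1] / (identification[1] + exclusion[1])

# Sensitivity counting inconclusives against the examiners: 3663/5969
sens_all = identification[1] / mates

print(f"FPR = {fpr:.2%}; sensitivity = {sens_conclusive:.1%} or {sens_all:.1%}")
```

Running the sketch reproduces the three figures in the text: a false positive rate of roughly 0.1% and sensitivities of 89.1% or 61.4%, depending on how inconclusives are treated.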

Most examiners did not indicate that the 6 pairs that produced false positive errors were difficult comparisons, and for only 2 of the 6 false positives did the LPE making the error describe the comparison as difficult.

In no case did two examiners make the same false positive error. Most of the errors occurred on image pairs for which a large majority of examiners made correct exclusions; one occurred on a pair for which the majority of examiners judged the comparison inconclusive. Thus, the six erroneous identifications probably would have been detected if blind verification were performed as part of the operational examination process.

Two of the false positive errors involved a single latent print compared against exemplars from different subjects. Four of the five distinct latents on which false positives occurred were deposited on a galvanized metal substrate (a substrate accounting for only 18% of the nonmated latents), which was processed with cyanoacrylate and light gray powder. These images were often partially or fully tonally reversed (light ridges instead of dark), on a complex background.

2. False Negatives and Specificity

Whereas the false positive rate was only FPRVIn = 0.1%, the false negative rate was much larger, whether computed for VIn pairs alone or for all pairs deemed of value:

FNRVIn = 450/5969 = 7.5%; FNRVIn+VExO = 611/8189 = 7.5%.

The specificity of a clinical test is the probability that it will report that the disease is absent when the disease actually is absent. Here, if the calculation is limited to pairs that produced definite conclusions,

Specificity = P[exclusion | nonmate & (VIn or VExO) & conclusion] = (3622 + 325) / (3622 + 6 + 325) = 99.8%.

If we regard inconclusives as a sign of the inability to exclude when an exclusion is warranted, however, we get a smaller value:

Specificity = P[exclusion | nonmate & (VIn or VExO)] = (3622 + 325) / (3622 + 6 + 455 + 325 + 0 + 577) = 3947/4985 = 79.2%.
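The same arithmetic works for the combined set of pairs. Here is a short Python sketch from the Table 3 counts (again, only the article's own numbers, with variable names of my choosing):

```python
# Specificity and false negative rate from the Table 3 counts (VIn or VExO).
# Columns are (nonmate, mate).
exclusion      = (3947, 611)
identification = (6, 3703)
inconclusive   = (1032, 3875)

nonmates = exclusion[0] + identification[0] + inconclusive[0]  # 4985
mates    = exclusion[1] + identification[1] + inconclusive[1]  # 8189

# Specificity among nonmated pairs that yielded a definite conclusion: 3947/3953
spec_conclusive = exclusion[0] / (exclusion[0] + identification[0])

# Specificity counting inconclusives as failures to exclude: 3947/4985
spec_all = exclusion[0] / nonmates

# False negative rate among mated pairs: 611/8189
fnr = exclusion[1] / mates

print(f"Specificity = {spec_conclusive:.1%} or {spec_all:.1%}; FNR = {fnr:.1%}")
```

The output matches the figures in the text: specificity of 99.8% or 79.2% depending on the treatment of inconclusives, and a false negative rate of 7.5%.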

Eighty-five percent of examiners made at least one false negative error, distributed across half of the image pairs that were compared. Awareness of previous errors was not correlated with the false negative errors; indeed, 65% of participants said that they were unaware of ever having made an erroneous exclusion after training.

Years of experience were at best weakly correlated with FNRVIn. The correlation coefficient was only 0.15 (p = 0.063). The correlation with certification was not even close to statistical significance (p = 0.871).

3. Posterior Probabilities

False negative and positive rates tell us how LPEs responded to mates and nonmates, but they are not direct measures of the probability that an identification or an exclusion is correct. This posterior probability also depends on the prior probability that a pair is from the same source. The formula that gives the posterior probabilities is Bayes’ rule. Using the proportion of mates in the paired prints deemed VIn (59%), the predictive values were

PPV = P(mate | identification & VIn) = 3663/3669 = 99.8%

and

NPV = P(nonmate | exclusion & VIn) = 3622/4072 = 88.9%.

Using the proportion for all pairs designated as of value for either individualization or exclusion gives essentially the same PPV and a slightly smaller NPV:

PPV = P[mate | identification & (VIn or VExO)] = 3703/3709 = 99.8%

and

NPV = P[nonmate | exclusion & (VIn or VExO)] = 3947/4558 = 86.6%.
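Bayes' rule makes the dependence of the predictive values on the prior explicit. The sketch below, in Python, first reproduces the combined-set PPV from the experiment's own mix of mates and nonmates, and then recomputes it at a prior of 10%; that 10% figure is purely illustrative and does not come from the study.

```python
def ppv(prior, p_id_given_mate, p_id_given_nonmate):
    """Positive predictive value by Bayes' rule:
    P(mate | identification) for a given prior P(mate)."""
    true_pos = prior * p_id_given_mate
    false_pos = (1 - prior) * p_id_given_nonmate
    return true_pos / (true_pos + false_pos)

# Conditional identification rates from Table 3 (VIn or VExO pairs)
p_id_mate    = 3703 / 8189   # identifications among mated pairs
p_id_nonmate = 6 / 4985      # identifications among nonmated pairs

# At the experiment's own proportion of mates (8189/13174, about 62%),
# Bayes' rule reproduces PPV = 3703/3709, about 99.8%.
print(ppv(8189 / 13174, p_id_mate, p_id_nonmate))

# At a hypothetical prior of 10% -- say, a setting in which few of the
# comparisons actually involve the true source -- the PPV is lower.
print(ppv(0.10, p_id_mate, p_id_nonmate))
```

The point of the second call is the one made in the text: the predictive values reported for the experiment are tied to its particular mix of mates and nonmates, and they change as the prior probability changes.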

In casework, the prevalence of mated pair comparisons varies substantially among organizations, by case type, and by how candidates are selected. Mated comparisons are far more prevalent in cases where the exemplars come from individuals suspected of leaving the latent print because of nonfingerprint evidence than when candidates come from an AFIS trawl. The predictive values given above therefore would not apply in most cases. The final installment will discuss a much better way to use the experiment to inform a judge or jury about the value of a fingerprint identification.

Wednesday, April 27, 2011

Part II of Fingerprinting under the Microscope: Examiners Studied and Methods

The Noblis-FBI study mentioned yesterday offers insights into the validity and reliability of the latent fingerprint examination process, but inferences about actual casework are necessarily limited by the design of the study. Before reaching conclusions about latent print examinations in general, one needs to consider the representativeness of (1) the sample of examiners studied, (2) the fingerprint pairs they examined, and (3) the conditions of the examinations.

The description that follows is modified and condensed from the more complete supplement to the PNAS study. My impression is that the examiners tested were better than average at their work and that they were motivated to do well in the exercise. On the other hand, they had unusually challenging pairs of prints to examine, and they had to conduct the examinations with a somewhat confining procedure.

1. Examiners Tested

By soliciting the examiners at the 2009 International Association for Identification (IAI) International Educational Conference, at SWGFAST, and by direct contact with various forensic organizations, the researchers obtained 169 volunteers. (Three sent in partial results that were excluded from the analyses. Employers encouraged or required some "volunteers" to participate.) Because the subjects were not randomly selected, generalizing the results to all latent print examiners (LPEs) would be dangerous. The "healthy volunteer" bias is well known in epidemiology, and the volunteers here are likely to be an above-average group. Based on the reports provided by 159 of the 169 participating LPEs, the group had the following characteristics:

Age                  Median 39; mean 42
Education            College degree 50%; graduate or professional degree 25%
Employer             Federal gov't 48%; state or local gov't 44%; accredited lab 83%
Experience as LPE    Less than 5 yrs 21%; less than 10 yrs 49%; trainees 4%
Certification        Not certified 18%
Testified in court   Never 11%; within past year 60%

2. Fingerprint pairs

Fingerprint impressions for the study were collected at the FBI Laboratory and at Noblis (from employees?). Latents and mated exemplars were selected from the resulting set of images to include a range of characteristics and quality and to be broadly representative of prints encountered in casework. But the exemplar data included an abnormally large proportion of poor-quality exemplars, and the latents included many of relatively low quality, to evaluate the consensus among examiners in making value decisions.

The fingerprint data included 356 latents, from 165 distinct fingers of 21 distinct subjects; and 484 exemplars. These were combined to form 744 distinct image pairs, with each pair including one latent and one exemplar. There were 520 mated pairs and 224 non-mated pairs. The large proportion of mated pairs was intended to compensate for the higher proportion of poor quality latents among the mated pairs.

Prints to be compared were selected for difficulty. The mated pairs were a subset, disproportionately of poor quality, of the collected latents and exemplars. The non-mated pairs were designed to yield difficult comparisons: the unusually similar exemplars in the nonmates came from searches of the FBI’s Integrated Automated Fingerprint Identification System (IAFIS), which included exemplars from over 58 million persons with criminal records.

3. Conditions of the Examinations

Noblis developed software that presented latent and exemplar fingerprint images, provided limited image-processing capabilities, and recorded test responses. The tests were distributed on DVDs to examiners, who were given several weeks to complete the test. Each of the 169 examiners was initially assigned 100 image pairs.

They spent a median of 8 total hours on the test, over multiple sittings. Participants were not told what proportion of the image pairs were mated. Participants were instructed that it was imperative that they conduct their analyses and comparisons in this study with the same diligence that they would use in performing casework.

For each image pair, the latent was presented for analysis, and the examiner was asked if the image was of value for individualization (VIn); if the image was not VIn, the examiner was asked if the image was of value for exclusion only (VExO). If the image of the latent print was neither VIn nor VExO, the exemplar was not presented for comparison; otherwise, the exemplar was presented and the examiner was required to make a decision of individualization, exclusion, or inconclusive. Examiners were able to review and correct their responses before proceeding to the next comparison, but could not revisit previous comparisons, or skip comparisons and return to them later.

Stay tuned for the results.

Tuesday, April 26, 2011

Fingerprinting under the Microscope: A Controlled Experiment on the Accuracy and Reliability of Latent Print Examinations (Part I)

Fingerprint analysis has been a definitive method of personal identification in criminal investigations for more than 100 years. The examination of fingerprints (as well as palm and sole prints) is known as “friction ridge skin analysis,” or latent print analysis. It consists of a series of steps involving experience-based comparisons of the impressions left by the ridges of foot or hand surfaces (the latent print) against a known, or exemplar, print. The courts have accepted fingerprint evidence without challenge for most of the past century. However, several high profile cases in the United States and abroad have highlighted the fact that human errors can occur, and litigation over the evidentiary reliability of latent print examinations has increased dramatically in the last decade or so.1



David Stout, Report Faults F.B.I.'s Fingerprint Scrutiny in Arrest of Lawyer, New York Times, Nov. 17, 2004

The Federal Bureau of Investigation wrongly implicated an Oregon lawyer in a deadly train bombing in Madrid because the F.B.I. culture discouraged fingerprint examiners from disagreeing with their superiors, a panel of forensic experts has concluded. ...


Charlene Sweeney, Lord Advocate to Appear Before Shirley McKie Fingerprint Inquiry, The Times (London), Oct. 21, 2008

Ms McKie, a former policewoman ... denied leaving a print at the scene [of a murder], even though fingerprint experts working for the Scottish Criminal Record Office maintained that it was hers. ... Ms McKie was charged with perjury. ... In February 2006, following a long battle to clear her name, Ms McKie was awarded an out-of-court settlement of £750,000 ... . ... [T]wo US fingerprint experts ... helped to clear Ms McKie of perjury by proving in court that the mark — known as “Y7” — did not belong to her.

By and large, the courtroom challenges have failed. Except for one unreported decision of a state trial judge in State v. Rose, No. K06-0545 (Md. Cir. Ct. Oct. 19, 2007), one withdrawn opinion of one federal district court judge in United States v. Llera Plaza, Cr. No. 98-362-10, 11, 12, 2002 U.S. Dist. LEXIS 344 (E.D. Pa. Jan. 7, 2002), vacated, 188 F.Supp.2d 549 (E.D. Pa. 2002), and one order of another federal district judge in United States v. Zajac, No. 2:06-cr-00811 CW (D. Utah Sept. 16, 2010), fingerprint examiners have been allowed to testify to unique matches and absolute exclusions obtained under a series of experience- and judgment-based steps known in the trade as ACE-V (for Analysis, Comparison, Evaluation, and Verification).1,2 In 2009, however, a committee of the National Academy of Sciences characterized ACE-V as unvalidated and of doubtful reliability.3 The committee endorsed the view that “fingerprint experts should exhibit a greater degree of epistemological humility. Claims of ‘absolute’ and ‘positive’ identification should be replaced by more modest claims about the meaning and significance of a ‘match.’”3 (quoting 4)

“More modest claims” would be informed by data on “the known or potential rate of error.”5 Such data could come from various experiments. For example, an expert panel of latent fingerprint examiners could conduct its own evaluations of a large sample of previous casework (blinded to the earlier outcomes). The results would test the conclusions reached in actual casework. However, the true source of the latent prints in these cases would not be known with certainty. The panel’s conclusions would have to be accepted as correct if they are to serve as the measure of the accuracy in the casework. Without proof of the panel's accuracy, the experiment would be subject to the criticism that it seeks to prove one unknown by means of another. One cannot validate astrology by finding that all astrologers agree on their forecasts. Despite this limitation, the expert panel experiment could be revealing. Studies of the predictive power of screening tests in medicine rely on this experimental design when they use a more precise (but still imperfect) test to measure the accuracy of the first test.

Another design overcomes (at a cost) the lack of perfect knowledge of what is called “ground truth” in the validation of biometric systems for identification. Sacrificing the realism of casework for a perfect measure of accuracy, experimenters can present the analysts with pairs of prints—one latent print and one exemplar (the exemplar coming from known sources). Some of the pairs are “mates”—they come from the same finger. The rest are “nonmates”—they come from fingers of different individuals. After a double-blind presentation of the mated and nonmated pairs to the latent print examiners, the experimenters can measure the following rates of accuracy and error: positive associations for the mates (sensitivity) and nonmates (false positives), and exclusions for the nonmates (specificity) and mates (false negatives). And, for the particular mix of mates and nonmates in the test set, they also can determine the rate of correct judgments among the identifications (positive predictive value) and exclusions (negative predictive value).1

In an April 25th online release of the Proceedings of the National Academy of Sciences, researchers from the Noblis corporation and the FBI reported the results of a large-scale controlled experiment on the accuracy and reliability of latent fingerprint analysis—the first in the long history of criminal fingerprint identification.6 Noblis, a nonprofit organization, is an offshoot of MIT’s Lincoln Laboratory, which was established in 1951 to build the nation's first air defense system. The results of this research are tantalizing. Its objectives were "to determine the frequency of false positive and false negative errors, the extent of consensus among examiners, and factors contributing to variability in results." I shall summarize and comment on the major findings in later postings.

References

1. David H. Kaye, David E. Bernstein & Jennifer L. Mnookin, The New Wigmore, A Treatise on Evidence:  Expert Evidence (2011).
2. United States v. Mitchell, 365 F.3d 215 (3d Cir. 2004).
3. National Research Council Committee on Identifying the Needs of the Forensic Sciences Community, Strengthening Forensic Science in the United States: A Path Forward (2009).
4. Jennifer L. Mnookin, Fingerprint Evidence in an Age of DNA Profiling, 67 Brooklyn L. Rev. 13 (2001).
5. Daubert v. Merrell Dow Pharmaceuticals, 509 U.S. 579 (1993).
6. Bradford T. Ulery, R. Austin Hicklin, JoAnn Buscaglia & Maria Antonia Roberts, Accuracy and Reliability of Forensic Latent Fingerprint Decisions, Proc. Nat’l Acad. Sci. (2011), available at http://www.pnas.org/content/early/2011/04/18/1018707108.full.pdf+html.

Sunday, April 17, 2011

Canine DNA Databases to the Rescue

There have been proposals to slowly create a population-wide DNA identification database by having medical workers submit to a centralized law enforcement database an identifying DNA profile from the blood samples used to screen every newborn for genetic diseases.

According to the Daily Mail, the Italian island of Capri is implementing a similar plan -- for the island's population of 1,000 resident dogs. Some of these animals soil the picturesque, white-washed alleyways on the Mediterranean island. So the mayor is taking advantage of a law requiring dogs to have blood tests for canine leishmaniasis, a disease transmitted by sandflies, to build a database. Then, samples of any dog mess found on the pavements of the popular holiday island will be sent to a crime laboratory for testing to identify the offending dog and its owner.

Capri is not the first city to plan a local canine database. In 2008, the New York Times reported that the mayor of Petah Tikva, near Tel Aviv, "recruited a small army of 12-year-olds from a local grade school [who] went door to door, persuading dog owners to donate samples and explaining the drawbacks of poop (worms, bacteria, general grossness)." She also "began collecting samples as part of annual pet exams and organized a DNA-donating festival featuring music, performing dogs and a booth for saliva collection." About 90 percent of owners agreed to donate samples when asked.

The project was about more than waste elimination: “We can use this DNA database for important things like genetic research on dog diseases,” the mayor said. “We could also use DNA to identify strays and return them to their parents.”

References

Daily Mail Reporter, Capri to Set Up 'CSI-style' DNA Database to Catch Owners Who Refuse to Clean Up Dog's Mess, Daily Mail Online, 11th April 2011.

Rebecca Skloot, The Dog-Poop DNA Bank, N.Y. Times Mag., Dec. 12, 2008

(Cross-posted from the Double Helix Law blog)

Does ASCLD Recognize the Need for a Research Culture in the Forensic Sciences?

The Crime Lab Minute is a weekly newsletter from ASCLD, the American Society of Crime Laboratory Directors.1 The Minute includes synopses of “Journal articles/Law Reviews” that the group deems noteworthy, and this week’s Minute (April 15, 2011) includes the following entry:

Commentary on The Need for a Research Culture in the Forensic Sciences
P. Margot – UCLA L. Rev., 2011

  • Asked to comment on a collective discussion paper by Jennifer L. Mnookin et al., this Commentary identifies difficulties the authors encountered in defining or agreeing on the subject matter “forensic science” and its perceived deficiencies. They conclude that there is a need for a research culture, whereas this Commentary calls for the development of a forensic science culture through the development of forensic science education fed by research dedicated to forensic science issues. It is a call for a change of emphasis and, perhaps, of paradigm.

By ignoring two of the three parallel comments on the main article—and omitting a description of the article itself—the lab directors convey a distorted picture of the recent literature. As one author of the main article commented in an email to the other authors (including me), this oddly selective form of citation “may be in and of itself a symptom of a lack of research culture.” The consistent message of the series of essays from a diverse set of individuals concerned with improving forensic science is that, to quote Professor Margot, “forensic science needs a sound scientific structure.”2 Slanted newsletters are what one would expect from a “poor and immature profession.”3 The American Society of Crime Laboratory Directors can do better.

Excerpts from the abstracts of the main article and the commentary on it as well as most of the concluding paragraphs of Professor Margot’s more critical commentary follow:

Jennifer L. Mnookin et al., The Need for a Research Culture in the Forensic Sciences, 58 UCLA L. Rev. 725 (2011)

  • This Article reflects an effort made by a diverse group of participants in these debates [about forensic science evidence], including law professors, academics from several disciplines, and practicing forensic scientists, to find and explore common ground. . . . We all firmly agree that the traditional forensic sciences in general, and the pattern identification disciplines, such as fingerprint, firearm, toolmark, and handwriting identification evidence in particular, do not currently possess—and absolutely must develop—a well-established scientific foundation. This can only be accomplished through the development of a research culture that permeates the entire field of forensic science. . . . Sound research, rather than experience, training, and longstanding use, must become the central method by which assertions are justified. In this Article, we describe the underdeveloped research culture in the non-DNA forensic sciences, offer suggestions for how it might be improved, and explain why it matters.

Joseph P. Bono,4 Commentary on The Need for a Research Culture in the Forensic Sciences, 58 UCLA L. Rev. 781 (2011)

  • Finally, after hundreds of pages of “we know how to solve this problem” monologues, a learned treatise appears that goes beyond the NAS Report in addressing the need to strengthen forensic science. The Need for a Research Culture in the Forensic Sciences . . . is one of the first publications to minimize the blame game . . . . This article successfully provides a root cause assessment of the salient issues we face today and contains solutions that those who care about forensic science should consider.

Nancy Gertner,5 Commentary on The Need for a Research Culture in the Forensic Sciences, 58 UCLA L. Rev. 789 (2011)

  • The National Academy of Sciences’ call for change in forensic sciences will not be successful until lawyers fairly bring these standards to the attention of the courts, and the judges, both district and appellate, rigorously enforce them.

Pierre Margot,6 Commentary on The Need for a Research Culture in the Forensic Sciences, 58 UCLA L. Rev. 795, 801 (2011)
  • It must be obvious by now that I agree with the authors that research is needed. A poor and immature profession can be the object of study, as proposed by the authors. But what will they do? Study of forensic science can identify shortcomings, such as bias, but it may not identify solutions so rapidly. Research in forensic science is sorely needed, but it should address primarily forensic science questions—not questions relating to the application of chemistry, biology, statistics, or psychology. This is how a discipline is built and progresses, and this is where academics should focus their questions. Until then, forensic science will remain a second-rate scientific endeavor and will suffer from continued and justified attacks. It is time that forensic science grows as a fully recognized discipline in its own territory. It should exist on equal terms with other disciplines. It can then cross-fertilize and adopt technological developments in other scientific disciplines, which may allow it to respond to legal demands on much more solid ground.

Notes

1. ASCLD is “a nonprofit professional society of crime laboratory directors and forensic science managers dedicated to providing excellence in forensic science through leadership and innovation. The purpose of the organization is to foster professional interests, assist the development of laboratory management principles and techniques; acquire, preserve and disseminate forensic based information; maintain and improve communications among crime laboratory directors; and to promote, encourage and maintain the highest standards of practice in the field.” About ASCLD, http://www.ascld.org/content/about-ASCLD, last visited April 17, 2011.

2. Pierre Margot, Commentary on The Need for a Research Culture in the Forensic Sciences, 58 UCLA L. Rev. 795, 799 (2011)

3. Id. at 801.

4. President, American Academy of Forensic Sciences

5. Judge, U.S. District Court for the District of Massachusetts

6. Vice-Dean, Faculty of Law and Criminal Sciences; Director, School of Criminal Sciences; School of Forensic Science; University of Lausanne

Forceful DNA Collection from Recalcitrant Prisoners

Every state, the federal government, and the District of Columbia have laws mandating the collection of DNA from individuals convicted of specified crimes. But what should happen when a convicted offender refuses to sit quietly and open his mouth for a cheek swab? The Department of Corrections in Michigan acquiesces until it is time to release the prisoner. Some 6,000 Michigan inmates are believed to have refused to give DNA samples.

Delaying collection means that a prisoner who has committed other crimes for which DNA samples are in the unsolved-crimes part of a DNA database cannot be linked to those crimes, possibly for decades. Of course, the incarcerated prisoner is not in a position to reoffend until his release, but if his DNA profile had been in the offender database, a trawl might well have resolved other crimes. And that could have come as a relief to the victims or even cleared a falsely convicted individual.

What, then, should officials do with recalcitrant prisoners? The Muskegon county prosecutor had an idea. He obtained a search warrant from a magistrate-judge, and state police swooped down on 118 inmates at the Earnest C. Brooks and West Shoreline correctional facilities. The tactic is being touted as a model for the state.

This tactic raises two questions: Was it necessary to get the warrant, and should a warrant have been issued? A warrant is the natural tool for a prosecutor seeking a sample from an uncooperative suspect during an ongoing investigation. If the suspect resisted the order, the police could use reasonable force to execute it. This procedure makes little sense, however, when the convicted offender serving his sentence is not a suspect in any other crime. There is no significant fact-based question (such as probable cause) for the magistrate to decide. If the statute authorizes the use of force, no warrant is necessary. If it precludes the use of force, the warrant is inappropriate.

Yet, a corrections department spokesman said that the department believes a court order is needed to take DNA from reluctant inmates, except just before their release from prison. At the same time, some Michigan prosecutors and jailers seem to assume that the Michigan statute permits forcible collection at any time.
In reality, Michigan's statutes are far from clear. Section 28.176(4) of the Michigan Compiled Laws reads as follows:
A sample shall be collected by the county sheriff or the investigating law enforcement agency after conviction or a finding of responsibility but before sentencing or disposition as ordered by the court and promptly transmitted to the department of state police. This subsection does not preclude a law enforcement agency or state agency from obtaining a sample at or after sentencing or disposition.
The section is silent as to the use of force. This silence is significant because another section does discuss "refusal to provide samples, penalties; provision of additional samples" -- and this other section does not explicitly authorize force at any time. Instead, section 28.173a reads:
(1) An individual required by law to provide samples for DNA identification profiling who refuses to provide or resists providing those samples is guilty of a misdemeanor punishable by imprisonment for not more than 1 year or a fine of not more than $1,000.00, or both. The individual shall be advised that his or her resistance or refusal to provide samples described in this subsection is a misdemeanor.
This subsection requires prison officials to inform the unconsenting inmate that he will be charged with a misdemeanor if he persists. It does not authorize physical force.

Some support for the Department of Corrections' idea that it can use force if it waits until the time of release comes from another statute governing the Bureau of Pardons and Parole. Section 791.233d(1) provides that
A prisoner shall not be released on parole, placed in a community placement facility of any kind, including a community corrections center or a community residential home, or discharged upon completion of his or her maximum sentence until he or she has provided samples for chemical testing for DNA identification profiling . . . . However, if at the time the prisoner is to be released, placed, or discharged the department of state police already has a sample from the prisoner that meets the requirements of the DNA identification profiling system act, 1990 PA 250, MCL 28.171 to 28.176, the prisoner is not required to provide another sample . . . .
Subsection (3) arguably authorizes forcible collection. It states that "The department may collect a sample under this section regardless of whether the prisoner consents to the collection." It adds that "The department is not required to give the prisoner an opportunity for a hearing or obtain a court order before collecting the sample." However, these provisions do not apply to most prisoners. They pertain only to offenders "serving sentences for certain criminal sexual conduct offenses."

Thus, the Department's use of force to extract DNA samples, even if it waits until the time of release, seems illegal except for sexual offenders. Even reading the statutes generously for the police and prosecutors, it seems that they do not authorize force except when (1) the inmate is a sexual offender who (2) "is to be released."

This contradicts not only the Department's broader policy, but also the dragnet search warrant in Muskegon. On what basis can a court issue a search warrant as the magistrate in Muskegon did? There is no individualized suspicion that any of these inmates committed the thousands of unsolved crimes from which DNA samples were obtained and processed for inclusion in the database. The Fourth Amendment demands that "no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized." The fact that people have been convicted of one crime is not probable cause to believe that they are guilty of all other, unrelated, unsolved crimes.

The solution to Michigan's problems in collecting DNA samples from prisoners is not dragnet search warrants that lack probable cause or even reasonable suspicion. It is an amendment to section 28.173a of the DNA database statute. Authorizing reasonable force to obtain a DNA sample by scraping cells from the surface of the skin or from the cheek epithelium of a person convicted of a serious criminal offense is within the state's police power. But that, it seems, is not what the state chose to do. Until the legislature endorses the widespread and early use of force to secure DNA samples for inclusion in the state database, Michigan's courts, prosecutors, and police should not force-feed the state database.

Reference

John S. Hausman & Heather Lynn Peters, Muskegon's DNA Collection in State Prisons May Start a Statewide Trend, Prosecutors Say, Muskegon Chronicle, April 2, 2011

United States v. Terry, 702 F.2d 299, 324 (2d Cir. 1983) (force permissible to obtain an arrestee's fingerprint)

(Cross-posted from the Double Helix Law blog Apr. 17, 2011)