Sunday, April 8, 2018

On the Difficulty of Latent Fingerprint Examinations

This morning the Office of Justice Programs of the Department of Justice circulated an email mentioning a “New Article: Defining the Difficulty of Fingerprint Comparisons.” The article, written a couple of weeks ago, is from the DOJ’s National Institute of Justice (NIJ). [1] It summarizes an NIJ-funded study that was completed two years ago. [2] The researchers published findings on their attempt to measure difficulty in the online science journal PLOS One four years ago. [3]

The “New Article” explains that
[T]he researchers asked how capable fingerprint examiners are at assessing the difficulty of the comparisons they make. Can they evaluate the difficulty of a comparison? A related question is whether latent print examiners can tell when a print pair is difficult in an objective sense; that is, whether it would broadly be considered more or less difficult by the community of examiners.
The first of these two questions asks whether examiners’ subjective assessments of difficulty are generally accurate. To answer this question, one needs an independent, objective criterion for difficulty. If the examiners’ subjective assessments of difficulty line up with the objective measure, then we can say that examiners are capable of assessing difficulty.

Notice that agreement among examiners on what is difficult and what is easy would not transform subjective assessments of difficulty into “objective” ones—any more than the fact that a particular supermodel would “broadly be considered” beautiful would make her beautiful “in an objective sense.” It would simply mean that there is inter-subjective agreement within a culture. One should not mistake inter-examiner reliability for objectivity.

In psychometrics, a simple measure of the difficulty of a question on a test is the proportion of test-takers who answer correctly. [4] Of course, “difficulty” could have other meanings. It might be that test-takers would think that one item is more difficult than another even though, after struggling with it, they did just as well as they had on an item that they (reliably) rated as much easier. A criterion for difficulty in this sense might be the length of time a test-taker devotes to the question. But the correct-answer criterion is appropriate in the fingerprint study because the research is directed at finding a method of identifying those subjective conclusions that are most likely to be correct (or incorrect).
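To make the correct-answer criterion concrete, here is a minimal sketch of how classical item difficulty is computed as the proportion of correct answers per item. The response matrix is made up for illustration; it is not data from any of the studies discussed here.

    import numpy as np

    # Toy response matrix: rows are test-takers, columns are items; 1 = correct answer.
    responses = np.array([
        [1, 1, 0, 1],
        [1, 0, 0, 1],
        [1, 1, 1, 1],
        [1, 0, 0, 0],
        [1, 1, 0, 1],
    ])

    # Classical item difficulty: the proportion of test-takers answering each item
    # correctly (higher values mean easier items).
    difficulty = responses.mean(axis=0)
    print(difficulty)   # [1.  0.6 0.2 0.8]: the first item was answered correctly by
                        # everyone, the third by only 20 percent of the test-takers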

NIJ’s new article also mentions the hotly disputed issue of whether error probabilities, as estimated by the performance of examiners making a specific set of comparisons, should be applied to a single examiner in a given case. One would think the answer is that as long as the conditions of the experiments are informative of what can happen in practice, group means are the appropriate starting point—recognizing that they are subject to adjustment by the factfinder for individualized determinations about the acuity of the examiner and the difficulty of the task at hand. However, prosecutors have argued that the general statistics are irrelevant to weighing the case-specific conclusions from their experts. The NIJ article states that
The researchers noted that being aware that some fingerprint comparisons are highly accurate whereas others may be prone to error, “demonstrates that error rates are indeed a function of comparison difficulty.” “Because error rates can be tied to comparison difficulty,” they said, “it is misleading to generalize when talking about an overall error rate for the field.”
But the assertion that “it is misleading to generalize when talking about an overall error rate for the field” cannot be found in the 59-page document. When I searched for the string “generalize,” no such sentence appeared. When I searched for “misleading,” I found the following paragraph (p. 51):
The mere fact that some fingerprint comparisons are highly accurate whereas others are prone to error has a wide range of implications. First, it demonstrates that error rates are indeed a function of comparison difficulty (as well as other factors), and it is therefore very limited (and can even be misleading) to talk about an overall “error rate” for the field as a whole. In this study, more than half the prints were evaluated with perfect accuracy by examiners, while one print was misclassified by 91 percent of those examiners evaluating it. Numerous others were also misclassified by multiple examiners. This experiment provides strong evidence that prints do vary in difficulty and that these variations also affect the likelihood of error.
As always, the inability to condition on relevant variables with unknown values “can be misleading” when making an estimate or prediction. But this fact about statistics does not make an overall mean irrelevant. Knowing that there is a high overall rate of malaria in a country is at least somewhat useful in deciding whether to take precautions against malaria when visiting that country—even though a more finely grained analysis of the specific locales within the country could be more valuable. That said, when a difficulty-adjusted estimate of a probability of error becomes available, requiring it to be presented to the triers of fact instead of the group mean would be a sound approach to the relevance objection.

The experiments described in the report to NIJ are fascinating in many respects. In the long run, the ideas and findings could lead to better estimates of accuracy (error rates) for use in court. More immediately, one can ask how the error rates seen in these experiments compare to earlier findings (reviewed in the report and on this blog). But it is hard to make meaningful comparisons. In the first of the three experiments in the NIJ-funded research, 56 examiners were recruited from participants in the 2011 IAI Educational Conference. These examiners (a few of whom were not latent-print examiners) made forced-choice judgments, under a time constraint, about the association (positive or negative) of each of many pairs of prints. The following classification table can be inferred from the text of the report:

                    Truly Positive   Truly Negative
Positive Reported        985               37
Negative Reported        163             1107
Total                   1148             1144

The observed sensitivity, P(say + | is +), across the examiners and comparisons was 985/1148 = 85.8%, and the observed specificity, P(say – | is –), was 1107/1144 = 96.8%. The corresponding conditional error proportions are 14.2% for false negatives and 3.2% for false positives. These error rates are higher than those in other research, but in those experiments, the examiners could declare a comparison to be inconclusive and did not have to make a finding within a fixed time. These constraints were modified in a subsequent experiment in the NIJ-funded study, but the report does not provide a sufficient description to produce a complete table.
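For readers who want to check the arithmetic, the short snippet below simply recomputes the sensitivity, specificity, and conditional error proportions from the counts in the table above.

    # Sensitivity, specificity, and conditional error rates from the table above.
    true_pos_said_pos, true_neg_said_pos = 985, 37
    true_pos_said_neg, true_neg_said_neg = 163, 1107

    sensitivity = true_pos_said_pos / (true_pos_said_pos + true_pos_said_neg)   # P(say + | is +)
    specificity = true_neg_said_neg / (true_neg_said_neg + true_neg_said_pos)   # P(say - | is -)

    false_negative_rate = 1 - sensitivity   # 163/1148
    false_positive_rate = 1 - specificity   # 37/1144

    print(f"sensitivity {sensitivity:.1%}, specificity {specificity:.1%}")
    print(f"false negatives {false_negative_rate:.1%}, false positives {false_positive_rate:.1%}")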

References
1. National Institute of Justice, “Defining the Difficulty of Fingerprint Comparisons,” March 22, 2018, NIJ.gov: https://nij.gov/topics/forensics/evidence/impression/Pages/defining-difficulty-of-fingerprint-comparisons.aspx

2. Jennifer Mnookin, Philip J. Kellman, Itiel Dror, Gennady Erlikhman, Patrick Garrigan, Tandra Ghose, Everett Mettler, & Dave Charlton, Error Rates for Latent Fingerprinting as a Function of Visual Complexity and Cognitive Difficulty, May 2016, https://www.ncjrs.gov/pdffiles1/nij/grants/249890.pdf

3. Philip J. Kellman, Jennifer L. Mnookin, Gennady Erlikhman, Patrick Garrigan, Tandra Ghose, Everett Mettler, David Charlton, & Itiel E. Dror, Forensic Comparison and Matching of Fingerprints: Using Quantitative Image Measures for Estimating Error Rates through Understanding and Predicting Difficulty, PLOS One, May 2, 2014, https://doi.org/10.1371/journal.pone.0094617

4. Frederic M. Lord, The Relationship of the Reliability of Multiple-Choice Tests to the Distribution of Item Difficulties, 18 Psychometrika 181 (1952).

Friday, March 16, 2018

Likelihood Ratios for Amelia Earhart

Early in the morning of July 2, 1937, Amelia Earhart took off in her twin-engine Lockheed 10E Electra from Papua New Guinea. She was in the midst of a sensational attempt to circle the globe. Her immediate destination was Howland Island. She planned to refuel on this tiny island. She never made it. The wreckage of her plane and the remains of either Earhart or the navigator who accompanied her have never been located. Or have they?

Earhart probably ran out of fuel while searching for Howland Island. But where? Nikumaroro Island lies some 350 nautical miles south of Howland. Bones found on this uninhabited coral ring in 1939 and sent to Fiji for examination (and now lost) were measured in 1941 by a physician who concluded that they probably belonged to a 45- to 55-year-old male who was around five-and-a-half feet tall.

However, an early release of an article by University of Tennessee anthropologist Richard Jantz, in the University of Florida's new journal, Forensic Anthropology, rejects this conclusion. Arguing that the skeletal features used for sex determination at the time were only weakly probative, Professor Jantz maintains that in 1941, the examiner “could easily have been presented with morphology that he considered male, even though it may have been female.” 1/

As to the lengths of the bones, Dr. Jantz compares estimates for Earhart (inferred from photographs and clothing) with the reported lengths of the Nikumaroro bones using a statistic known as the Mahalanobis distance. By this distance measure for multivariate data, Earhart is more similar to the Nikumaroro bones than 99% of individuals in a sample of bones from 2,776 other people. The sample is not clearly described in the article, but it is well known to forensic anthropologists, and “Jantz told Fox News that 2,776 individuals used in the reference group were all Americans of European ancestry [who] lived during the last half of the 19th century and most of the 20th century ... .” 2/
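For readers unfamiliar with the distance measure, the sketch below conveys the general idea of ranking a candidate by Mahalanobis distance from a target set of measurements. The bone-length values, covariance structure, and simulated reference sample are invented for illustration only; they are not the data or the procedure used in the article.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical reference sample of three bone lengths (in cm) for many individuals;
    # the real study used measurements from 2,776 people.
    reference = rng.multivariate_normal(mean=[32.0, 24.0, 37.0],
                                        cov=[[4.0, 2.0, 2.5],
                                             [2.0, 3.0, 2.0],
                                             [2.5, 2.0, 5.0]],
                                        size=2776)

    target = np.array([32.4, 24.5, 37.2])     # stand-in for the Nikumaroro bone lengths
    candidate = np.array([32.3, 24.6, 37.0])  # stand-in for Earhart's estimated lengths

    cov_inv = np.linalg.inv(np.cov(reference, rowvar=False))

    def mahalanobis(x, y, vi):
        d = x - y
        return float(np.sqrt(d @ vi @ d))

    # Distance of every reference individual, and of the candidate, from the target.
    ref_dists = np.array([mahalanobis(row, target, cov_inv) for row in reference])
    cand_dist = mahalanobis(candidate, target, cov_inv)

    # Rank 1 means the candidate is closer to the target than anyone in the sample.
    rank = int((ref_dists < cand_dist).sum()) + 1
    print(f"candidate distance {cand_dist:.2f}, rank {rank} of {len(ref_dists) + 1}")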

With respect to this sample, Jantz wrote:
Earhart’s rank is 19, meaning that 2,758 (99.28%) individuals have a greater distance from the Nikumaroro bones than Earhart, but only 18 (0.65%) have a smaller distance. The rank is subject to sampling variation, so I conducted 1,000 bootstraps of the 2,776 distances, omitting Earhart, then replacing her to determine her rank. Her rank ranged from 9 to 34, the 95% confidence intervals ranging from 12 to 29. If we take the maximum rank resulting from 1,000 bootstraps, 98.77% of the distances are greater and only 1.19% are smaller. If these numbers are converted to likelihood ratios as described by Gardner and Greiner (2006), one obtains 154 using her rank as 19, or 84 using the maximum bootstrap rank of 34. The likelihood ratios mean that the Nikumaroro bones are at least 84 times more likely to belong to Amelia Earhart than to a random individual who ended up on the island.
Let's not get bogged down in the details of the distance measure, bootstrapping, and the conversion to a likelihood ratio. 3/ There is a broader point to note. A likelihood ratio of at least 84 hardly means that “the Nikumaroro bones are at least 84 times more likely to belong to Amelia Earhart than to a random individual” — or even to a random Caucasian-American of the relevant time period. Likelihoods are measures of statistical support for a claim like the one that the particular bones are those of Amelia Earhart. The ratio indicates how much the bone-length data support the Earhart source hypothesis as contrasted to the support those data provide for the random-Caucasian-American source hypothesis.

Relative support contributes to the odds in favor of a hypothesis, but it does not express those odds directly. Suppose I pick a coin at random from a box that contains one trick coin (it has heads on both sides) and 128 fair coins. I flip the coin seven times and observe seven heads. The likelihood ratio for the hypotheses of the trick coin as opposed to a fair coin is the probability of the data (seven out of seven heads) for the trick coin divided by the probability for a fair coin. The value of the ratio therefore is 1 / (1/2)⁷ = 1 / (1/128) = 128. But the odds that it is a trick coin are nowhere near 128 to 1. They are 1 to 1. Because it is no more probable that the coin is a trick coin than a fair one, I cannot even say that the preponderance of the statistical evidence favors the trick-coin hypothesis. See Box 1.
Box 1. The Odds on the Trick Coin

Consider what would happen if we repeated the coin picking and tossing experiment 129 times (replacing the picked coin each time). We expect to pick the trick coin from the box only once and hence to see seven heads for that reason one time in 129. We expect to pick fair coins the other 128 times. We also expect that one of the 128 seven-flip tests of these fair coins will produce seven heads. Observing seven heads does not prove that the coin is more likely to be a trick coin than a fair one. We should post the same odds on each hypothesis about the coin. (This heuristic argument easily could be replaced with a more rigorous proof using Bayes' rule.)
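The heuristic argument in Box 1 is easy to check by simulation. The following sketch (a Monte Carlo check written for this post, not anything from the sources cited) repeatedly draws a coin from a box holding one two-headed coin and 128 fair coins, flips it seven times, and tallies how often a run of seven heads comes from each kind of coin.

    import random

    TRIALS = 200_000
    FAIR_COINS = 128   # plus one two-headed "trick" coin in the box
    FLIPS = 7

    trick_sevens = fair_sevens = 0
    for _ in range(TRIALS):
        trick = (random.randrange(FAIR_COINS + 1) == 0)   # pick one coin from the 129
        if trick:
            heads = FLIPS                                  # two-headed coin always lands heads
        else:
            heads = sum(random.random() < 0.5 for _ in range(FLIPS))
        if heads == FLIPS:
            if trick:
                trick_sevens += 1
            else:
                fair_sevens += 1

    # Among the trials that produce seven heads, roughly half involve the trick coin,
    # so the posterior odds are about 1 to 1 despite the likelihood ratio of 128.
    print(trick_sevens, fair_sevens, trick_sevens / max(fair_sevens, 1))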


Unfortunately, the statement that “the Nikumaroro bones are at least 84 times more likely to belong to Amelia Earhart than to a random individual” sounds like an assertion that the odds on Earhart as opposed to “a random individual” are at least 84 to 1. Mathematically, if there were no other possibilities to consider, these posterior odds would mean that the probability that the bones are Earhart’s is at least 84/(84+1) = 0.988, or just about 99%. If the probability that the bones are from a member of some other ancestral group (such as Micronesians) is, say, 1/10, then the probability for Earhart would decline to 0.889 (about 89%). See Box 2.
Box 2. Going from the Odds of Two Non-exhaustive Hypotheses to a Probability

Let E denote the event that the bones are Earhart’s, let R be the event that they are those of “a random individual” (among all Caucasian Americans of the time), and let X be the event that they come from someone else. Also, let p, r, and x be the probabilities of each of these events, respectively. Then 84 = p/r, and p + r + x = 1. Substituting and rearranging terms, it follows that p = 84(1–x) / 85. If x = 0, the probability of E is p = 84/85 = 0.988. If x = 1/10, p = (9/10)×(84/85) = 0.889. 4/
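A few lines of code confirm the arithmetic in Box 2; the function below is just a restatement of the algebra there.

    # Probability of E when a third hypothesis X has probability x, given odds p/r = 84
    # for E versus the "random individual" hypothesis R (from the algebra in Box 2).
    def prob_earhart(odds_e_vs_r=84, x=0.0):
        # p/r = odds and p + r + x = 1  =>  p = odds * (1 - x) / (odds + 1)
        return odds_e_vs_r * (1 - x) / (odds_e_vs_r + 1)

    print(prob_earhart(x=0.0))   # 0.988...
    print(prob_earhart(x=0.1))   # 0.889...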
Journalists have picked up on the 99% figure — in some strange ways. The BBC announced that the Forensic Anthropology article “claims they have a 99% match, contradicting an earlier conclusion.” 5/ But being in the upper percentile on a list of distances is not the same as being 99% similar. Fox News crowed that “Amelia Earhart Disappearance '99 Percent' Solved,” 6/ whatever that may mean.

Other than seeming to conflate the likelihood ratio of 84 with the odds in favor of Earhart, the article does not contend that the probability that the bones are Earhart's is at least 99%. Hewing to the proper interpretation of likelihood as a measure of support for a hypothesis, Professor Jantz wrote that his “analysis ... strongly supports the conclusion that the Nikumaroro bones belonged to Amelia Earhart.” He did go on to mention Bayes' rule and to illustrate what the posterior probability of this hypothesis might be, but following the recommended forensic practice, he did not settle on a specific prior probability. 7/ That probability turns on the non-anthropological information in the case — things like the fact that the Coast Guard cutter off Howland Island received radio transmissions from Earhart (suggesting that she was not near Nikumaroro Island). Nevertheless, the article seems to propose that the bone lengths in and of themselves prove that the remains probably are Earhart's. It states that:
If [the] sex estimate, can be set aside, it becomes possible to focus attention on the central question of whether the Nikumaroro bones may have been the remains of Amelia Earhart. There is no credible evidence that would support excluding them. On the contrary, there are good reasons for including them. The bones are consistent with Earhart in all respects we know or can reasonably infer. Her height is entirely consistent with the bones. The skull measurements are at least suggestive of female. But most convincing is the similarity of the bone lengths to the reconstructed lengths of Earhart’s bones. Likelihood ratios of 84–154 would not qualify as a positive identification by the criteria of modern forensic practice, where likelihood ratios are often millions or more. They do qualify as what is often called the preponderance of the evidence, that is, it is more likely than not the Nikumaroro bones were (or are, if they still exist) those of Amelia Earhart. If the bones do not belong to Amelia Earhart, then they are from someone very similar to her. And, as we have seen, a random individual has a very low probability of possessing that degree of similarity.
Contrary to the quoted suggestion that these figures “qualify as what is often called the preponderance of the evidence,” likelihood ratios of 84–154 do not necessarily mean that the evidence satisfies the preponderance-of-the-evidence or more-probable-than-not standard of most civil litigation. The legal standard applies to a posterior probability, not to a likelihood ratio standing alone. 8/ Even a large likelihood ratio may not suffice to overcome a small prior probability. (That is what happened in the coin-flipping example of Box 1.) Conversely, even a small likelihood ratio may be enough to boost a prior probability into the more-probable-than-not range.

Professor Jantz recognizes that no one can resolve the historical mystery on the basis of his statistical analysis alone. He writes:
Ideally in forensic practice a posterior probability that remains belong to a victim can be obtained. Likelihood ratios can be converted to posterior odds by multiplying by the prior odds. For example, if we think the prior odds of Amelia Earhart having been on Nikumaroro Island are 10:1, then the likelihood ratios given above become 840–1,540, and the posterior probability is 0.999 in both cases. The prior odds or prior probability pertain to information available before skeletal evidence is considered. It is often impossible to assign specific numbers to the prior probability, because it depends on how the non-osteological evidence is evaluated, and different people will usually evaluate it differently. In jury trials, experts are often advised to testify only to the likelihood ratio developed from the biological evidence. The jury then supplies its own prior odds based on the entire context (e.g., Steadman et al. 2006).
Judging the entire historical record, Jantz adopts a high prior probability (perhaps higher than the 10:1 figure for the prior odds quoted above) to conclude that “[u]ntil definitive evidence is presented that the remains are not those of Amelia Earhart, the most convincing argument is that they are hers.” In other words, the product of the moderately large likelihood ratio and the prior odds (already sufficient to establish a preponderance) is so large that only definitive evidence for an alternative hypothesis could possibly overcome it.
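As a numerical check on the passage quoted above, the conversion from a likelihood ratio and prior odds to a posterior probability takes only a few lines of code (a generic Bayes-rule calculation, not anything specific to the article).

    # Posterior probability from a likelihood ratio and prior odds, using the figures
    # discussed above (likelihood ratios of 84 and 154, prior odds of 10:1).
    def posterior_probability(likelihood_ratio, prior_odds):
        post_odds = likelihood_ratio * prior_odds
        return post_odds / (1 + post_odds)

    for lr in (84, 154):
        # prints about 0.9988 and 0.9994 -- "0.999 in both cases," as the quote says
        print(lr, round(posterior_probability(lr, 10), 4))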

* * *

In the end, do “modern methods produce results that suggest a 99 percent certainty that the bones belonged to Earhart,” as a respected fact-checking website concluded? 9/ Well, the “modern methods” try to exploit the anthropological data more fully than the earlier analyses, but any conclusion about “the certainty that the bones belonged to Earhart” necessarily rests on a judgment of other information as well — the radio transmissions received by the Coast Guard cutter, the failure to spot any signs of Earhart’s presence in a contemporaneous search of the island, other artifacts found on the island in later investigations, and much more.

Professor Jantz is widely reported to have a personal probability of 99% for Earhart as the source of the remains. ABC News, for example, quoted him as stating that "I am 99 percent sure that these bones belong to Amelia Earhart." 10/ This level of belief may be appropriate, based on his review of all the historical information and his latest statistical analysis of the bone lengths. But the article, at least, does not assign a posterior probability to what it presents as “the most convincing argument,” and a forensic anthropologist who did so in court would be relying on knowledge from outside the realm of forensic osteology.

NOTES
  1. Richard L. Jantz, Amelia Earhart and the Nikumaroro Bones: A 1941 Analysis versus Modern Quantitative Techniques, 1(2) Forensic Anthropology 1-16 (2018), available at http://journals.upress.ufl.edu/fa/article/view/525/518.
  2. James Rogers, Amelia Earhart Mystery Solved? Scientist '99 Percent' Sure Bones Found Belong to Aviator, Fox News, Mar. 7, 2018, http://www.foxnews.com/science/2018/03/07/amelia-earhart-mystery-solved-scientist-99-percent-sure-bones-found-belong-to-aviator.html.
  3. The length estimates for Earhart's bones are not exact, bootstrapping is not the same as drawing repeated probability samples from the desired population, and (as discussed in the article) deriving a likelihood ratio involves categorizing continuous measurements into discrete intervals whose size is somewhat arbitrary. Accounting for these sources of uncertainty would produce a broader range of plausible likelihood ratios.
  4. Jantz argues that the sizes of the recovered bones are less typical of Pacific Islanders than of “Euro-Americans,” but the article does not maintain that the probability of a different ancestry is zero or that it should be ignored.
  5. Amelia Earhart: Island Bones 'Likely' Belonged to Famed Pilot, BBC News, Mar. 8, 2018, http://www.bbc.com/news/world-us-canada-43323944.
  6. Rogers, supra note 2.
  7. Cf. Ira M. Ellman & David H. Kaye, Probabilities and Proof: Can HLA and Blood Test Evidence Prove Paternity?, 55 NYU L. Rev. 1131 (1979), available at http://ssrn.com/abstract=1466964.
  8. See, e.g., John Kaplan, Decision Theory and the Factfinding Process, 20 Stan. L. Rev. 1065 (1968); David H. Kaye, Clarifying the Burden of Persuasion: What Bayesian Decision Rules Do and Do Not Do, 3 Int'l J. Evid. & Proof 1 (1999), available at http://ssrn.com/abstract=2702990.
  9. Alex Kasprak, Have Amelia Earhart’s Remains Been Located?, Mar. 15, 2018, https://www.snopes.com/fact-check/amelia-earharts-remains-located/.
  10. E.g., Professor Believes Bones Found on Pacific Island Belong to Amelia Earhart, ABC7 Eyewitness News, http://abc7chicago.com/science/professor-believes-bones-found-on-pacific-island-belong-to-amelia-earhart/3190174/.

Saturday, March 10, 2018

Exposing Source Code in Law and Science

With the increasing use of "probabilistic genotyping software" has come a push for making the software public. 1/ Indeed, one federal judge in the Southern District of New York went so far as to compel New York City's Office of the Chief Medical Examiner to make public the source code of the program it created to interpret the pattern of peaks and valleys in the graphs used to determine the nature of the DNA giving rise to them. 2/

Whether this will produce a more thorough analysis of how well the software performs (compared to other ways of testing the program, including releasing the code for inspection by defense experts subject to protective orders) remains to be seen. I have heard more than one expert involved in developing such software observe that releasing the source code is not going to do much to help anyone understand how well the program works. Very lengthy and complex programs may not consist entirely of modules that can be tested separately, and it can be difficult for testers who did not develop the software to acquire the insight needed to conduct effective white-box testing. As far as I know, the FAA and the FDA do not insist on disclosure of all source code to approve the use of avionics and medical-device software, respectively.
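For readers unfamiliar with the distinction between white-box and black-box testing (discussed further in note 1 below), here is a minimal, purely hypothetical sketch of a black-box test: it feeds a program known inputs and checks the outputs against known answers without ever looking at the source code. The "program under test" is an invented toy, not any actual probabilistic genotyping software.

    # Toy "program under test": estimated proportion of contributor A in a
    # two-person mixture, computed from two peak heights. Invented for illustration.
    def mixture_weight(peak_a, peak_b):
        total = peak_a + peak_b
        return peak_a / total if total else 0.0

    def black_box_tests():
        # Cases with known correct answers; only inputs and outputs are examined.
        cases = [
            ((1000, 1000), 0.5),   # balanced peaks -> equal mixture
            ((3000, 1000), 0.75),  # 3:1 peaks -> 75% contributor A
            ((0, 0), 0.0),         # degenerate input handled gracefully
        ]
        for (a, b), expected in cases:
            got = mixture_weight(a, b)
            assert abs(got - expected) < 1e-9, f"inputs {(a, b)}: got {got}, expected {expected}"

    black_box_tests()
    print("all black-box checks passed")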

Thus, a new editorial policy announced by a preeminent scientific journal, Nature, is noteworthy. 3/ It expresses a strong preference — but not an absolute requirement — for making the source code available to peer reviewers and then to the scientific community. Excerpts follow:
... From this week, Nature journal editors handling papers in which code is central to the main claims or is the main novelty of the work will, on a case-by-case basis, ask reviewers to check how well the code works .... Computational science — like other disciplines — is grappling with reproducibility problems, partly because researchers find it difficult to reproduce results based on custom-built algorithms or software. ...

Some journals have for years ... ensured that the new code or software is checked by peer reviewers and published along with the paper. When relevant, Nature Methods, Nature Biotechnology and, most recently, journals including Nature and Nature Neuroscience encourage authors to provide the source code, installation guide and a sample data set, and to make this code available to reviewers for checking. ...

According to the guidelines, authors must disclose any restrictions on a program’s accessibility when they submit a paper. [I]n some cases — such as commercial applications — authors may not be able to make all details fully available. Together, editors and reviewers will decide how the code or mathematical algorithm must be presented and released to allow the paper to be published.

... We also recognize that preparing code in a form that is useful to others, or sharing it, is still not common in some areas of science.

Nevertheless, we expect that most authors and reviewers will see value in the practice. Last year, Nature Methods and Nature Biotechnology between them published 47 articles that hinged on new code or software. Of these, approximately 85% included the source code for review.

[A]lthough many researchers already embrace the idea of releasing their code on publication, we hope this initiative will encourage more to do so.
Notes
  1. See Jason Tashea, Code of Silence: Defense Lawyers Want to Peek Behind the Curtain of Probabilistic Genotyping, ABA J., Dec. 2017, at 18. The article asserts that "companies developing these tools 'black-box.' This means there is limited or no capacity to review the math; therefore, it cannot be independently challenged." Id. at 19. In computer science and engineering, however, "black box" is not a pejorative term, and it certainly does not mean that no testing is possible. Quite the contrary, it refers to a type of testing (also known as "behavioral testing") that "independently challenges" the program. It does so by checking whether the program's output is correct for different inputs. This type of testing has advantages and disadvantages compared to "white box" testing. Obviously, a combination of both types of testing is more complete than either one by itself.
  2. E.g., Jason Tashea, Federal Judge Releases DNA Software Source Code That Was Used by New York City's Crime Lab, ABA J., Oct. 20, 2017, http://www.abajournal.com/news/article/federal_judge_releases_dna_software_source_code. The article asserts that "[p]robabilistic genotyping does not define a DNA sample itself; rather, it is an interpretive software that runs multiple scenarios, like risk analysis tools used in finance, to analyze the sample." The explanation could be clearer. The New York City OCME's program that was the subject of the judge's order models the stochastic process that produces the "peaks" that indicate the presence of certain features of the DNA in the sample. See David H. Kaye, SWGDAM Guidelines on "Probabilistic Genotyping Systems" (Part 2), Forensic Sci., Stat. & L., Oct. 25, 2015, http://for-sci-law.blogspot.com/2015/10/guidelines-on-probabilistic-genotyping.html
  3. Editorial, Does Your Code Stand Up to Scrutiny?, 555 Nature 142 (2018), https://www.nature.com/articles/d41586-018-02741-4?WT.ec_id=NATURE-20180309.

A DNA Dog Fight

Want to track the neighborhood dogs depositing potentially pathogen-laden poop in public places? Two or three scrappy firms will do it with DNA profiling. The big dog in the market seems to be BioPet Vet Lab of Knoxville, Tennessee, with its PooPrints service. It claims to have "eliminated the dog waste problem in 3,000 properties across the U.S." thanks to its "patented DNA World Pet Registry database." [1]

BioPet sued a Dallas, Texas, competitor who had been a local distributor for PooPrints and then established his own brand.  The new company, PoopCSI, boasted that "We are the only firm used by Federal Prosecutors to link Pet DNA from dog feces to convict a man in the home-invasion and rap [sic] of a woman in Texas. The Canine CODIS database in which [sic] we invented is the first multi-agency forensic DNA database of dogs." [2]

Actually, the "Canine CODIS" is not a Dallas creation, but a joint effort involving the more reputable Veterinary Genetics Laboratory of the University of California at Davis to cope with the serious business of dog fighting competitions. [3] VGL Forensics also does DNA testing for law enforcement agencies in cases of animal attacks and when animal DNA "from hair, saliva, blood, urine, or feces [occurs] during the commission of a crime—from the victim's pet to the suspect or crime scene, and from the suspect's pet to the victim or crime scene." [4]

PoopCSI is now called PET CSI. It accuses PooPrints ("dog poop franchise competitors") of "spread[ing] false rumors and produc[ing] fake press releases," having a "fraudulent business model," using "deceptive trade practices," and "theft of pet DNA intellectual property." [5] PET CSI has the worst possible rating (F) from the Better Business Bureau.

There is some irony in PET CSI's stream-of-consciousness complaint that "They are now attempting to lay claim they invented the DNA pet waste matching service when our lab has been doing this it since 1955 and even on their own website they mention they only got started 2008 and only location as early as last year got started." [5] Neither PET CSI nor DNA profiling methods existed in 1955 -- a scant two years after Watson and Crick elucidated the structure of DNA. Human forensic DNA profiling did not begin until at least 30 years later. [6] 1955 might have been the year that the UC-Davis veterinary lab -- which must be "our lab," as indicated by PET CSI's unacknowledged copying of material from VGL's website -- started, under the name of "the Serology Laboratory, ... established ... for the purpose of verifying parentage for cattle registries" using blood typing. [7]

Also nipping at PooPrints' heels is Mr Dog Poop's CRIME LAB. [8] This Tampa, Florida, upstart contrasts its computer database technology as "star trek" compared to PooPrints' "stone-age" methods. Not only does it offer "Dog poop DNA testing [for] $35/dog," but it conducts "Poop And Run DNA Investigations [for] Only $50/incident." After all, "If the FBI can use DNA technology to enforce the law, why can't HOAs, COAs and Property managers?" [9]

This development dismayed one Washington Post writer, who lamented
Yes, it has come to this: We live in a society where, rather than speaking to one another and gingerly asking neighbors to clean up their dogs’ messes, we mail a portion of said messes to Tennessee in a small bottle so that, using genetic sequencing and mathematical logarithms [sic], the canine hooligan can be identified. [10]
Notes
  1. The DNA Solution for Pet Waste Management, https://www.pooprints.com/. For varying comments from property managers, see Rick Montgomery, Growing Pet DNA Industry Identifies Poop Offenders, Kansas City Star, May 1, 2014, http://www.kansascity.com/news/local/article348234/Growing-pet-DNA-industry-identifies-poop-offenders.html; PooPrints is [sic] Fabricated its Pet Waste Business Nationwide, Ripoff Report, Apr. 30, 2015, https://www.ripoffreport.com/reports/pooprints/nationwide/pooprints-is-fabricated-its-pet-waste-business-nationwide-1225902.
  2. Eric Nicholson, Two Companies, PooPrints and PoopCSI, Are Battling for the Right to DNA Test Dallas' Dog Crap, Dallas Observer, Aug. 20, 2013, http://www.dallasobserver.com/news/two-companies-pooprints-and-poopcsi-are-battling-for-the-right-to-dna-test-dallas-dog-crap-7102482.
  3. University of California at Davis Veterinary Genetics Laboratory, Canine CODIS: Using a CODIS (Combined DNA Index System) to Fight Dog Fighting, https://www.vgl.ucdavis.edu/forensics/CANINECODIS.php.
  4. University of California at Davis Veterinary Genetics Laboratory, VGL Forensics, 2018, https://www.vgl.ucdavis.edu/forensics/index.php.
  5. PET CSI® Difference, 2018, http://www.petcsi.com/pet-csi-difference.
  6. David H. Kaye, The Double Helix and the Law of Evidence (2010).
  7. University of California at Davis Veterinary Genetics Laboratory, About the Veterinary Genetics Laboratory, 2018, https://www.vgl.ucdavis.edu/vgl/about.php.
  8. Compare DNA Dog Poop Services, http://mrdogpoop.com/splash/compare.html.
  9. What Is Mr Dog Poop's® CRIME LAB® Dog Poop DNA Service?, http://mrdogpoop.com.
  10. Karen Heller, Using DNA to Catch Canine Culprits — and Their Owners, Wash. Post, Dec. 26, 2014, available at https://www.washingtonpost.com/lifestyle/style/using-dna-to-catch-canine-culprits--and-their-owners/2014/12/26/8d833fc8-8247-11e4-8882-03cf08410beb_story.html?utm_term=.7c49e9dc734a.
Related News Stories
  • Debra Cassens Weiss, Lawyer Says Condo’s Proposed PooPrints DNA Program Is ‘Absolutely Ridiculous,' ABAJ, May 20, 2010, http://www.abajournal.com/news/article/lawyer_says_condos_suggested_pooprints_dna_program_is_absolutely_ridiculous/
  • Stanley Coren, CSI Meets Dog Poop, Psychology Today,  June 30, 2011, https://www.psychologytoday.com/blog/canine-corner/201106/csi-meets-dog-poop.
  • Danny Lewis, Dog Owners Beware, DNA in Dog Poop Could Be Used to Track You Down, Smithsonian Mag., Mar. 30, 2016, https://www.smithsonianmag.com/smart-news/dog-owners-beware-dna-dog-poop-could-used-track-you-down-180958596/
  • Maria Arcega-Dunn, City Uses DNA to Track Dog Owners Who Don’t Pick Up, Fox 5 (San Diego), Apr. 29, 2015, http://fox5sandiego.com/2015/04/29/san-diego-turns-to-dna-to-trace-dog-owners-who-dont-pick-up/
  • 7NEWS, Colorado Dog Owners Fined Hundreds After Not Picking Up Pet Poop, Denver Post, Nov. 10, 2015, https://www.denverpost.com/2015/11/10/colorado-dog-owners-fined-hundreds-after-not-picking-up-pet-poop/
  • Eli Pace, Breckenridge Weighs DNA Testing Dog Poop After Complaints Pile Up, Summit Daily, Mar. 5, 2018, https://www.summitdaily.com/news/breckenridge-weighs-dna-testing-dog-poop-after-complaints-pile-up/

Wednesday, March 7, 2018

Four Cases and Two Meanings of a Likelihood Ratio

In Transposing Likelihoods, I quoted descriptions of likelihood ratios in four recent cases. The opinion in Commonwealth v. McClellan, No. 2014 EDA 2016, 2018 WL 560762 (Pa. Super. Jan. 26, 2018), is the only one that presents a likelihood as a probability of the evidence given a hypothesis. The opinion refers to “the conclusion that the DNA sample was at least 29 times more probable if the sample originated from Appellant ... than if it originated from a relative to Appellant and two unknown, unrelated individuals.” Evidence that is 29 times more probable under one hypothesis than under some rival hypothesis favors the former hypothesis over the latter.

The other three opinions mischaracterize the likelihood ratio by treating it as a statement about the relative probabilities of the hypotheses themselves. 1/

To give a simple example of the conceptual difference between a likelihood and a probability, suppose a card is drawn from a well-shuffled deck, and ♦ is the hypothesis that it is a diamond. A witness who never makes a mistake informs us that the card is red. The witness’s report certainly is relevant evidence. It is more probable to occur when the card is a diamond than when it is not. The likelihood ratio simply tells us how many times more probable the evidence is for the hypothesis ♦ than for the rival, composite hypothesis of ♥ or ♠ or ♣.

The value of the likelihood ratio in this case is 3. To see this, first consider the probability of this evidence E if ♦ is true. Because all diamond cards are red, this conditional probability is P(E|♦) = 1.

The probability of E if ♥ or ♠ or ♣ is true is the proportion of red cards among the hearts, spades, and clubs. This proportion is 1/3.

The likelihood ratio for diamonds therefore is P(E | ♦) / P(E | ♥ or ♠ or ♣) = 1 / (1/3) = 3. It is three times more probable to learn that the randomly drawn card is red if it is a diamond than if it is not.

It does not follow, however, that the probability that the card is a diamond given that it is red is three times the probability that it is not a diamond given that it is red. Because half of the red cards are diamonds, the conditional probability of a diamond is P(♦ | E) = ½. Thus, the ratio of the probability of ♦ to the probability of non-♦, given E, is only ½ / ½ = 1. 2/
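The card example can be verified by brute-force enumeration of the deck; the short check below uses Python's fractions module for exact arithmetic.

    from fractions import Fraction

    # Enumerate a 52-card deck to verify the figures in the card example.
    suits = {"diamond": "red", "heart": "red", "spade": "black", "club": "black"}
    deck = [(suit, color) for suit, color in suits.items() for _ in range(13)]

    red = [c for c in deck if c[1] == "red"]
    diamonds = [c for c in deck if c[0] == "diamond"]

    p_red_given_diamond = Fraction(sum(1 for c in diamonds if c[1] == "red"),
                                   len(diamonds))                                  # 1
    p_red_given_not_diamond = Fraction(sum(1 for c in deck
                                           if c[1] == "red" and c[0] != "diamond"),
                                       len(deck) - len(diamonds))                  # 1/3
    likelihood_ratio = p_red_given_diamond / p_red_given_not_diamond               # 3

    p_diamond_given_red = Fraction(sum(1 for c in red if c[0] == "diamond"),
                                   len(red))                                       # 1/2

    print(likelihood_ratio, p_diamond_given_red)   # prints "3 1/2"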

Notes
  1. The magnitude of the ratio in these cases makes the error resulting from the transposition somewhat academic. See False, But Highly Persuasive: How Wrong Were the Probability Estimates in McDaniel v. Brown?, 108 Mich. L. Rev. First Impressions 1 (2009).
  2. Before learning the card's color, the probability it was a diamond was 1/4 (odds of 1:3). After the witness's report, the probability became 1/2 (odds of 1:1). In the terminology of Bayesian inference, the posterior odds of 1:1 are the Bayes factor of 3 times the prior odds of 1:3. That is, 3 x 1:3 = 3:3 = 1:1.