Monthly Archives: January 2010

New York to Use Partial DNA Database Matches that Implicate Relatives

The New York Times reported today that New York’s Commission on Forensic Science has approved allowing “forensic investigators working for the State Police to share information about partial matches with local law enforcement agencies.” The idea is simple. If a crime-scene sample matches a profile in the database at most, but not all, loci, the individual from the database is excluded — but his (or her) brother (or other close relative) is much more likely to be the source of the DNA than some random, unrelated individual. Thus, the near-miss in the database is an investigative clue. As ACLU staff (quoted in the article) point out, this procedure effectively expands the size of the database but makes it less accurate when it points to a relative in this indirect fashion.

In the old Science & Law Blog, I wrote (on May 6, 2008) about the Fourth Amendment implications of the practice:

My brother’s DNA: Near-miss DNA searching

California has adopted an aggressive policy toward near-miss DNA searching — something discussed in this blog before. The state is going to compare DNA profiles recovered from crime scenes to those in its offender database (1) to see if there are any “cold hits” to convicted offenders and arrestees, and (2) to see if there are any almost-matching profiles that are likely to have come from a very close relative.

The first procedure has been upheld in case after case challenging its constitutionality (in the context of convicted offenders). Why would the second procedure be constitutionally defective? According to a Los Angeles Times article of April 26 on the California policy, some lawyers think it is an unreasonable search that might run afoul of the Fourth Amendment. The paper also quotes “Tania Simoncelli, science advisor to the American Civil Liberties Union,” as asserting that “The fact that my brother committed a crime doesn’t mean I should have to give up my privacy!”

This cri de coeur surely is sincere, and it may not be meant as a constitutional argument, but it is interesting to ask whether it supplies a plausible principle for applying the Fourth Amendment. Consider the following case: You have an identical twin brother. He robs a bank, is locked away in prison, and his DNA profile is put in an offender database. This can happen even though his DNA was not evidence in the bank robbery case and had nothing to do with that crime.

While your brother is out of circulation, you break into a house, cutting your hand on the glass of a window that you shattered to gain entry. A tiny bloodstain with your DNA on it is analyzed. The profile is compared to those in the database. It matches the one on file perfectly — your brother’s — because identical twins have the same DNA sequences. But the police know that your brother was in prison when the house was burgled. They scratch their heads until they realize that he might have an identical twin with identical DNA.

So the police investigate you and find plenty of other evidence against you. Now you are facing trial. You move to exclude evidence that your DNA matches that in the bloodstain on the ground that this discovery is the result of an unreasonable search, arguing that “the fact that my brother committed a crime doesn’t mean I should have to give up my privacy!” Not only that, you contend that the rest of the evidence must be suppressed because all of it is the fruit of this illegal search.

I do not see how anyone (who agrees that convicted-offender databases that include bank robbers are constitutional) can argue that this search infringes the Fourth Amendment. It is too bad that you and your brother share the same DNA profile, but the police have not forced you to surrender your DNA, and you have no right to stop them from checking your brother’s DNA to see if he might be responsible. By checking him, they learn something about you. You might not like it, but let’s face it, this probably is not the first time that your brother got you into trouble.

Of course, the California policy is not limited to identical twins. Furthermore, it involves partial matches and less complete information. All that I have tried to show is that the slogan that “the fact that my brother committed a crime doesn’t mean I should have to give up my privacy!” does not settle any constitutional question. It states the conclusion of what must be a rather complex argument about (1) the privacy of information that identifies a class of individuals and (2) the power of the state to investigate one individual on the basis of information it legitimately obtains from another individual.

* * *

Another argument against near-miss searching is that it is discriminatory. From the old blog (April 9, 2007):

Near-miss DNA Searching

“Familial searching” is back in the news.  60 Minutes had a segment on it last week called “A Not So Perfect Match: How Near-DNA Matches Can Incriminate Relatives of Criminals,” and the LA Times ran an editorial by UCLA Professor Jennifer Mnookin entitled “The Problem with Expanding DNA Searches: They Could Locate Not Just Convicted Criminals But Also Relatives — Violating Privacy.”

The phrase “familial searching” is slightly misleading. As Mnookin notes, when a DNA sample from a crime scene is almost — but not quite — a match to a particular individual in the convicted-offender database, it could well come from a full sibling or a parent or child. As one moves farther out on the family tree, however, it is difficult to distinguish relatives from unrelated individuals with the DNA types listed in the database.

Although one cannot expect too much from short editorials and TV clips, it may be worth noting and commenting on some of the arguments against near-miss searching floated in these media. The major argument offered on the 60 Minutes show was that looking for leads to relatives is “genetic surveillance.” Of course, this is more of a slogan than an argument. Calling the practice “genetic” or “surveillance” does not make it wrong. People would prefer not to come to the attention of the authorities, but what is the underlying right that following these leads violates? Or is the argument not about rights, but policy? Is the unarticulated premise that the police should not have a way of tracking the whereabouts of large numbers of people who are not (yet) known to have done anything wrong? Perhaps, but don’t people become suspects for all kinds of reasons beyond their control all the time?

Professor Mnookin formulates the point somewhat differently when she writes that “[p]ut plainly, it is discriminatory. If I have the bad luck to have a close relative who has been convicted of a violent crime, authorities could find me using familial search techniques. If my neighbor, who has the good fortune to lack felonious relatives, left a biological sample at a crime scene, the DNA database would not offer any information that could lead to her.” The “discrimination” here is that people whose parents, children, or siblings are convicted criminals can be caught. But why is this under-inclusiveness such a serious concern? By this logic, wouldn’t it be equally discriminatory to seek or follow up on leads by interrogating friends of a criminal? To paraphrase the editorial, “If I have the bad luck to have a friend who is willing to talk to the police, authorities could find me using interrogation techniques. If my neighbor, who has the good fortune to lack loose-lipped friends, committed the same crime, the interrogation would not offer any information that could lead to her.” “Discrimination” that arises from “bad luck” is not generally a concern. Something else must be doing the work here.

Another less-than-obvious claim cast in terms of “discrimination” or “fairness” is that “those people who just happen to be related to criminals have not given up their privacy rights as a consequence of their actions. To use a search technique that targets them simply because of who their relatives are is simply not fair.” But it is not apparent that there is any fundamental “privacy right” to be free from becoming the target of an investigation because of one’s associations with individuals who come to the attention of the police. Suppose that I commit a crime all by myself but I have a nosy neighbor who shadowed me. He gets caught committing a totally unrelated crime, and he bargains for a lower sentence by offering to rat on me. Would we say that “those people like me, who just happen to be living next to nosy criminals, have not given up their privacy rights as a consequence of their actions. To use a search technique that targets them simply because of who their neighbors are is simply not fair”?

A more troubling point is that near-miss searching will have a disparate racial and economic impact because racial minorities and less affluent individuals are overrepresented among convicted offenders. Is the disparate impact acceptable for the convicts but not for their closest relatives? Mnookin points out that in upholding the constitutionality of convicted-offender databases, courts have suggested that offenders lose privacy rights by virtue of their offenses. I am skeptical of this “forfeiture of rights” argument as the ground for upholding convicted-offender databases, but it is a common intuition, and many courts have relied on it to overcome the Fourth Amendment claims of convicted offenders. Notice, however, that the right to be free from bodily invasion asserted in those cases has no application to near-miss searching. Under current Fourth Amendment doctrine, no “search” occurs in looking at validly obtained DNA profiles to determine if there are any near matches. That said, the disparate-impact concern remains, at least as a policy matter. The inequity exists with or without near-miss searching, but more people are affected if near-miss searching is performed.

The editorial tosses in a practical argument: “the broader the parameters for partial match searches, the more likely false positives become.” But what is a “false positive” here? It is not a false conviction. If a close relative did not deposit the crime-scene DNA, then it is improbable that DNA testing of this individual will establish a total match. Testing a falsely identified relative thus will exculpate him. This is not to denigrate the individual’s interest in not becoming a “person of interest” to the authorities, even if the interest is temporary, but such false leads are also a concern for the police because they waste time and resources. If the parameters are set so wide as to include large numbers of false leads, then the police will find the technique frustrating, and it will not be used very often. Furthermore, even if the “parameters” were grossly overinclusive, producing many bad near-matches, most of the false leads could be detected in the laboratory with the existing samples from the crime and the nearly matching convicted offenders. If a brother, son, or father of an actual rapist is in the offender database, then he will have the same Y chromosome as the rapist. If the samples do not match at loci on the Y chromosome, then the near-miss offender can be crossed off the list. In this way, false leads to close relatives of an individual in the database can be largely eliminated by testing at Y-STRs or Y-SNPs in rape cases (or others with male offenders).
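To make the screening step concrete, here is a minimal sketch in Python. The loci, allele values, and candidate names are invented for illustration, and real Y-STR profiles cover many more loci:

    # Y-chromosome screen for near-miss leads. A father, son, or brother of
    # the male offender shares his Y-STR haplotype (barring mutation), so a
    # near-miss candidate with a different haplotype can be crossed off.
    crime_scene_y = {'DYS19': 14, 'DYS390': 24, 'DYS391': 10}  # invented

    near_miss_candidates = {
        'candidate A': {'DYS19': 14, 'DYS390': 24, 'DYS391': 10},
        'candidate B': {'DYS19': 15, 'DYS390': 23, 'DYS391': 11},
    }

    leads = [name for name, haplotype in near_miss_candidates.items()
             if haplotype == crime_scene_y]
    print(leads)  # only candidate A survives the screen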

Professor Mnookin concludes that “as a matter of fairness, it ought to be all or nothing.” Does this mean that (1) either everybody should be in the law-enforcement identification databases or nobody should be, or rather that (2) either everybody should be in the law-enforcement identification databases or only convicted offenders should be? Whichever is intended, she is right about one thing — near-miss searching is a step in the direction of a more universal database.

References

Jeremy W. Peters, New Rule Allows Use of Partial DNA Matches, N.Y. Times, Jan. 25, 2010.

McDaniel v. Brown: Prosecutorial and Expert Misstatements of Probabilities Do Not Justify Postconviction Relief — At Least Not Here and Not Now

In The Double Helix and the Law of Evidence (pp. 173-176), I briefly described the Ninth Circuit Court of Appeals’ muddled opinion upholding a writ of habeas corpus in Brown v. Farwell, 525 F. 3d 787 (9th Cir. 2008). Much has happened since then. First, the Supreme Court granted a writ of certiorari to review whether the Ninth Circuit used the correct legal standard and whether it should have considered a letter written by a geneticist at the behest of defense counsel eleven years after Brown’s trial. Second, the Court received a slew of briefs, including one on defendant’s behalf from “20 Scholars of Forensic Evidence.” Third, after scheduling oral argument, the Court decided that it could dispose of the case on the briefs alone. Finally, on January 11, 2010, the Court issued its unanimous per curiam opinion (sub nom. McDaniel v. Brown).

The case arose from the brutal rape in 1994 of a nine-year-old girl in Nevada. A jury convicted Troy Brown on evidence that included a DNA profile (at VNTR loci) that had an estimated population frequency of 1 in 3 million. On redirect examination, however, the prosecutor induced the state’s DNA analyst, Renee Romero, to accept his mischaracterization of this number as the probability that someone unrelated to the defendant was the source of the rapist’s profile. In addition, Ms. Romero testified that the probability of a VNTR match to “the very next child” of the same parents would be only 1/6500, when the actual probability is more than 1/1024. (She did not mention other tests she had done that would have brought the probability closer to her figure. See “False, But Highly Persuasive”: How Wrong Were the Probability Estimates in McDaniel v. Brown?, 108 Mich. L. Rev. First Impressions 1 (2009).) Defense counsel neither objected to nor corrected her testimony even though the legal and scientific literature at the time made it indisputable that the prosecution was misconstruing the 1/3,000,000 figure and that the 1/6500 figure was miscomputed.
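Where does 1/1024 come from? Full siblings inherit the same pair of parental alleles with probability 1/4, so they match at a single locus with probability at least 1/4, and five loci give (1/4)^5 = 1/1024. A sketch of the arithmetic (the five-locus count is inferred here from the 1/1024 figure itself, not stated in the opinion):

    # Floor on the sibling-match probability, assuming five independent loci.
    per_locus_floor = 1 / 4   # probability siblings share both parental alleles
    loci = 5                  # inferred from (1/4)**5 == 1/1024
    print(per_locus_floor ** loci)  # 0.0009765625, i.e., 1/1024
    print(1 / 6500)                 # Romero's figure, about 0.000154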

After losing various appeals and state postconviction petitions, Brown argued for the first time in federal court that the probabilities were incorrectly computed or interpreted and that without the DNA evidence, no reasonable juror could have found him guilty beyond a reasonable doubt. He also argued that trial counsel’s representation of him was so poor as to amount to ineffective assistance of counsel. The district court agreed with both claims. The Ninth Circuit affirmed on the ground that without the “false and highly misleading” DNA evidence, there was insufficient evidence for the conviction and hence a violation of due process under Jackson v. Virginia, 443 U. S. 307 (1979). It did not reach the question of effective assistance of counsel.

The Supreme Court reasoned that the Jackson claim fails because Jackson merely holds that when the evidence against the defendant — whether or not properly admitted according to the rules of evidence or the constitution — is insufficient, then, as a matter of due process of law, the conviction cannot stand. In Brown, however, there was “no suggestion that the evidence adduced at trial was insufficient to convict unless some of it was excluded … thus dispos[ing] of [the] Jackson claim.”

As explained in The Double Helix, the more applicable due process claim is that the misstatements about probabilities rendered the conviction fundamentally unfair. The Court barely discussed this “DNA due process” claim, as Brown denominated it. Instead, it insisted that “[r]espondent has forfeited this claim, which he makes for the very first time in his brief on the merits in this Court.”

Although the Court thus avoided the colorable due process issue posed by the admission of the DNA evidence, it is hard to see how Brown could have prevailed even on that belated claim. Admitting the mischaracterized random-match probability may well have been plain error, but it did not rise to the level of a due process violation. See “False, But Highly Persuasive,” supra. The error regarding the probability of a random match to a sibling is sufficiently technical as not to amount to plain error, let alone constitutional error. Moreover, the prosecution could have produced a correctly computed sibling-match probability close to 1/6500. See id. Therefore, the trial court’s failure to exclude the statistics — to which defendant did not object — hardly seems like the type of error that rendered his trial fundamentally unfair.

In any event, having determined that Jackson was of no assistance to Brown and that Brown had “forfeited” the better due process claim, the Supreme Court remanded the case to the Ninth Circuit to consider whether Brown’s trial counsel had performed so dismally as to deprive him of due process of law. McDaniel v. Brown is thus a narrow, procedural holding regarding the scope of federal habeas corpus claims of insufficient evidence.

Even as to the procedural issue, however, the per curiam opinion raised the hackles of two Justices. The Scholars’ Brief importuned the Court to seize the opportunity to condemn misinterpretations of DNA evidence at trial. It argued that the DNA analyst’s testimony was badly flawed and assured the Court that the defense expert was entirely correct. (The author of the brief, Bill Thompson, is a University of California-Irvine colleague of the letter’s author, Larry Mueller; the two have been called part of the “combine from Irvine.”) Apparently, the brief did not persuade the Justices that Mueller’s computation of the sibling-match probability was correct. In the Court’s jaundiced eyes, the letter’s “claim that [the state’s expert] used faulty assumptions and underestimated the probability of a DNA match between brothers indicates that two experts do not agree with one another, not that [the state’s] estimates were unreliable.” Yet, as the one scientific authority cited in the Court’s opinion — a 1996 report of the National Academy of Sciences — indicates, Romero plainly transposed the random-match probability, and she miscomputed the sibling-match probability — even on her own assumptions! See “False, But Highly Persuasive,” supra. On these matters, the Scholars’ Brief was correct. Rather than acknowledge this fact, however, the Court treated Mueller’s two criticisms as only hypothetically true. Accepting them solely for the sake of argument, the Court observed that they did not justify exclusion of the DNA evidence in its entirety.

Given the Court’s interpretation of the Jackson claim, however, this discussion of the probabilities is superfluous. If Jackson means only that a conviction stands whenever the totality of the evidence — admissible or otherwise — is sufficient, why talk about whether the admissible evidence alone is sufficient? Was the Court giving some credence to the possibility that a modified Jackson claim would be tenable? That is, could due process require a federal habeas court to excise unfounded exaggerations and then to determine whether the reduced corpus of evidence could permit a reasonable juror to convict?

In a concurring opinion, Justices Thomas and Scalia took the Court to task for considering the implications of the Mueller letter on the admissibility of the DNA evidence. Unequivocally rejecting any possibility of a modified Jackson standard like the one that the Ninth Circuit entertained and then misapplied, these Justices

disagree[d] with the Court’s decision to complicate its analysis with an extensive discussion of the Mueller Report. … [T]he report’s attacks on the State’s DNA testimony were not part of the trial evidence and have no place in the Jackson inquiry. … [E]ven if the report had completely undermined the DNA evidence … the panel still would have erred in considering the report to resolve respondent’s Jackson claim. The reason, as the Court reaffirms, is that Jackson claims must be decided solely on the evidence adduced at trial.

The concurring Justices are correct in describing the per curiam opinion’s analysis of the Mueller letter as dicta. But the reason is not that the letter itself was “not part of the trial evidence.” It is that Brown raised the pseudo-Jackson claim rather than the straightforward due process claim about unfair exaggeration in the presentation of DNA evidence. Even if defense counsel had never presented Mueller’s letter to the trial court, he could have relied solely on sources subject to judicial notice to argue on appeal, in state postconviction proceedings, and then again in the federal habeas court proceedings that the trial judge’s failure to correct the prosecution’s mistakes sua sponte deprived him of due process. But Brown did not make this “DNA due process claim” in state court, and the prosecution’s indisputable errors are not relevant to the claim that he did make.

I shall address more fully the pretermitted claim that transposition of a probability can constitute a violation of due process in a later installment. Several courts of appeals considered such claims in the years before DNA testing, and their conclusions are instructive.

Taking Liberties with the Numbers

I am enhancing material that appeared on the Science and Law Blog in the Law Professors Blog Network. That blog, initiated by David Faigman, a colleague and former student, has been deactivated for want of sustained activity.  In April 2009, I remarked there that an article in the California Lawyer exemplified and perpetuated the confusion in the media about DNA database trawls. Credulous bloggers with PhDs seized on the story, which harks back to a controversial article in the Los Angeles Times. I’ll discuss both articles here.

I. Fuzzy Thinking About Math

In “Guilt by the Numbers: How Fuzzy Is the Math that Makes DNA Evidence Look So Compelling to Jurors?,” award-winning journalist Edward Humes discusses the unusual case of People v. Puckett (No. A121368, Cal. Ct. App., 1st Dist., May 1, 2008). John Puckett, now an elderly man, is appealing his recent conviction for the 1972 murder of Diane Sylvester, a San Francisco nurse. The conviction rests on a cold hit in California’s convicted-offender database at a small number of STR loci (genetic locations). Humes writes that in Puckett, “the prosecution’s expert estimated that the chances of a coincidental match between the defendant’s DNA and the biological evidence found at the crime scene were 1 in 1.1 million.” Id. at 22. Then he adds that “there’s another way to run the numbers,” which shows that “the odds of a coincidental match in Puckett’s case are a whopping 1 in 3.” Id. “Both calculations,” he maintains, “are accurate. The problem is that they answer different questions.” Id. The explanation, he believes, lies in “a classic statistical puzzle known as the ‘birthday problem.’” Id.

The author’s skill as a writer exceeds his insight as a mathematician. Surely the probability of “a coincidental match” cannot have such fantastically different “accurate” values. Moreover, the birthday problem has almost nothing to do with these numbers. The fuzziness is in the words of the article, not in the math. Only if we define “a coincidental match” can we begin to see what its probability would be and how unlike the birthday problem it is.

Definition 1. The probability of a coincidental match is the chance that Mr. Puckett is innocent and the match to him is just a coincidence.

The average reader might think that a coincidental match means that Mr. Puckett is innocent and the match to him is just a coincidence. If this is what it means, however, its probability is neither 1 in 1.1 million nor 1 in 3. The former figure is the probability that Puckett’s DNA would match if he were innocent, unrelated to the killer, and the only one whose DNA had been checked. The latter figure is the probability that at least one profile in the California database — not necessarily Puckett’s — would match if no one in the database were the killer. Notice that both probabilities are conditional — they depend on assumptions about who the real killer is or is not. They cannot readily be inverted or transposed into the probability of who the real killer is. Under Definition 1, therefore, neither number is an “accurate” statement of the probability of a coincidental match. Neither one expresses the chance that the match to Mr. Puckett is just a coincidence.

A technical note: This description of the probabilities of 1 in 1.1 million and 1 in 3 assumes, for simplicity, that it was the killer’s DNA that was found near the victim and later typed and that there was no possibility of error in the DNA typing, no ambiguity in the test results, and no selectivity in presenting them. Statisticians will immediately recognize that Bayes’ rule could be used to arrive at the posterior probability of Puckett’s innocence.
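For readers who want to see the mechanics, here is a minimal sketch of that Bayes’-rule calculation in Python. The priors are purely illustrative (nothing in the record supplies one), and the likelihoods follow the simplifying assumptions in the note above:

    # Posterior probability that Puckett is the source of the crime-scene DNA,
    # by Bayes' rule. Assumes a match is certain if he is the source and has
    # probability 1/1,100,000 if he is an unrelated non-source.
    def posterior_source_prob(prior, random_match_prob=1 / 1_100_000):
        numerator = prior * 1.0  # P(match | source) = 1
        return numerator / (numerator + (1 - prior) * random_match_prob)

    for prior in (0.5, 1e-3, 1e-6):
        p_source = posterior_source_prob(prior)
        print(prior, p_source, 1 - p_source)  # last value: P(not the source)
    # With a one-in-a-million prior, the posterior is only about 0.52, which
    # illustrates why neither 1 in 1.1 million nor 1 in 3 is, by itself, "the
    # probability of a coincidental match" under Definition 1.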

Definition 2. The probability of a coincidental match means the chance that Mr. Puckett’s DNA would match (and no other DNA in the database would) if he were not the killer and if he were unrelated to the killer.

This definition refers to the probability of the DNA evidence given the hypothesis of coincidence. Again, neither 1 in 1.1 million nor 1 in 3 expresses this value, but 1 in 1.1 million is a far closer estimate than is 1 in 3. The reason is that the DNA evidence includes not merely the datum that Puckett’s DNA matches, but the additional information that no one else’s does. If Puckett were the only one tested (a database of size 1) and if he were innocent, then the chance that he would match would be 1 in 1.1 million. Now we test an unrelated second person. The chance that this individual would match if he were innocent also is 1 in 1.1 million, and the chance that he would match if he were the killer is 1. The chance that Puckett matches and the other man does not is therefore either (1/1,100,000) x (1,099,999/1,100,000), which is just under 1 in 1.1 million (if both men are innocent), or (1/1,100,000) x 0 = 0 (if Puckett is innocent and the other man is the killer, for then the other man would certainly match). In other words, the probability that Puckett matches just by coincidence (he matches if he is innocent) in a search of a database of size 2 is, at most, 1 in 1.1 million. Searching the database and finding that only Puckett matches is better evidence than testing only Puckett. This reasoning is developed more fully, for a database of any size, in, e.g., David H. Kaye, Rounding Up the Usual Suspects: A Legal and Logical Analysis of DNA Database Trawls, 87 N.C. L. Rev. 425 (2009).
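Generalizing the size-2 example, a few lines of Python (the round numbers are the article’s; the large database size at the end is hypothetical) show that the chance that an innocent, unrelated Puckett is the only person in the database to match can only shrink as the database grows:

    # P(Puckett matches and the n - 1 other, unrelated, innocent profiles
    # do not), given that Puckett himself is innocent.
    p = 1 / 1_100_000  # random-match probability from the article

    def prob_only_puckett_matches(n):
        return p * (1 - p) ** (n - 1)

    print(prob_only_puckett_matches(2))        # the size-2 example: ~9.1e-07
    print(prob_only_puckett_matches(300_000))  # a hypothetical large database
    # Both values are below 1/1,100,000, so finding that *only* Puckett
    # matches is stronger evidence than testing Puckett alone.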

Definition 3. The probability of a coincidental match means the chance that one or more DNA profiles in the database would match if no one in the database is the killer.

This definition refers to the probability of one or more hits in the database given that the database is innocent. This probability is approximately 1 in 3. What it has to do with the probability that the DNA in the bedroom was Mr. Puckett’s is obscure. It is not even the expected rate at which searches of innocent databases would lead to prosecutions. After all, the 1 in 3 figure includes people who were not even born in 1972, when Puckett allegedly killed Diane Sylvester. If the probability that applies under Definition 3 were to be admitted, it should be adjusted so that it is not so misleadingly large. See id.; David H. Kaye, People v. Nelson: A Tale of Two Statistics, 7 L., Probability, & Risk 247 (2008).
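For the curious, the arithmetic behind a figure like 1 in 3 is straightforward. The sketch below pairs the article’s 1-in-1.1-million random-match probability with a database of 338,000 profiles (a size reported in press accounts of the Puckett search; treat it here as illustrative):

    # P(at least one coincidental hit in an innocent database of n
    # unrelated profiles), plus the cruder n*p approximation.
    p = 1 / 1_100_000
    n = 338_000  # illustrative database size
    print(1 - (1 - p) ** n)  # about 0.26
    print(n * p)             # about 0.31
    # Either way, the value lands in the neighborhood of the reported 1 in 3.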

The Birthday Problem

Also contrary to the claim in the California Lawyer, the birthday problem is not involved in Puckett. The birthday problem, in its simplest form, asks for the smallest number of people in a room such that the probability that at least two of them will have birthdays on the same day of the same month exceeds one-half. The answer (23) is surprisingly small because no particular birthday is specified. In the Puckett search, however, a particular DNA profile — the one from the crime scene — is specified. Finding that this particular profile matches at least one profile in the database is much less likely than finding at least one match among all pairs of profiles in the database. The latter event is the kind that is at issue in the birthday problem. See David H. Kaye, DNA Database Woes: What Is the FBI Afraid Of?, Cornell J. L. & Public Policy (2010, in press). It is not involved in a cold hit to a crime-scene profile.
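To see the contrast in numbers, a short computation (assuming 365 equally likely birthdays) recovers the classic answer of 23 and then shows how much less likely it is that anyone in the room matches one specified birthday:

    # Any-pair matches (the birthday problem) versus a hit to a *specified*
    # birthday (the analogue of a cold hit to a particular crime-scene profile).
    def p_any_shared_birthday(n):
        p_none = 1.0
        for k in range(n):
            p_none *= (365 - k) / 365
        return 1 - p_none

    n = 1
    while p_any_shared_birthday(n) <= 0.5:
        n += 1
    print(n)  # 23: any pair counts, so the number is surprisingly small

    def p_hit_to_specified_birthday(n):
        return 1 - (364 / 365) ** n

    print(p_hit_to_specified_birthday(23))  # about 0.06, far below one-half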

There are other errors in the California Lawyer article, but I hope I have said enough to caution readers to be wary. The media portrait of the database-trawl issue bears but a faint resemblance to the statistical literature on the subject.

II. The LA Times’s Gaffe

On March 4, 2008, the Los Angeles Times published “When a Match is Far from a Lock,” an account of the perceived need to adjust the probability for a random match when an individual emerges as a suspect because of a trawl through a database of DNA profiles. The reporters suggested that there was a grave injustice because “the prosecutor told the jury that the chance of such a coincidence was 1 in 1.1 million,” but “jurors were not told the statistic that leading scientists consider the most significant: the probability that the database search had hit upon an innocent person. In Puckett’s case, it was 1 in 3.” They added that “the case is emblematic of a national problem,” announcing that “The Times has found [that p]rosecutors and crime labs across the country routinely use numbers that exaggerate the significance of DNA matches in ‘cold hit’ cases, in which a suspect is identified through a database search.”

The Times received some flak for this breathless reporting. Not only do many leading statisticians dispute the claim that an adjustment for the size of the database searched produces the most significant statistic, but, it was said, the description of “1 in 3” as “the probability that the database had hit upon an innocent person” was wrong. The critical readers complained that, at best, 1/3 was the chance of a match to someone in the database if neither Puckett nor anyone else in the database were the source of the DNA in the bedroom of the murdered woman. It is not the chance that Puckett is not the source given that his DNA matches.

To equate the two probabilities is to slip into the transposition fallacy that P(A given B) = P(B given A). Conditional probabilities do not work this way. For instance, the chance that a card randomly drawn from a deck of ordinary playing cards is a picture card given that it is red is not the chance that it is red given that it is a picture card. The former probability is P(picture if red) = 6/26. The latter is P(red if picture) = 6/12.
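The card figures are easy to check by brute-force enumeration:

    # Verifying P(picture | red) = 6/26 and P(red | picture) = 6/12.
    ranks = ['A', '2', '3', '4', '5', '6', '7', '8', '9', '10', 'J', 'Q', 'K']
    suits = ['hearts', 'diamonds', 'spades', 'clubs']
    deck = [(rank, suit) for rank in ranks for suit in suits]

    red = [c for c in deck if c[1] in ('hearts', 'diamonds')]
    picture = [c for c in deck if c[0] in ('J', 'Q', 'K')]
    red_picture = [c for c in red if c[0] in ('J', 'Q', 'K')]

    print(len(red_picture), '/', len(red))      # 6 / 26
    print(len(red_picture), '/', len(picture))  # 6 / 12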

The reporters responded with the following defense:

In our story, we did not write that there was a 1 in 3 chance that Puckett was innocent, which would be a clear example of the prosecutor’s fallacy. Rather, we wrote: “Jurors were not told, however, the statistic that leading scientists consider the most significant: the probability that the database search had hit upon an innocent person. In Puckett’s case, it was 1 in 3.” The difference is subtle, but real.

Interestingly, when evidence professors on a listserv were asked whether there is any difference, two described the statement as ambiguous, while four saw it as a clear instance of transposition.

My view is that the following two statements are true:

1. IF THE DATABASE WERE INNOCENT (meaning that it does not contain the source of the crime-scene DNA and everyone in it is unrelated), then (prior to the trawl) the probability that SOMEONE (regardless of his or her name) would match is roughly 1/3.

2. IF THE DATABASE WERE INNOCENT, then (prior to the trawl) the probability that a man named Puckett would match is 1/1,100,000.

But neither (1) nor (2) is equivalent to

3. The probability that the database search hit upon an innocent person named Puckett was 1/3.

Yet, it seems that reporters Jason Felch and Maura Dolan told at least one juror who had convicted Puckett that he had done so even though the probability was as high as 1 in 3 that the cold hit was to an innocent person named Puckett. The juror responded predictably to this distressing news: “Of course it would have changed things. It would have changed a lot of things.” Perhaps someone should debrief the juror and tell him precisely what the 1/3 figure refers to.

References

Edward Humes, Guilt by the Numbers: How Fuzzy Is the Math that Makes DNA Evidence Look So Compelling to Jurors?, California Lawyer, Apr. 2009, at 21-24.

Peter Donnelly & Richard D. Friedman, DNA Database Searches and the Legal Consumption of Scientific Evidence, 97 Mich. L. Rev. 931 (1999).

Jane Kaye, Police Collection and Access to DNA Samples, 2 Genomics, Society and Policy 16 (2006).

David H. Kaye, People v. Nelson: A Tale of Two Statistics, 7 L., Probability, & Risk 247 (2008).

David H. Kaye, Rounding Up the Usual Suspects: A Legal and Logical Analysis of DNA Database Trawls, 87 N.C. L. Rev. 425 (2009).

The New Year’s Best and Worst Book Buys

Since January is the month in which The Double Helix and the Law of Evidence is scheduled for publication, I thought I would see where it is selling and publicize any bargains. The list price is $45, but it is already discounted — and inflated!

The cheapest price I saw is $36 at Barnes and Noble. Amazon.com is very competitive, with today’s price being $36.67. That’s the good news.

Here’s the rogues’ gallery:

  • Books-A-Million claims that the retail price is $49.50. Club members get 10% off this special price.
  • Powell’s Books sells it for more, though: $56.95.
  • Biggerbooks.com quotes “List Price $59.99, Our Price $57.32, You save $2.67.”
  • Borders lists the book as one of the “Top 5 Forensic Science Books.” How’s that for instant success? Surely, it justifies the amazing price of $115.95. But no, the price is in Australian dollars. The U.S. store does not even carry it. So much for the Top 5.