Mr. Ruskin’s blog post calls attention to the important problem of access to research data in litigation and other contexts.  The effort to obtain Dr. Racette’s underlying data is an interesting case study in these legal discovery battles.  Ruskin notes the potential for “injustice” from such discovery, but he fails to acknowledge that the National Research Council has for decades urged scientists to include a plan for data sharing in their study protocols, and that the National Institutes of Health now requires such planning.  Some journals require a commitment to data sharing as a condition of publication.  The Annals of Internal Medicine, which is probably the most rigorously edited internal medicine journal, requires authors to state, when their articles appear in print, to what extent they will share data.  Ultimately, litigants are entitled to “every man’s” and “every woman’s” evidence, regardless of whether the witnesses are scientists.

In the case of Dr. Racette, it was clear that much of the time he needed to respond to defense counsel’s subpoena resulted from his failure to comply with the NIH’s guidelines and best practices on data sharing.  Racette was represented by university counsel, who refused to negotiate over the subpoena and raised frivolous objections.  Ultimately, these costs were visited upon the defendants, who paid what seemed like rather exorbitant amounts for Racette and his colleagues to redact individual identifying information.  The MDL court suggested that Racette was operating independently of plaintiffs’ counsel, but the fact was that plaintiffs’ counsel recruited the study participants and brought them to the screenings, where Racette and colleagues videotaped them to make their assessments of Parkinsonism.  Much more could be said but for a protective order that was put in place by the MDL court.  What I can say is that after the defense obtained a good part of the underlying data, the Racette study was no longer actively used by plaintiffs’ counsel in the welding fume cases.

It is not only litigation that gives rise to the need for transparency and openness.  Regulation and public policy disputes similarly create a need for data access.  As Mr. Ruskin acknowledges, the case of Weitz & Luxenberg v. Georgia-Pacific LLC is very different, but at bottom it involves the same secrecy and the same false sense of entitlement to treat underlying data as privileged.  The Appellate Division’s invocation of the crime-fraud exception seems hyperbolic precisely because no attorney-client privilege attached in the first place.

The Georgia-Pacific effort was misguided on many levels, but we should at least rejoice that science won, and that G-P will be required to share underlying data with plaintiffs’ counsel.  Without reviewing the underlying data and documents, it is hard to say what the studies were designed to do, but saying that they were designed “to cast doubt” is uncharitable to G-P.  After all, G-P may well have found itself responding in court to some rather dodgy data, and thought it could sponsor stronger studies that were likely to refute the published papers.  And the published papers may themselves have been undertaken to “cast certainty” upon issues that were far less settled than the papers portrayed.


Litigation Funding

Posted on May 9, 2012 04:16 by Nathan A. Schachtman

An internet search on the phrase "litigation funding" returns thousands of hits.  There are an incredible number of companies and persons "out there" who will buy equity shares in a lawsuit.  Hedge funds are actively seeking opportunities to invest in lawsuits.

Putting aside concerns about champerty and maintenance, I wonder whether defense counsel are doing enough to address this issue at trial.  Assuming that these websites really are engaged in the practice they describe, shouldn’t defense counsel include questions about investments in lawsuits in their voir dire of the jury panel?

Obviously, if potential jurors owned stock in the defendant company, they would be disqualified.  Equity ownership in a chose in action surely is relevant to counsel’s evaluation of a prospective juror’s impartiality.  Even if the prospective juror is not invested in this particular lawsuit, the question is important.  If the investment is in the litigation generally, or in other plaintiffs’ cases within the same litigation, then a jury verdict in favor of the plaintiff would likely benefit the juror by increasing the settlement value of the other cases.  Even investments in unrelated personal injury litigation have the potential to prejudice the juror against the defense.  For instance, if the juror has invested in another personal injury lawsuit, returning a large verdict in the present case could benefit that investment by persuading the company defending against the juror’s chose in action that trying cases in the particular venue was too dangerous to risk, and that those claims should be settled.

Certainly, the existence and extent of third-party investment in a lawsuit is a worthy line of discovery in mass tort litigation.

Are we doing enough to stop this insanity?

This article was first posted at the author's website on May 8th, 2012: http://schachtmanlaw.com/litigation-funding/


In “RULE OF EVIDENCE 703 — Problem Child of Article VII” (Sept. 19, 2011), I wrote about how Federal Rule of Evidence 703 is generally ignored and misunderstood in current federal practice.  The Supreme Court, in deciding Daubert, shifted the focus to Rule 702 as the primary tool for admitting, as well as limiting and excluding, expert witness opinion testimony.  The Court’s decision, however, did not erase the need for an additional, independent rule to control the quality of the inadmissible materials upon which expert witnesses rely.  Indeed, Rule 702, as amended in 2000, incorporated much of the learning of the Daubert decision, and then some, but it does not address the starting place of any scientific opinion:  the data, the analyses (usually statistical) of data, and the reasonableness of relying upon those data and analyses.  Instead, Rule 702 asks whether the proffered testimony is:

1. based upon sufficient facts or data,
2. the product of reliable principles and methods, and
3. the product of a reliable application of those principles and methods to the facts of the case.

Noticeably absent from Rule 702, in its current form, is any directive to determine whether the proffered expert witness opinion is based upon facts or data of the sort upon which experts in the pertinent field would reasonably rely.  Furthermore, Daubert did not address the fulsome importation and disclosure of untrustworthy hearsay opinions through Rule 703.  See Problem Child (discussing the courts’ failure to appreciate the structure of peer-reviewed articles, and the need to ignore the discussion and introduction sections of such articles as often containing speculative opinions and comments).  See also Luciana B. Sollaci & Mauricio G. Pereira, “The introduction, methods, results, and discussion (IMRAD) structure: a fifty-year survey,” 92 J. Med. Libr. Ass’n 364 (2004); Montori, et al., “Users’ guide to detecting misleading claims in clinical research reports,” 329 Br. Med. J. 1093, 1093 (2004) (advising readers on how to avoid being misled by published literature, and counseling them to “Read only the Methods and Results sections; bypass the Discussion section.”) (emphasis added).

Given this background, it is disappointing but not surprising that the new Reference Manual on Scientific Evidence severely slights Rule 703.  Either a word search of the PDF version or the index at the end of the book tells the story:  there are five references to Rule 703 in the entire RMSE!  The statistics chapter has an appropriate but fleeting reference:

“Or the study might rest on data of the type not reasonably relied on by statisticians or substantive experts and hence run afoul of Federal Rule of Evidence 703. Often, however, the battle over statistical evidence concerns weight or sufficiency rather than admissibility.”

RMSE 3d at 214. At least this chapter acknowledges, however briefly, the potential problem that Rule 703 poses for expert witnesses.  The chapter on survey research similarly discusses how the data collected in a survey may “run afoul” of Rule 703.  RMSE 3d at 361, 363-364.

The chapter on epidemiology takes a different approach by interpreting Rule 703 as a rule of admissibility of evidence:

“An epidemiologic study that is sufficiently rigorous to justify a conclusion that it is scientifically valid should be admissible,[184] as it tends to make an issue in dispute more or less likely.[185]”

Id. at 610.  This view is mistaken.  Sufficient rigor in an epidemiologic study is certainly needed for reliance by an expert witness, but such rigor does not make the study itself admissible; the rigor simply permits the expert witness to rely upon a study that is typically several layers of inadmissible hearsay.  See “Reference Manual on Scientific Evidence v3.0 – Disregarding Study Validity in Favor of the ‘Whole Gamish’” (Oct. 14, 2011) (discussing the argument put forward by the epidemiology chapter for treating Rule 703 as an exception to the rule against hearsay).

While the treatment of Rule 703 in the epidemiology chapter is troubling, the introductory chapter on the admissibility of expert witness opinion testimony, by the late Professor Margaret Berger, really sets the tone and approach for the entire volume.  See Berger, “The Admissibility of Expert Testimony,” RMSE 3d 11 (2011).  Professor Berger never mentions Rule 703 at all!  Gone and forgotten.  The omission is not, however, an oversight.  Rule 703, with its requirement that each study relied upon qualify as “reasonably relied upon,” as measured by the practices of experts in the appropriate discipline, is the refutation of Berger’s argument that a pile of weak, flawed studies, taken together, can somehow yield a scientifically reliable conclusion.  See “Whole Gamish” (Oct. 14, 2011).

Rule 703 is not merely an invitation to trial judges; it is a requirement to look at the discrete studies relied upon to determine whether the building blocks are sound.  Only then can the methods and procedures of science begin to analyze the entire evidentiary display to yield reliable scientific opinions and conclusions.


The author, Nathan A. Schachtman, is in private practice in New York City, and is a lecturer-in-law at the Columbia Law School.  He keeps a web log of musings on tort and evidence law at his website: schachtmanlaw.com


(originally posted at Tortini <http://schachtmanlaw.com/reference-manual-on-scientific-evidence-v3-0-disregarding-study-validity-in-favor-of-the-whole-gamish/> on October 14, 2011.)

There is much to digest in the new Reference Manual on Scientific Evidence, third edition (RMSE 3d).  Much of what is covered is solid information on the individual scientific and technical disciplines covered.  Although the information is easily available from other sources, there is some value in collecting the material in a single volume for the convenience of judges.  Of course, given that this information is provided to judges from an ostensibly neutral, credible source, lawyers will naturally focus on what is doubtful or controversial in the RMSE.

I have already noted, however, some preliminary concerns with comments in the Preface by Judge Kessler and Dr. Kassirer.  See “New Reference Manual’s Uneven Treatment of Conflicts of Interest.”  In addition, there is a good deal of overlap among the chapters on statistics, epidemiology, and medical testimony.  This overlap is at first blush troubling because the RMSE has the potential to confuse and obscure issues by having multiple authors address them inconsistently.  This is an area where reviewers should pay close attention.

From a first look at the RMSE 3d, there is a good deal of equivocation between encouraging judges to look at scientific validity and discouraging them from any meaningful analysis by emphasizing inaccurate proxies for validity, such as conflicts of interest.  (As I have pointed out, the new RMSE did not do quite so well in addressing its own conflicts of interest.  See “Toxicology for Judges – The New Reference Manual on Scientific Evidence (2011).”)  The strengths of the chapter on statistical evidence, updated from the second edition, remain, as do some of the strengths and flaws of the chapter on epidemiology.  I hope to write more about each of these important chapters at a later date.

The late Professor Margaret Berger has an updated version of her chapter from the second edition, “The Admissibility of Expert Testimony,” RMSE 3d 11 (2011).  Berger’s chapter has a section criticizing “atomization,” a process she describes pejoratively as a “slicing-and-dicing” approach.  Id. at 19.  Drawing on the publications of Daubert-critic Susan Haack, Berger rejects the notion that courts should examine the reliability of each study independently.  Id. at 20 & n.51 (citing Susan Haack, “An Epistemologist in the Bramble-Bush: At the Supreme Court with Mr. Joiner,” 26 J. Health Pol. Pol’y & L. 217–37 (1999)).  Berger contends that the “proper” scientific method, as evidenced by the work of the International Agency for Research on Cancer, the Institute of Medicine, the National Institutes of Health, the National Research Council, and the National Institute of Environmental Health Sciences, “is to consider all the relevant available scientific evidence, taken as a whole, to determine which conclusion or hypothesis regarding a causal claim is best supported by the body of evidence.”  Id. at 19-20 & n.52.  This contention, however, is profoundly misleading.  Of course, scientists undertaking a systematic review should identify all the relevant studies, but some of the “relevant” studies may well be insufficiently reliable (because of internal or external validity issues) to answer the research question at hand.  All the cited agencies, and other research organizations and researchers, exclude studies that are fundamentally flawed, whether as a result of bias, confounding, erroneous data analyses, or related problems.  Berger cites no support for the remarkable suggestion that scientists do not make “reliability” judgments about available studies when assessing the “totality of the evidence.”

Professor Berger, who had a distinguished career as a law professor and evidence scholar, died in November 2010.  She was no friend of Daubert, but remarkably her antipathy has outlived her.  Her critical discussion of “atomization” cites the notorious decision in Milward v. Acuity Specialty Products Group, Inc., 639 F.3d 11, 26 (1st Cir. 2011), which was decided four months after her passing. Id. at 20 n.51. (The editors note that the published chapter was Berger’s last revision, with “a few edits to respond to suggestions by reviewers.”)

Professor Berger’s contention about the need to avoid assessments of individual studies in favor of the whole gamish must also be rejected because Federal Rule of Evidence 703 requires that each study considered by an expert witness “qualify” for reasonable reliance by virtue of the study’s containing facts or data “of a type reasonably relied upon by experts in the particular field in forming opinions or inferences upon the subject.”  One of the deeply troubling aspects of the Milward decision is that it reversed the trial court’s sensible decision to exclude a toxicologist, Dr. Martyn Smith, who outran his headlights on issues in a field in which he was clearly inexperienced – epidemiology.

Scientific studies, and especially epidemiologic studies, involve multiple levels of hearsay.  A typical epidemiologic study may contain hearsay leaps from patient to clinician, to laboratory technicians, to specialists interpreting test results, back to the clinician for a diagnosis, to a nosologist for disease coding, to a national or hospital database, to a researcher querying the database, to a statistician analyzing the data, to a manuscript that details data, analyses, and results, to editors and peer reviewers, back to study authors, and on to publication.  Those leaps do not mean that the final results are untrustworthy, only that the study itself is not likely admissible in evidence.

The inadmissibility of scientific studies is not problematic because Rule 703 permits testifying expert witnesses to formulate opinions based upon facts and data that need not themselves be admissible in evidence.  The distinction between relied-upon studies and admissible studies is codified in the Federal Rules of Evidence, and in virtually every state’s evidence law.

Referring to studies, without qualification, as admissible in themselves is wrong as a matter of evidence law.  The error has the potential to encourage carelessness in gatekeeping expert witnesses’ opinions for their reliance upon inadmissible studies.  The error is doubly wrong if this approach to expert witness gatekeeping is taken as license to permit expert witnesses to rely upon any marginally relevant study of their choosing.  It is therefore disconcerting that the new Reference Manual on Scientific Evidence (RMSE 3d) fails to draw the appropriate distinction between the admissibility of studies and the admissibility of expert witness opinion that has reasonably relied upon appropriate studies.

Consider the following statement from the chapter on epidemiology:

“An epidemiologic study that is sufficiently rigorous to justify a conclusion that it is scientifically valid should be admissible,[184] as it tends to make an issue in dispute more or less likely.[185]”

RMSE 3d at 610.  Curiously, the authors of this chapter have ignored Professor Berger’s caution against slicing and dicing, and speak to a single study’s ability to justify a conclusion. The authors of the epidemiology chapter seem to be stressing that scientifically valid studies should be admissible.  The footnote emphasizes the point:

“See DeLuca v. Merrell Dow Pharms., Inc., 911 F.2d 941, 958 (3d Cir. 1990); cf. Kehm v. Procter & Gamble Co., 580 F. Supp. 890, 902 (N.D. Iowa 1982) (“These [epidemiologic] studies were highly probative on the issue of causation—they all concluded that an association between tampon use and menstrually related TSS [toxic shock syndrome] cases exists.”), aff’d, 724 F.2d 613 (8th Cir. 1984). Hearsay concerns may limit the independent admissibility of the study, but the study could be relied on by an expert in forming an opinion and may be admissible pursuant to Fed. R. Evid. 703 as part of the underlying facts or data relied on by the expert. In Ellis v. International Playtex, Inc., 745 F.2d 292, 303 (4th Cir. 1984), the court concluded that certain epidemiologic studies were admissible despite criticism of the methodology used in the studies. The court held that the claims of bias went to the studies’ weight rather than their admissibility. Cf. Christophersen v. Allied-Signal Corp., 939 F.2d 1106, 1109 (5th Cir. 1991) (“As a general rule, questions relating to the bases and sources of an expert’s opinion affect the weight to be assigned that opinion rather than its admissibility. . . .”).”

RMSE 3d at 610 n.184 (emphasis in bold, added).  This statement, that studies relied upon by an expert in forming an opinion may be admissible pursuant to Rule 703, is unsupported by Rule 703 and the overwhelming weight of case law interpreting and applying the rule.  (Interestingly, the authors of this chapter seem to abandon their suggestion that studies relied upon “might qualify for the learned treatise exception to the hearsay rule, Fed. R. Evid. 803(18), or possibly the catchall exceptions, Fed. R. Evid. 803(24) & 804(5),” which was part of their argument in the Second Edition of the RMSE.  RMSE 2d at 335 (2000).)  See also RMSE 3d at 214 (discussing statistical studies as generally “admissible,” but acknowledging that admissibility may be no more than permission to explain the basis for an expert’s opinion).

The cases cited by the epidemiology chapter, Kehm and Ellis, both involved “factual findings” in public investigative or evaluative reports, which were independently admissible under Federal Rule of Evidence 803(8)(C).  See Ellis, 745 F.2d at 299-303; Kehm, 724 F.2d at 617-18.  As such, the cases hardly support the chapter’s suggestion that Rule 703 is a rule of admissibility for epidemiologic studies.

Here the RMSE, in one sentence, confuses Rule 703 with an exception to the rule against hearsay, which would prevent the statistical studies from being received in evidence.  The point is reasonably clear, however, that the studies “may be offered” to explain an expert witness’s opinion.  Under Rule 705, that offer may also be refused. The offer, however, is to “explain,” not to have the studies admitted in evidence.

The RMSE is certainly not alone in advancing this notion that studies are themselves admissible.  Other well-respected evidence scholars lapse into this position:

“Well conducted studies are uniformly admitted.”

David L. Faigman, et al., Modern Scientific Evidence:  The Law and Science of Expert Testimony v.1, § 23:1, at 206 (2009).

Evidence scholars should not conflate the admissibility of epidemiologic (or other) studies with the ability of an expert witness to advert to a study to explain his or her opinion.  The testifying expert witness really has no need to become a conduit for off-hand comments and opinions in the introduction or discussion sections of relied-upon articles, and the wholesale admission of such hearsay opinions undermines the court’s control over opinion evidence.  Rule 703 authorizes reasonable reliance upon “facts or data,” not every opinion that creeps into the published literature.

Nathan Schachtman is in private practice in New York City, and is a lecturer-in-law at the Columbia University Law School.
