Cited: 66 Law & Contemp. Probs. 63
[*pg 63]




   A. The Problems Science Presents to Environmental Regulators
   B. Enter the Good-Science Reforms

   A. In Search of Bad Agency Science
   B. Institutional Checks and Balances
   C. Generous Mandates and Loads of Discretion

   A. The Agency Science Problems Might Be Made Worse Rather than Better
   B. Harm to Administrative Processes
   C. Summary

   A. The Rebuttal Problem
   B. Institutional Adjustments to Counteract the Rebuttal Problem
   C. Reform





Imagine a world in which the proponent of a legal reform bears the burden of showing that a problem exists. Ideally, the reformer would also be expected to produce some evidence that the reform would actually address the identified problem and would not, on balance, make matters worse. In that world, there would be no Shelby Amendment1 and no Data Quality Act,2 and professors would not meet to discuss the merits of "Daubertizing" agency science.3 None of these three "good science" reforms is supported by meaningful evidence that the purported problem -- "bad science," or more precisely, science that is methodologically unsound -- occurs with any regularity in administrative decisionmaking. The proponents of these reforms are even more hard-pressed to establish that their reforms will not make the science-related problems in public health and environmental regulation worse rather than better. Indeed, existing evidence suggests that worsening is the more likely outcome.

In this imaginary world in which legal reformers must satisfy minimal, logical prerequisites, this suite of good-science reforms -- much less the administrative resources needed to implement them -- cannot be justified. While the science underlying existing regulations is hardly perfect, it is far better than it will be once the reform proposals are implemented.

This Article argues that the good-science reforms miss the mark and have the potential to cause significant damage to already crippled administrative processes. Part II presents background information relating to the sources of dissatisfaction with regulatory science and how the three most popular reforms [*pg 64] purport to address these concerns. Part III details the absence of any compelling justification for the reforms, and Part IV discusses why they will be detrimental to agency processes and to the cause of good science. The final part considers the "rebuttal problem" -- another way to frame the problem underlying the consternation with the way science is incorporated into regulatory policy. This reframing captures the real dissatisfaction with agency rulemaking, which lies not with the science, but with the absence of clear policy direction as to when a body of science sufficiently rebuts a protective assumption and permits a less stringent standard. Addressing this policy gap will accomplish far more than any of the good-science proposals.



Science teases policymakers with the prospect of providing definitive guidance for regulatory decisionmaking. But in reality, the information that most scientific research provides to health and environmental regulation is incomplete and inconclusive, both in identifying and in quantifying the risks that these hazards pose. This Part considers the ways in which science disappoints and complicates regulatory decisionmaking and identifies the ways in which the good-science reforms hope to overcome these science-based complications.

A. The Problems Science Presents to Environmental Regulators

Since science is a necessary but not sufficient ingredient for environmental decisionmaking, it is important to locate "good science" concerns on the larger map of regulatory science problems. "Science" is, according to the Supreme Court, knowledge that can be supported by a "scientifically valid . . . reasoning or methodology."4 This generally requires a hypothesis capable of being tested in a replicable way or the use of methods that scientists have generally accepted as valid.5 Extrapolation from one or more validated scientific studies to a larger policy question -- for example, the appropriate standard for a particular pollutant -- is typically not science, since such "weight of the evidence" judgments cannot be supported by a testable method. Instead, these larger risk assessments combine scientific knowledge (for example, toxicity tests on animals) with science-policy judgments (for example, dose-response assumptions).6 [*pg 65] Accordingly, most decisions in public health and environmental regulation break down into a series of sub-decisions that alternate or zigzag between science and science-policy.

Figure 1: The zigzag of science and science-policy in public health and environmental regulation

Consider, for example, a common issue addressed by regulatory agencies: the appropriate protective standard for a particular pollutant in a specific medium, such as drinking water. The first, overarching question an agency must address when faced with this decision is the amount of human risk the agency (or, often, Congress) is willing to tolerate, a question that is safely outside the realm of pure science. After deciding on a protective goal of, for example, tolerating only one cancer out of every one million persons exposed, the agency's analysis breaks down into a series of discrete questions. Some of these questions can be answered by science alone. Some, however, cannot, and the most that science can do is to point to several of the most plausible options.

Science can, for example, identify the level of high-dose exposure that produces tumors in fifty percent of mice exposed to the pollutant (Q2), but it leaves unanswered how regulators should extrapolate from this type of study to humans. In the absence of definitive human testing, science cannot answer, for example, whether all tumors or only those that are malignant should be counted (Q1), or how to extrapolate from high to low doses (Q3) or from animal to man (Q4).7

Based on this conceptualization of regulatory decisions as involving a zigzag between science and science-policy questions, we can expect at least three distinct [*pg 66] problems to arise in the incorporation of science into regulation. First, the scientific studies on the top half of the zigzag might be of poor quality ("bad science"), or the agency might not include the best science in its analyses.8 "Bad science" can result from a variety of imperfections in research, from the blatant falsification of data on one end, to imperceptible researcher bias or incompetence on the other.9 Problems with the quality of science underlying regulations arise if an agency weights these low-quality studies too heavily or ignores or gives insufficient credence to high-quality research.10

The second regulatory science problem, transparency, arises from an agency's failure to explicitly identify the separate roles scientific research and value choices play in reaching a final regulatory decision -- that is, failure to identify the specific contours of the science-policy zigzag. The administrative system, which includes judicial review, is grounded in a commitment to provide the public, interest groups, congressional officials, and the President and his staff with an accessible and understandable explanation for regulatory decisions.11 Since the zigzag nature of science and science policy makes it easy to blur the respective roles of science and policy in regulatory decisionmaking, these political checks and balances can be lost or at least impeded by the complex interweaving of technical and value decisions.12 Agencies, as technical experts, can even hide controversial policy judgments by failing to delineate the aspects of their decisions determined by science and those determined by value choices.13

[*pg 67]

The third regulatory science problem relates to the production of scientific research, which plays a vital role in anchoring regulation by reducing uncertainties. Since science provides information that is vital to environmental and public health regulations, the regulatory system should not only make the most of available science, but should encourage its production. In fact, however, the production of relevant scientific information during the lifetime of environmental and public health regulation has been unimpressive.14 Agencies have not established coherent research priorities.15 At the same time, an agency's regulatory presence discourages the private production of this same scientific information because, from an industry's perspective, producing research on adverse health or environmental effects is as likely to lead to greater regulation as it is to lead to less regulation.16 Consequently, while there are limits to how scientific information can be used to inform environmental policymaking, even the limited contributions science can make to assessing health and environmental risks are often unavailable.17

B. Enter the Good-Science Reforms

The good-science reforms generally purport to target only the first regulatory science problem described above: the quality of the scientific research underlying regulation. Each of the three reforms (the Shelby Amendment, the Data Quality Act, and regulatory Daubert) attempts in different ways to expand peer review and oversight processes to improve the checks on what is feared to be regulatory research of poor quality.18

[*pg 68]

The first good-science reform, the Data Access Amendment (also known as the Shelby Amendment), provides regulatory participants with access to the data underlying studies produced with federal money.19 Although passed as a rider with little legislative history or debate,20 this reform appears to be intended to enhance the quality of regulatory science by providing regulatory watchdogs with data they can use to validate the conclusions of researchers.21 The proposed data access guidelines generated a great deal of controversy within the scientific community, and over 12,000 comments were submitted.22 In response to these comments, the Office of Management and Budget ("OMB") provided additional protections for researchers in the final 1999 Guidelines, including limiting access only to the data underlying "published" studies, providing researchers with a "reasonable amount of time" to assemble the requested data, and providing mechanisms for agencies or researchers to recover reasonable costs associated with responding to information requests.23

The second reform, the Data Quality Act, was similarly passed as an appropriations rider without hearings or public debate.24 The Data Quality Act [*pg 69] ("DQA") requires agencies to establish a process by which parties can lodge petitions for the correction of information, including scientific studies, disseminated by the agency.25 This petition process places interested parties in the role of peer reviewer. They can allege, through a formal process, that a study should be excluded from regulatory decisionmaking because it is too unreliable to be useful, an allegation taken more seriously if the study plays an "influential" role in a policy decision.26 Disgruntled complainants whose requests for correction are denied can file an appeal with the agency.27 It is also expected that complaints and requests for correction will carry some weight if an agency action is challenged in court and that they might even be appealable in and of themselves under limited circumstances.28

[*pg 70]

The third reform, regulatory Daubert, which is still in the proposal stage, appears to expand peer review beyond that provided by the Data Access Amendment and the Data Quality Act by employing the courts, rather than agencies, to preside over disputes concerning the quality of regulatory science.29 The regulatory-Daubert proposal appears to entrust appellate judges with the gatekeeping responsibility of determining de novo whether a study is sufficiently reliable to be included within a larger regulatory model or decisionmaking context.30

A final set of recent developments is more ad hoc, but these developments nevertheless deserve mention because they also arose as apparent reactions to concerns that "bad science" was being used by the Environmental Protection Agency [*pg 71] ("EPA"). Both Chlorine Chemistry Council v. EPA,31 in which the D.C. Circuit remanded EPA's chlorine standard for drinking water, and the Bush Administration's temporary suspension of the arsenic standard for drinking water32 suggest shifts towards tolerating greater risks than EPA is willing to accept. But they are also framed as agency failures to use "good science."33 While these reforms are too ad hoc to provide much material for the analysis presented in Parts III and IV below, they will form an important part of Part V's discussion of the real problem with the role of science in regulatory policy.



Intelligent efforts to reform regulation anticipate how those efforts might be distorted by agency over-reaction, judicial overreaching, or imbalances in the resources of various interest groups.34 Complex administrative forces can transform even simple and sensible reform proposals into hopeless bureaucratic processes that lead to results quite different from what is expected.35

Before tinkering with agency processes, then, it is imperative that we do our homework. First, we must describe with some precision the problem that is being addressed. Taking this simple step will ensure that there are real benefits to be gained that outweigh the risks associated with disturbing the status quo. [*pg 72] Second, we must anticipate how reform proposals will fare once introduced into the complex regulatory system.

Despite the potential intrusiveness of the good-science proposals, however, little of this homework has been done. There have been only a few general references and anecdotal accounts of agencies using or producing bad science in the area of environmental and public health regulation. Moreover, these cumulative accounts provide, at best, only sparse evidence of a problem and offer no evidence that the problem is a significant one.36 Still more troubling, virtually no attention has been given to tracing these proposals through the agencies to determine the types of unintended administrative reactions they are likely to produce.

The next two sections consider these questions, even though the recent implementation of two of the three reforms under discussion might make such an ex post analysis academic.

A. In Search of Bad Agency Science

After more than thirty years of vigorous public health and safety regulation, it seems almost inevitable that an agency would have relied upon a scientific study that ultimately proved unreliable. Yet, despite the thousands of public health and safety regulations promulgated annually,37 there are surprisingly few examples of EPA using unreliable science or using science inappropriately to support a final regulation.38 This assessment might be far different if one considered [*pg 73] the larger universe of regulatory decisions involving the grant of permits and licenses, since these decisions rest in large part on unvalidated industry science. Indeed, to the extent that a "bad science" problem has been identified in regulatory decisionmaking, it concerns the quality of science produced by regulated parties for purposes of regulation.39 Once this private science is excluded, the examples of regulatory bad science are winnowed down to a few, virtually all of which are contested.40

Commissioned expert reports provide the most credible resource for assessing the quality of EPA's science. These reports identify very few problems with the quality of the agency's science, and the problems that are identified are relatively minor. In the first major report on the topic, Safeguarding the Future, commissioned by EPA in 1992, the expert panel found that EPA was perceived to fit science to policy, but the experts were notably silent as to whether this perception was accurate.41 Most of the findings and recommendations contained in the report urged EPA to do more to validate its analytical models and assumptions, strengthen its peer-review system, and increase the number of capable scientists employed at the agency, although EPA's shortcomings in these areas did not appear significant.42 Although the panel did find that scientific advice should be integrated earlier and more often into EPA decisionmaking, none of the eleven major findings of the study suggested that the [*pg 74] agency used bad science in its decisionmaking.43 Eight years later, the National Academy of Sciences ("NAS") released a report entitled Strengthening Science at the U.S. Environmental Protection Agency, which offered even more tepid criticisms of the quality of EPA's science.44 Much of the NAS report offered recommendations for how EPA could improve scientific leadership at the agency, enhance the production of information, and anticipate future environmental needs through scientific research.45 Only one section was dedicated to the quality of the science actually being relied upon, and in this section the authors complimented EPA for improvements to its peer-review system, but maintained that still more independence was needed between project managers and the peer-review process.46 One certainly cannot conclude from these reports that EPA produces or relies upon bad science to support its decisions.47

Congressional hearings and NAS workshops on the Data Quality Act also fail to provide examples of poor quality science emerging from the agency.48 The attachments in one congressional hearing, in fact, provide a veritable library of studies conducted on the quality and rigor of EPA's science practices.49 None of the several dozen reports in this two-volume, 2,400-page set [*pg 75] provide evidence of EPA using biased or bad science. To the extent that problems were found with EPA's science, they again arose in the management of scientific research, in EPA's strategic plans for that research (including the implementation of those plans), and in peer-review practices.50

A small yet robust literature by academics and policy analysts similarly suggests that there is little evidence of bad science used to support regulatory decisionmaking. In fact, some of these analysts conclude that agencies do a relatively competent job of assessing the methodological rigor of the science they incorporate into regulatory decisionmaking. Ted Greenwood, Sheila Jasanoff, Mark Powell,51 and Bruce Smith each conclude in four separate, book-length examinations of the science at EPA, the Occupational Safety and Health Administration ("OSHA"), and the Food and Drug Administration ("FDA") that these agencies are relatively adept at locating relevant science and evaluating the quality of the various scientific inputs in the course of their decisionmaking.52 While all four authors observed that the regulatory programs under [*pg 76] study were not free of problems, they did not identify the agencies' scientific competency as being among them.53

Most curious in a search for the bad-science problem is the failure of the reformers themselves to document a problem. The literature supporting the Data Access Amendment, both before and after passage, provides only a few, limited examples of problems with regulatory science or access to data.54 Discussions of the problems the Data Quality Act is intended to solve are sparser still: only a few secondary articles, all published after passage of the Act, provide examples of agencies relying on or disseminating poor-quality science.55 Regulatory-Daubert proponents similarly fail to produce evidence of the "bad science" problem. For example, in their thoughtful analysis in this Symposium, Alan Raul and Julie Zampa Dwyer identify the lack of transparency of agency policy judgments as the primary problem with science-based regulation. Concerns about the scientific quality of studies and analyses used in regulation, by contrast, are only weakly supported and seem secondary in their analysis.56

[*pg 77] Other parties committed to reforming the bad-science problem similarly take its existence for granted. One of the main proponents of both the Data Quality Act and the Data Access Amendment, the Center for Regulatory Effectiveness, provides virtually no evidence to support its assumption that a bad-science problem exists, although the Center does take issue with agency policy choices in extrapolating from science to regulatory policy.57 In its oversight of the rigor and cost-effectiveness of agency rulemakings, OMB has neglected to provide examples of agencies using bad science, although presumably the systematic collection of examples of this problem will be one of the benefits of the Data Quality Act.58 The only apparent effort to document an underlying problem with the quality of agency science comes from a series of sharp critiques posted on a "junk science" Internet site.59 Again, however, virtually all of the essays take issue with the agencies' protective policies (in the bottom half of the zigzag mapped at Figure 1), rather than with the quality of the research (in the upper half of the zigzag).60

[*pg 78]

[*pg 79]

Even the poster-child for the good-science reforms -- the "Six Cities" study conducted by Harvard's School of Public Health -- undermines, rather than supports, the need for comprehensive reform of the quality of agency science.61 The Six Cities study was very controversial among the regulated community because its findings regarding the risks posed by fine particulates were influential in EPA's decision to lower the particulate standard under the Clean Air Act.62 Industry requested the original data supporting the study, but the Harvard researchers refused because they were concerned that even the redacted data could be used to identify original study participants who were promised confidentiality.63 After considerable controversy, EPA hired the Health Effects Institute ("HEI") to perform a re-analysis of the study while keeping the data confidential. "The HEI found the original data to be of high quality, and essentially confirmed the validity of the original findings and conclusions."64

B. Institutional Checks and Balances

Of course, one need not insist upon tangible evidence of bad regulatory outcomes to be convinced that there is a problem with the quality of the science underlying regulation. Agencies might have numerous reasons to rely on weak or valueless studies to support regulation. For example, either low-level staff or micro-managing, high-level administrators with political objectives might have both the incentive and opportunity to commission or combine studies that lead to a predetermined result. Malaise and inattention might also cause agency staff to include in their analyses studies that are not sufficiently scrutinized. The sheer size of administrative records supporting many environmental and public health standards lends credence to this possibility.65

As discussed in detail below, however, it appears that even if these bureaucratic forces prevail at the expense of good science in some rulemakings, multiple internal and external checks on agency science provide reason for optimism that this abuse is not pervasive. These checks help correct deliberate and inadvertent errors and deter agencies from relying on science that is methodologically unsound. Such checks might explain why there are not more specific accounts of agencies abusing their technical powers by relying on bad science.

Science advisory boards and related expert panels provide the most obvious check on the unbridled freedom of agencies to promulgate regulations based on [*pg 80] bad science.66 Professor Sheila Jasanoff and Dr. Bruce Smith both provide a very detailed account of the involvement of scientific advisory boards in the agencies' use of science and conclude that this involvement, at least as of fifteen years ago, generally reinforces the agencies' scientific competency.67 The expert panel commissioned by EPA for the Safeguarding the Future report noticed the same positive effect of the Clean Air Scientific Advisory Committee ("CASAC"), created to review EPA's national ambient air quality standards [*pg 81] ("NAAQS").68 Science advisory boards, however, require time and resources,69 can intrude on value decisions that are properly reserved for Congress and the agencies (because of the blurred boundary between science and policy),70 and are not always representative of a cross-section of the scientific community.71 As a result, science advisory boards cannot be involved in every science-related agency action, although figures from the early 1990s suggest that EPA's Science Advisory Board conducted more than seventy reviews (of varying size) per year.72

Courts also provide valuable oversight of the quality of agency science through their review of rulemakings and by ensuring that interested parties have an opportunity to comment on proposed regulations.73 Although the [*pg 82] approach taken by the courts in resolving disputes over agency science is varied, the threat of a challenge and possible invalidation of a rule is ever-present.74 This threat presumably deters agencies from committing significant scientific errors.75 Indeed, it is plausible that agencies plan around a worst-case scenario for judicial review, rather than some "mean" level of judicial scrutiny.

Periods of embarrassing congressional oversight are also capable of encouraging agencies to get the science right the first time. These additional checks are more sporadic than science advisory boards and judicial review, but might nevertheless be important.76 Congress' frequent use of the NAS to review the work of agencies, however, likely provides the most valuable congressionally imposed check on the quality of agency science.77

A fourth check against bad regulatory science is the hodge-podge of internal review mechanisms agencies have imposed on themselves.78 Prior to passage of [*pg 83] the Data Quality Act, EPA had already developed four separate programs dedicated to ensuring the quality of information relevant to regulation, including an electronic error correction system.79 Other agencies have similar programs in place.80 These formal mechanisms are supplemented by more informal bureaucratic forces that reward the use of good science and penalize agency officials for relying on science of low quality. For example, the diversity of goals and political affiliations of agency staff might create internal checks and balances that protect against the use of low quality scientific studies to advance political ends.81

[*pg 84]

Finally, scientific norms provide their own powerful check on the quality of regulatory science.82 "Pathbreaking claims, such as the discovery of high-temperature superconductivity or 'cold fusion,' generate immediate attempts to dissect and replicate the claimed results, both before and after formal publication. Deception and error, scientists plausibly argue, are virtually impossible to maintain in such an atmosphere."83 While only a few studies, such as the Six Cities study, might generate such a high level of interest among health and environmental scientists, these are precisely the studies most in need of validation.84 At the same time, professional rewards -- prestige and publications in prominent, peer-reviewed journals -- are generally meted out only when a scientist does good work.85 Evidence or even allegations that a scientist has violated a critical scientific norm can mark the end of his career.86 If anything, such internal policing within the scientific community might be too inflexible and insular.87 Regardless, it is not surprising that the legal community ignores these professional norms and assumes that legally-mandated requirements are needed to ensure the quality of scientific research and science-based decisionmaking.88

[*pg 85]

C. Generous Mandates and Loads of Discretion

Perhaps most pertinent, but generally ignored in the assessment of the quality of agency science, is the limited extent to which agencies must produce scientific studies in the first place to justify regulatory programs. If definitive science is not required as a prerequisite for regulation, that might explain why agencies do not appear to be making significant errors in the science they do use.

Most of the protective statutes were passed with the explicit purpose of by-passing heavy burdens of proof and allowing agencies to regulate on the basis of limited scientific evidence.89 The possibility of certain catastrophic harms, such as mass exposures to toxic substances, makes this precautionary approach economically rational in many situations.90 The most familiar example of precautionary regulation is the Delaney Clause, which requires the Food and Drug Administration ("FDA") to ban a food or color additive if a single study shows that the additive causes cancer in animals.91 Many of the mandates governing EPA similarly require only limited scientific evidence to justify regulatory intervention, although they generally require more evidence than the Delaney Clause.92 Agencies, in other words, are expected to promulgate protective regulations without a great body of scientific information as a prerequisite.

Examples of this light-to-medium scientific burden for protective regulations run through many of the major environmental and public health statutes. Because Congress specified the hundreds of hazardous pollutants in need of regulation in the Clean Air and Clean Water Acts ("CAA" and "CWA," respectively), EPA needs no scientific evidence of harm and hence almost no scientific studies before promulgating technology-based standards under those statutes.93 Neither is a great body of scientific evidence needed to regulate non-hazardous pollutants under the CAA.94 Judge Williams of the D.C. Circuit has [*pg 86] in fact suggested that the statute presumes a protective zero-standard for non-threshold pollutants under section 109.95 Judge Williams presumably would have held similarly under the Safe Drinking Water Act challenge to EPA's chlorine "maximum contaminant level goals," except that the science was more comprehensive for chlorine and suggested that the presumptive zero-standard was rebutted.96 EPA, with approval from the D.C. Circuit, interprets the Federal Insecticide, Fungicide, and Rodenticide Act ("FIFRA") to allow uncertainties regarding the safety of a pesticide to be resolved against the pesticide, so that the agency can proceed to ban pesticides without definitive evidence of toxicity.97 Finally, the Resource Conservation and Recovery Act ("RCRA"),98 the Comprehensive Environmental Response, Compensation, and Liability Act ("CERCLA"),99 and the Emergency Planning and Community Right-to-Know Act ("EPCRA")100 require merely the presence of one of hundreds of nasty substances to trigger a rather overwhelming set of regulatory and liability requirements.101 While EPA needs some scientific evidence to justify categorizing a substance as one of the regulated hazardous wastes or [*pg 87] substances under CERCLA, EPCRA, and RCRA,102 the evidence need not (and in any case, cannot) be complete or definitive.103



Even if evidence that agencies actually rely on flawed science is scant, there is perhaps no harm in imposing additional checks to ensure that their science is sound. In the case of the good-science reforms, however, it is quite possible104 that the reforms will have a significant adverse impact on the quality, quantity, and transparency of agency science and on the cost and accessibility of the administrative process more generally. Both sets of "worst case" effects are detailed in turn.105

A. The Agency Science Problems Might Be Made Worse Rather than Better

Although the idea behind the reforms is to improve the quality of the science that undergirds regulation, it seems quite possible that precisely the opposite will happen. Agencies might avoid disseminating or relying on scientific [*pg 88] information whenever possible. High-quality scientific research could be lost to the strong forces of politically motivated deconstruction,106 and the already dampened incentives for the production of public health and environmental research will be further reduced.

1. Shrinking the Pool of Scientific Studies Available to Justify Regulation

a. Reducing the number of studies disseminated or used in regulation.
The most devastating result of the good-science reforms is their potential for converting science from the agencies' friend to their enemy.107 Agency risk assessments that rely on cutting-edge studies will be rewarded with bothersome Data Quality Act complaints, requests for data access, and potential challenges under regulatory Daubert.108 Agencies might find that incorporating or even publicizing recent scientific discoveries in the course of their regulatory duties is a losing proposition since it opens them up to unending attack under the good-science laws.109 As a result, the dissemination and use of cutting-edge science [*pg 89] might itself become "ossified" in the administrative process.110 Scientists might also be reluctant to share the results of path-breaking studies with agencies for fear of having their research tarnished by good-science complaints.111

To the extent that an authorizing statute permits protective regulation with little science, the good-science reforms might also encourage agencies to make regulations on as thin a scientific record as possible.112 Agencies already appear poised to avoid identifying scientific studies as "influential" since this invites more burdensome "reproducibility" requirements under OMB's Data Quality Guidelines.113 Agencies might not only downplay influential studies; they might downplay scientific studies altogether, relying instead on "policy choices" and other discretionary bases for decisions.114

Certain types of studies might be especially vulnerable to challenges under the good-science reforms and could consequently be under-utilized by the agencies.115 The first category consists of meta-analyses and mechanistic models that link distinct [*pg 90] databases or research findings together to derive a more holistic picture of an ecological or public health problem.116 These unconventional but much-needed forms of analysis are replete with features vulnerable to challenge relative to standardized, individual studies.117 To avoid such challenges, an agency might opt for less ambitious models, studying each hazard as narrowly and simply as possible.118 Rather than combating agencies' tendency toward reductionist analysis, then, the good-science reforms might encourage it.119

The second category of studies vulnerable to attack under the good-science reforms consists of those that pertain to especially controversial regulations. Even without the good-science reforms, an influential study like the Six Cities Study would receive a great deal of attention from the scientific and regulatory communities because of its significance to public health and the economy.120 This type of controversial study will likely receive even more extended vetting under the good-science reforms. Indeed, if the data access and data [*pg 91] quality reforms had applied at the time, the Six Cities Study might still be mired in access requests, complaints, and re-analysis studies, some of which would challenge peripheral, technical features of the study that would require detailed agency responses.121 Although agencies might be able to forge ahead and promulgate rules based on such contested studies,122 the wiser course would be to wait for the controversy to resolve itself or to avoid controversial studies or rulemakings altogether.123

Finally, increased procedural requirements for agencies' use of science might cause them to become more resistant to using science in their public outreach efforts.124 Data Quality Act requirements might also cause agencies to refrain from policing the quality of information in the public domain, such as bad information disseminated by private parties.125 Under such a regulatory process, minimizing agency encounters with technical information is the optimal survival strategy.

b. Discouraging the production of science.
Another worrisome effect of the good-science reforms is the subtle shifting of the burden of producing and then defending science from industry to agencies. Scientific research offers valuable information for environmental decisionmaking, and although the limits of science prevent us from obtaining complete answers, there are a number of questions for which our ignorance is remediable. Despite this fact, over the past thirty years, our scientific knowledge regarding public health and environmental harms has advanced very little. We still have no accepted way of evaluating damage to ecosystems126 or of inexpensively testing for neurological and [*pg 92] developmental harms in laboratory animals (or humans).127 We have shockingly little baseline data about environmental quality.128

Nevertheless, the good-science reforms zero in on the few studies that have been done and suggest that they be done better,129 in spite of the limited federal support for environmental research130 and a growing recognition that research [*pg 93] on environmental problems is under-produced by private parties.131 Rather than financing more studies, scarce resources will be spent debating the quality of the handful of studies that have been done.132

By contrast, the private sector can avoid regulation simply by resisting the production of valued research.133 There is little incentive for a regulated entity to invest in voluntary research that could produce results that not only lead to more stringent regulatory requirements,134 but that could impair, rather than improve, the marketability of its products.135

Finally, a number of commentators have expressed concern about the effects of the good-science reforms on scientists themselves.136 Both the American Association for the Advancement of Science ("AAAS") and the NAS have [*pg 94] expressed their concern that the Data Access and Data Quality Acts could be used to harass scientists and discredit good research.137 Preliminary experience with these reforms lends credence to these concerns.138 It is also possible that the threat of these challenges could discourage scientists from undertaking research on issues relevant to regulation. Threatening letters recently sent to academic institutions by an industry-funded nonprofit, the Center for Regulatory Effectiveness ("CRE"), warning them to adopt procedures to comply with the Data Quality Act, only serve to reinforce fears that the good-science reforms might be used to intrude on scientific freedom.139 The costs imposed on [*pg 95] scientists' time and reputations by the good-science reforms could ultimately deter the most ambitious scientists from research that has any bearing on policy. Again, this result seems to be precisely the opposite of what is sought by the good-science proponents.

2. The Quality of Regulatory Science Might Be Impaired Rather than Improved

a. A dysfunctional peer-review process.
The good-science reforms are expected to improve the quality of science by expanding peer review to a larger circle of interested parties, including those who have a stake in the outcome of studies that inform regulation.140 But this approach is the antithesis of the way that science is supposed to operate.141 Scientific peer review is supposed to consist of unbiased external review by those trained in the scientific method.142 Over the years, there has been considerable discontent with the peer-review process, much of it related to concern over the lack of objectivity of those doing the reviews. Even matters as seemingly insignificant as the affiliation or fame of the researcher can affect the outcome of peer reviews.143 These concerns do not bode well for extending peer-review responsibilities to biased regulatory participants. An expanded peer-review process conducted by those with the highest [*pg 96] stakes in the outcome is not likely to improve the quality of the scientific research.144 Instead, resourceful stakeholders will make mincemeat of virtually every regulation-relevant study they find distasteful.145 As Professor Jasanoff observes: "The most obvious source of bias in the regulatory environment . . . arises from the fact that expert referees may either be formally affiliated with particular interest groups or otherwise have a stake in the outcome of the regulatory process."146 Good science is unlikely to result from this bad process of peer review.147

Even the Daubert reform, which employs the courts as the ultimate gatekeeper of the quality of agency science, seems destined to impair, rather than improve, the quality of agency science.148 History has shown, in part from the inadequacy of the Daubert opinion itself, that the courts are unsuccessful in defining "good science" with any precision or in serving as rigorous reviewers of agency science.149 Moreover, in contrast to the agencies, judges have neither the [*pg 97] information nor the expertise to competently adjudicate challenges to the quality of agency science.150 Still more curious, it is not clear what oversight role the courts are being asked to play. As Professor Pierce has pointed out in recent commentary, while the reasons for the courts' gatekeeping role in jury trials are apparent, it is not clear why appellate courts, rather than agencies, are making choices about which studies are unreliable. The courts are certainly not ensuring that the science is sound so that some other, less expert party, like a jury, can rely on it. Yet if the courts' scientific competency is less than that of the party they are reviewing, it is unclear what the courts are contributing to the exercise.151 In all likelihood, as is borne out in past cases, the courts will either rubberstamp agencies' science or undertake a searching review.152 Neither is likely to provide a valuable contribution to agency decisionmaking. In those instances in which courts have undertaken searching reviews of agency science, their opinions often reinforce rather than allay concerns about judges' scientific competency.153 The possibility of judges introducing their own biases through such a process presents yet another significant risk.154

[*pg 98]

b. Exempting industry data from the good-science requirements.
Ironically, the good-science reforms target federally funded research and exempt a large body of private science used extensively in licensing and permitting decisions.155 The reforms thus leave untouched much of the regulation-relevant science that has historically been the most problematic in terms of fraud and bias.156 The Data Access Amendment requirements apply only to federally funded research that supports a federal action.157 The DQA requirements apply only to science that is "disseminated" by an agency,158 and they appear to exempt studies produced by a company to support an application to market a product or obtain a pollution permit.159 OMB has also interpreted the term "dissemination" to exempt "public filings," which could encompass industries' documentation of compliance with the environmental laws, including the basis for their Toxics [*pg 99] Release Inventory estimates submitted under EPCRA.160 Together, these DQA exemptions seem to insulate virtually all mandated industry research from the good-science requirements.

Broad exemptions of industry information are reinforced by protections available through trade secret law and confidential business protections.161 OMB honors these protections in defining the types of science that can be subject to challenge, despite the fact that the universe of confidential information appears much larger than is justified by the actual risks to intellectual property.162

It is perhaps even more illogical that the DQA requirements, which are designed to root out biased science, do not require disclosure of the institutional affiliation or source of funding for studies or critiques prepared by interested parties.163 Thus one proxy for ensuring objectivity in research, albeit an imperfect one, that is used by most scientific journals -- disclosure of scientist affiliation and funding -- is omitted from a law that purports to make scientific objectivity a primary goal.164

B. Harm to Administrative Processes

Good-science reforms are not only capable of impairing, rather than improving, the quality of science underlying public health and environmental [*pg 100] regulation; they also threaten to impair statutory and administrative processes more generally. By imposing new procedural requirements on agency use of scientific evidence, the reforms might conflict, either directly or indirectly, with agencies' statutory mandates that they act expeditiously and err on the side of health and the environment. Refocusing regulatory deliberations on technical minutiae, such as the methodological rigor of individual scientific studies, could also alienate the attentive public, who have neither the resources nor the expertise to engage in these reductionist battles that impact regulatory policy in important but subtle ways. Finally, the reforms could drain scarce agency resources without any perceptible benefit to the quality of regulatory outputs.

1. Conflict with Statutory Mandates
The good-science reforms might undercut the ability of agencies to set protective standards pursuant to their statutory mandates in a variety of ways. In some settings, the good-science reforms might directly conflict with an agency's mandate. For example, to the extent that Congress explicitly requires EPA to consider all "available information" in promulgating protective standards, the exclusion of studies under the good-science reforms conflicts with EPA's mandate.165 Similarly, when Congress commands EPA to protect the public health from the release of hazardous substances under the Superfund statute, it explicitly directs that disagreements over the quality of EPA's science be set aside until after cleanups are completed.166

The reforms might also be read to suggest that, contrary to authorizing statutes, a few studies are not enough to justify regulation, or that each science-policy or "default judgment" in a risk assessment must be backed by scientific support.167 A number of the petitions filed under the Data Quality Act have challenged policy choices made at these default junctures.168 If these challenges [*pg 101] are successful, they could effectively halt protective standard-setting by requiring a definitive body of research as a prerequisite to regulation.169

Finally, the good-science requirements could be read to limit the studies that can be considered in regulatory decisionmaking. In a recent complaint filed under the DQA, for example, affected industries argue that EPA cannot consider studies published by researchers at Berkeley in assessing the risks of the herbicide Atrazine because the studies were not done pursuant to protocols validated in advance by the agency.170 If the industry's DQA complaint is successful,171 the good-science reforms will have transformed a statute that places the burden on industry of showing that pesticides are safe172 into a statute that not only places this burden on EPA, but requires that research undergo a centralized approval process before it can inform regulation.

The effects of regulatory Daubert on agencies' ability to promulgate protective standards are less clear, but one strain of the proposal could cause even greater conflict with the protective mandates by essentially barring protective standards altogether. In a subset of tort cases in which Daubert has been applied, the courts have made de novo judgments about whether a plaintiff's scientific evidence, in total, provides a "sufficient" basis to meet the "more probable than not" standard of causation.173 When judges decide that the weight [*pg 102] of the evidence does not support a finding of causation, they exclude the expert testimony under Daubert. Some courts have also applied Daubert and concluded that animal studies are insufficient to create a jury question on causation, in part because they are an unreliable way of determining potential harm to humans.174 If appellate courts use these cases as precedent for applying regulatory Daubert, they could also exclude most of an agency's basic studies, thereby undercutting the commands of the precautionary legislation.175 Even if regulatory Daubert is implemented in a more innocuous fashion,176 challenges could still divert agency resources and slow the standard-setting process.177

2. Eliminating Participants and Skewing Review Toward Those with the Highest Stakes
The good-science reforms offer new procedural tools for regulatory overseers (loosely referred to as "the public") to challenge the science underlying regulation; yet because these tools require considerable expertise from users, they are effectively available to only a small set of attentive regulatory participants.178 The scientific [*pg 103] expertise and resources needed to employ these tools will exclude most attentive regulatory participants. It is thus no surprise that a majority of the DQA petitions filed to date have been submitted by regulated industries or their advocates.179

To the extent that participation is skewed towards those who have both a vested interest in the outcome of the regulatory project and a high level of expertise and resources (namely, regulated industry), the reforms will be used [*pg 104] only to combat perceived problems of over-regulation.180 If science has been misused in ways that under-protect the public, there might not be sufficient public-interest advocates with the resources and expertise to catch the problem.181 Even if public advocates were plentiful, however, challenges to the science underlying regulation only serve to delay protective standard-setting, thus forcing these advocates to choose the lesser of two evils -- unjustifiably delayed standards or unjustifiably weak standards.182

Well-heeled participants might not only use the reforms legitimately, but also illegitimately, and in ways that harm the public interest.183 Petitions against [*pg 105] the quality of an agency's science, for example, can be filed at any time, by anyone, and can include as many complaints and challenges (including those directed at the agency's policies) as the petitioner desires. There are no meaningful costs or sanctions for filing meritless complaints under the DQA or for requesting data and re-analyzing it in problematic ways under the Data Access Act.184 In contrast, the benefits of abusing these provisions can be considerable to private parties; at best, they can lead to the exclusion or discrediting of pivotal studies that undergird protective regulation, and, at worst, they can divert an agency's resources and priorities away from developing protective policies.185

Perhaps more damaging than the fact that only an elite group is able to benefit from the reforms is the potential for the reforms to be used, intentionally or inadvertently, to distance or even exclude attentive participants from regulatory deliberations. These means of exclusion are subtle, but potentially of great significance. First and most pervasive, by focusing rulemaking controversies on the detailed and often arcane methodological features of individual studies -- rather than on the much more significant policy decisions that fill the large gaps that science leaves behind -- environmental regulation is portrayed as being predominantly about science, and, even more misleadingly, about the details of individual studies.186 Virtually every DQA complaint filed to date exemplifies this type of covert attack on agency precautionary policies under the charade of a challenge to bad science.187 Indeed, in the course of responding [*pg 106] to incremental DQA complaints, often filed on individual studies without any statutory reference point, agencies may find themselves making important policy decisions under the inauspicious cover of a DQA challenge.188 Thus, the good-science reforms can be strategically used to obfuscate the pivotal role that policy, rather than science, plays in environmental decisionmaking and preclude broader public interests from weighing in on what, at bottom, are fundamental policy decisions.

Additionally, by suggesting that current regulations are based on "bad science," the good-science reforms can further erode public trust in agencies and the work they do.189 Studies have shown that trust is asymmetrical: it is easy to lose and hard to gain back.190 By implementing reforms premised on the proposition that agencies are unable to use science wisely, the reforms needlessly destroy faith in the regulatory process.

3. Increasing Administrative Costs
Although it is difficult to predict at this early stage how costly implementation of the good-science reforms will be, the costs could be very high.191 Both the data access and data quality reforms necessitate elaborate record-keeping192 and raise a number of unresolved and potentially litigious questions, some of which are already presenting themselves in the courts.193 The high level of scientific [*pg 107] activity within agencies like EPA and OSHA also provides ample opportunity for regulatory critics to file correction requests, gain access to and re-analyze data, or otherwise engage agencies in time-consuming bureaucratic reviews.194 Although to date the DQA complaints appear to be surprisingly few, most of the filings are technically detailed and attack the most important regulatory projects.195 An industry-sponsored organization even filed a twelve-page peremptory letter warning EPA that it would file a DQA complaint if the agency relied on comments submitted by an environmental organization.196 The absence of deterrents for nonmeritorious DQA complaints and unnecessary data access requests,197 moreover, makes abuse of process essentially costless.198

[*pg 108]

The costs of the Daubert proposal are even more difficult to predict because the proposal itself is unclear,199 but since all of its variations increase the scrutiny of -- and, hence, the extent of litigation over -- agency regulatory activities, the regulatory-Daubert reform could be exceedingly costly.200 If the appellate courts essentially conduct the equivalent of in limine Daubert hearings on challenged science, the costs to the courts and agencies might be higher still.201 In limine hearings can take days, the opinions are often lengthy, and they tax the parties' and courts' resources.202 At the same time, the benefits of the regulatory-Daubert proposal seem the most dubious.203

[*pg 109]

C. Summary

If these worst-case predictions of how agency officials might react to the good-science reforms are even close to plausible, it is likely that the reforms will make agency science worse rather than better, while simultaneously exacerbating other types of dysfunction already present in the regulatory process. But even if these multiple concerns are overly pessimistic, or significant problems with the quality of regulatory science have been overlooked, it is clear that in the current debates, analysis has taken a back seat to the reform proposals.



Over the past decade, good-science reforms have not only dominated but also effectively sidetracked efforts to study and improve the use of science in public health and environmental regulation. Virtually every legislative effort to reform science-based regulation addresses only the quality of the scientific studies that underlie regulation.204 But as the frenzy over the quality of individual studies used in regulation continues, more significant issues concerning the quantity, or weight, of the available science needed to justify and support protective regulation are ignored. Important science-policy questions relating to the amount of science required to justify a protective standard, and the amount needed to rebut the presumption of protection once a standard is in place, receive little attention, despite the fact that the answers to these questions are far from clear.

The quantity or weight-of-the-evidence problem originates from the difficulties in rebutting a central and arguably inescapable assumption of protective [*pg 110] regulation: the "protective assumption."205 The protective assumption holds that there is no safe dose for a variety of strong carcinogens and other toxins.206 Existing scientific knowledge and precautionary mandates work together to support this assumption,207 but it introduces two enormous difficulties for the practice of regulation. First, the protective assumption implies that the safe level for any activity or substance is zero or near zero, and this has costly implications for regulation.208 In some situations, this expensive consequence of the protective assumption is avoided simply by narrowing the set of substances or activities to which the assumption applies209 or by switching to a science-blind basis for setting regulatory standards, such as technology-based standards.210

Second, because of pervasive uncertainty, it is not clear when the protective assumption can be satisfactorily rebutted with additional scientific research. [*pg 111] This is the "rebuttal problem." While mechanistic studies and epidemiological research can sometimes support a "best" scientific guess that there is, in fact, a threshold below which a particular carcinogen is safe, this research generally remains inconclusive.211 Thus, there is little scientific guidance for determining the point at which existing research satisfactorily demonstrates that there is a safe dose for a given toxin, or determining what that safe dose might be.

Much like the uneven parallel bars in gymnastics, then, the quantum of evidence needed to invoke the protective assumption is generally lower than the quantum or weight of the evidence needed to rebut the presumption once established. As a result, regulated entities work hard to keep substances out of the category to which the protective assumption applies. At the same time, regulated parties might charge an agency with using "bad science" in refusing to conclude, based on the weight of the evidence, that the protective assumption has been successfully rebutted, even though this rebuttal cannot be based exclusively on science in most cases.212

The near impossibility of rebutting the protectionist assumption might explain why Justice Stevens struck down OSHA's expensive worker-protection standard for benzene and raised the quantum of evidence needed to justify a protective standard.213 Discomfort with the onerous regulatory consequences that flow from the protectionist assumption also seems to lie at the heart of recent critiques of environmental regulation by Justice Breyer and other prominent scholars, who condemn agencies' tendency to over-regulate a few substances (by applying the protectionist assumption) while under-regulating or ignoring the vast majority of toxic substances.214 Even the examples given by [*pg 112] good-science proponents reflect frustration with the low quantum of evidence needed to establish stringent regulation and the high quantum required to rebut the protectionist assumption.215

In this Part, I argue that the good-science problem is in large part a result of the disagreements over the quantity or weight of science needed to set and then rebut protectionist policies, rather than the quality of that science. The misdirected focus on the quality of science might be an accident,216 although it is more likely to be at least partly deliberate. Regulated parties fare better when the focus is on the quality of scientific research, rather than the value choices undergirding protectionist policies.217 Whatever the reason, only after the underlying problem is correctly diagnosed will it be productive to develop competing reform proposals.218 This Part attempts to begin such a dialogue on [*pg 113] alternate conceptions of the fundamental problem in defining the role of science in regulatory policy.

A. The Rebuttal Problem

Public health and environmental mandates generally take a precautionary approach to regulation.219 Because of the potential for catastrophic loss220 and the inadequacies of the tort system,221 environmental and public health statutes direct agencies to err on the side of protecting public health when promulgating regulations, even when information regarding the probability of harm is incomplete.

Under these protective statutes, agencies often have the initial burden of establishing that regulation is needed, but this burden is lighter than the common-law requirements for causation.222 Uncertainties are generally resolved in favor of protection. This includes the adoption of conservative working assumptions, supported in part by science and in part by protectionist mandates, that there is no safe dose for carcinogens and that animals respond to toxins in ways similar to humans.223 Backed by these conservative working assumptions, as long as the requisite quantum of scientific evidence (the quantum varies among statutes) suggests that there is no safe dose for a toxin,224 an agency's burden is met under a protective mandate, and it may promulgate standards that ensure the requisite level of protection specified by the statute.225

[*pg 114]

The "rebuttal problem" arises in determining whether scientific evidence, often produced after a protective standard has been promulgated, successfully rebuts the assumption that there is no safe dose for a substance.226 To rebut the protective assumption, a regulated entity or other party must produce scientific research that reveals either that the no-threshold assumption does not apply or that the toxin does not affect humans in ways similar to animals. Such a rebuttal, however, is likely to be incomplete because of the inability to validate this alternate evidence through human testing.227 Without some artificial policy determination about when scientific evidence is sufficient to rebut the protectionist assumption under a particular statute, the protectionist assumption could become effectively irrebuttable. An agency could reasonably insist on almost definitive proof of safety before loosening a standard or allowing a product on the market.228

Congress arrived at an ingenious accommodation to the rebuttal problem in several protective statutes by circumventing science and basing pollution standards on reductions achievable through application of certain levels of pollution-control technologies.229 By using technology-based standards, agencies are [*pg 115] able to avoid decisions about the amount of scientific evidence needed to justify or rebut a presumption of protection.230

In the remaining statutes, however, Congress provides little guidance either on the point at which evidence supports a protective standard or, more importantly, the point at which the protective assumption can be rebutted.231 When is a Superfund site clean?232 At what point is hazardous waste no longer [*pg 116] hazardous?233 When does scientific research convince us that a congressionally listed toxic air pollutant is not really toxic?234 Combining existing scientific research with the protective assumption of no safe dose, the true answer in most instances is that "clean" or "nonhazardous" will be achieved only when the presumptive toxins are eliminated, or nearly so.235 Since an agency cannot insist on pristine conditions or zero pollution, however, it finds itself preoccupied with subtle adjustments to the quantum of evidence needed to support or to rebut [*pg 117] protective standards, with a resolution emerging (if at all) only after years of regulatory struggling.236

The absence of clear rebuttal criteria for protective standard-setting is often mistaken for bad science. The agency, it is argued, is scientifically incompetent in adhering to an overly protective standard because "good science" suggests that the presumption of protection is no longer warranted.237 In truth, the disagreement is not a scientific dispute over the quality of individual studies, but a science-policy disagreement over the quantity or cumulative "weight" of the available science needed to rebut the presumption of protection. Certainly the [*pg 118] quality of each individual study is one factor in the rebuttal assessment, but that quality is a relative consideration, and its import depends on the results of the study, how it relates to other available studies, and the statutory demands for precaution.238 Focusing excessive attention on assessing the quality of each individual study in fact deprives the agency of a more comprehensive method of evaluating the evidence that considers the cumulative results of all available evidence.239

B. Institutional Adjustments to Counteract the Rebuttal Problem

Legislative and agency silence regarding how much evidence is sufficient to first support and then rebut various protective standards creates an opportunity for mischief. Not surprisingly, agencies, courts, executive branch officials, and even Congress have attempted to circumvent the rebuttal problem in several subterranean ways. In some cases, the adjustments work to heighten the standard for inclusion so that fewer substances are regulated in a precautionary way. In other cases, the adjustments adopt ad hoc rebuttal criteria that weaken protective regulation, often invisibly.

1. Raising Agencies' Initial Burden of Proof
The first and crudest approach to circumventing the rebuttal problem is simply to increase the quantum of research required to justify a protective standard. Instead of clarifying the amount of evidence needed to rebut a protective standard, this adjustment simply works to reduce the number of substances for which an agency is able to support a protective standard in the first instance.240 This results in fewer rebuttal determinations because there are fewer protective [*pg 119] standards to rebut.

The most notorious example of this approach is Justice Stevens's interpretation of OSHA's mandate in the Benzene Case.241 Concerned that, absent judicial intervention, workplace safety standards could be exceedingly costly because of the protectionist mandate and impossible rebuttal,242 Justice Stevens interpreted the Occupational Safety and Health Act in a way that increased the amount of evidence required for the agency to justify a standard.243 This increase in the agency's burden of proof ensured that fewer standards would be promulgated, thus substantially reducing the occasions on which the rebuttal problem would arise. Legal scholars have questioned the correctness of Justice Stevens's interpretation.244 Nonetheless, it has remained influential in interpreting the quantum of scientific evidence needed to promulgate protective standards -- not only under the Occupational Safety and Health Act but under other protective mandates as well.245

Congress and the executive branch have also attempted to raise the burden of proof for agencies under protective mandates, thereby reducing the number of difficult rebuttal determinations. The good-science reforms, along with a small graveyard of unpassed regulatory reform bills, impose various quality checks on agency science that cumulatively serve to raise the burden of proof [*pg 120] that must be met to justify protective standards.246 In fact, under some rejected regulatory reform bills, agencies would have been required to support proposed standards with economic studies showing they were cost-justified as well as with a variety of elaborate risk evaluations.247 Agencies and the White House, particularly under administrations that favor deregulation, have also interpreted agency authority in ways that have increased the level of scientific evidence needed to justify regulation.248 Under these internal directives, agencies must ensure that their science is not only of high quality, but also of sufficient magnitude or quantity, before "rushing" to regulation.249 These initiatives conflict with the spirit and purpose of protective mandates.250

Increasing the quantum of evidence needed to justify regulation still leaves the rebuttal problem untouched. While these initiatives serve to reduce the [*pg 121] number of standards promulgated, once a protective standard has been justified, it is still onerous and effectively irrebuttable. This then leads to the over- and under-regulation problem that troubles Justice Breyer and other regulatory scholars.251 In the case of substances for which there is considerable research suggesting a hazard, agencies can still regulate stringently, in some cases defaulting to a presumptive standard of zero. In other cases, in which the evidence does not meet the high threshold for what is required to justify a protective standard, there will be no regulation whatsoever.

2. Cost-Benefit Analysis
Cost-benefit analysis requirements present an economic approach to rebutting a zero or near-zero standard for a presumptive human carcinogen by demanding that the benefits of ensuring protection exceed the costs of imposing a protective standard.252 Although there are many legitimate criticisms of loosening protective standards based on such economic valuations, the idea of rebutting the precautionary assumption with economic factors has some theoretical appeal.

Even if one is willing to tolerate the valuation of human lives (which many are not), science-related problems arise in lumping carcinogens and other hazards whose risks can be quantitatively approximated together with hazards whose risks cannot.253 Since use of cost-benefit analysis to rebut protective regulation depends, by definition, on quantification of risks and benefits,254 risks [*pg 122] and other harms that cannot be quantified are generally ignored.255 As a result, only a fraction of the risks posed by a substance might be counted in a cost-benefit analysis. For example, neurological, hormonal, and developmental harms are often ignored in cost-benefit analysis because their risks cannot be quantified.256 Protective policies can look quite expensive when there are few quantitative benefits to monetize, and protective regulations sometimes fail cost-benefit tests for precisely this reason.257

The use of cost-benefit analysis to rebut and sometimes to justify protective standards is mandatory under a few statutes.258 Under one provision of the Toxic Substances Control Act ("TSCA"), EPA is actually required to support its regulations with a cost-benefit analysis that effectively raises the agency's burden of proof to the point at which most protective regulatory actions fail.259 Under several other statutes, cost-benefit analysis remains a tool for rebutting the presumption of protection: after an agency justifies a protective standard, [*pg 123] opponents must show, with a cost-benefit study, that less protection is warranted given the costs and benefits of the standard.260

The most prevalent use of cost-benefit analysis occurs informally, typically under an executive order requiring cost-benefit analyses for all economically significant rulemakings.261 In this setting, cost-benefit analyses are used to reorder agency priorities or to stall or foreclose some protective rulemakings.262 The legality of using such analyses for protective standard-setting is questionable.263

3. Rudderless Rebuttal Decisions
Agencies appear to make the majority of their rebuttal decisions on a case-by-case basis. Most of these decisions are made independently, without the benefit of an over-arching set of rebuttal guidelines. Indeed, there is little indication that agencies make any effort to ensure consistency in making rebuttal decisions from one standard to the next.

This individualized approach has proved useful to agencies in escaping the scrutiny of courts and other critical onlookers.264 Since the courts struck down OSHA's heroic efforts to develop universal principles for protective [*pg 124] standard-setting,265 OSHA and other agencies that want to avoid similar outcomes in the future focus on only one hazard at a time, supporting decisions with highly technical records that do not purport to provide consistency from case to case.266 Indeed, in some cases judicial review not only encourages agency decisionmaking to be "rudderless," but actually increases regulatory incoherence, in large part through unhelpful rulings on rebuttal determinations.267 For example, in Chlorine Chemistry Council v. EPA,268 the D.C. Circuit struck down EPA's zero [*pg 125] standard for chloroform in drinking water, finding that the conservative assumptions supporting the standard had been rebutted by the "best available" evidence. Yet the court gave EPA no guidance for determining the point at which scientific evidence is sufficiently robust to rebut a conservative assumption or for weighing the rebuttal evidence in reaching a new standard.269

Agencies might also find that individualized rebuttal decisions are easier to navigate through the checks and balances of the political system than decisions made in a more systemic and open manner. OMB staff and hostile congressmen usually do not have time to extract policy judgments from individualized, technical rebuttal decisions.270 Indeed, the more opaque an agency's [*pg 126] explanation of its rebuttal determination, the more likely the regulation will survive second-guessing from outside reviewers.271

It is thus no surprise that under some protective mandates, agencies gravitate toward individualized rebuttal determinations that can appear inconsistent when viewed as a whole.272 EPA's Superfund cleanup decisions have been notoriously variable, leading not only to a lack of predictability,273 but to arguments that wealthier communities enjoy more thorough cleanups than poorer communities because of their political clout and greater resources for participation.274 Professor McGarity has found that the rebuttal determinations made for pesticide-specific food tolerances under the Food Quality Protection Act are inconsistent, and that some tolerances violate the explicit objectives of the statute.275 EPA has no uniform criteria for assessing the point at which ambient standards for air or water pollutants can deviate from the protective assumption.276 In fact, the absence of coherent rebuttal principles under section 109 of the CAA was the primary complaint of the D.C. Circuit in invalidating EPA's particulate and ozone standards in American Trucking Ass'ns v. EPA.277 EPA's decisions for [*pg 127] delisting air toxins also appear to proceed without any unified criteria for determining when the no-threshold presumption is ultimately rebutted. Similarly, EPA's review of existing toxic substances under TSCA lacks central principles for determining the appropriate testing requirements for suspect chemicals.278 Finally, the agency's increased reliance on negotiated rulemaking and related forms of consensus-based regulation indicates that ad hoc, case-by-case decisionmaking might be EPA's preferred alternative for reaching rebuttal determinations in a number of circumstances.279

While making rebuttal decisions on a case-by-case basis has allowed agencies to make substantial regulatory progress, this progress has not been costless. These individual determinations have in some cases compromised or even conflicted with protective statutory mandates, have obfuscated underlying policy decisions, and have proved effectively unreviewable for most interested onlookers, especially those who have relatively few resources and little technical expertise.

C. Reform

The scope and pervasiveness of the rebuttal problem might prove too great for comprehensive reform. Ideally, though, a system-wide reform should be implemented to ensure consistency and better opportunities for the public vetting of critical policy decisions. The remainder of this section puts incremental reforms to the side and sketches the contours of a large-scale transformation.

1. Defining the Problem
The first step towards large-scale reform is an accurate characterization of the problem. To this end, more work must be done to study the rebuttal problem [*pg 128] under various protective statutory mandates. Professor McGarity's work on the implementation of FQPA provides an excellent model of the kind of in-depth research that can be done to better understand the rebuttal problem.280 Other studies of agency decisionmaking under protective mandates provide similarly valuable insights into the rebuttal decisions, although some studies might need to be updated or focused more specifically on the agency's rebuttal determinations.281

2. The Steps for Reform
If additional research confirms the existence of the rebuttal problem, the agencies' task is relatively straightforward: They must specify rebuttal criteria. To be done properly, this requires three separate steps, though the first might be adequate in many circumstances.

a. Statute-specific interpretations of rebuttal requirements.
Agencies must first provide a general statement of how they interpret each mandate with respect to the quantum of evidence needed first to justify and then to rebut the protective working assumptions.282 Each statute-specific interpretive statement must provide an indication of how precautionary the statute is with respect to rebuttal, as well as how the protective mandate compares with other statutes.283 After the evidentiary demands of a protective mandate are interpreted in narrative terms, they can then be translated from general evidentiary requirements into technical specifications.

b. Technical criteria for rebuttal.
In the second step, agencies should specify the kind of research needed to justify and rebut the protective standards under individual regulatory programs.284 In listing toxins as hazardous substances under EPCRA, for example, EPA has developed technical specifications that effectively establish the quantum of science required to justify (but not rebut) a listing decision.285 While these specifications could be more detailed, they are nevertheless a strong effort toward technically delineating the agency's burden of proof.286

For rebutting a presumption of protection, a similar checklist could be devised, although more weight-of-the-evidence judgments would need to be made in determining whether particular protective assumptions had been rebutted.287 These rebuttal criteria could specify the general types of research that help to rebut the protectionist assumption for categories of substances, along with some indication of the level of certainty required for such a rebuttal under a given statutory program.288 Technical specifications for rebuttal could also include the establishment of a "rebuttal process" that specifies the types of review available and the opportunities for notice and comment.

c. Peer review of individual rebuttal determinations.
Under limited circumstances, science advisory boards could be used to review agency rebuttal determinations.289 These boards could provide one of two services. First, they could review agencies' general, statute-specific scientific criteria for rebutting protective regulations.290 Second, in extraordinary cases, the boards could review [*pg 130] agency rebuttal determinations for individual substances before rulemakings on those substances are final.291 With respect to either service, these boards would make their rebuttal determinations in accordance with agency interpretations of the applicable statute.292 Care would have to be taken, however, to ensure that the peer-review process does not intrude upon the substantial policymaking required in making rebuttal determinations.293

3. The Form of Reform
Expecting agencies to voluntarily detail the criteria they use when making rebuttal decisions is naive. The pervasiveness of the rebuttal problem suggests that, as an institutional matter, clear rebuttal criteria are either unimportant to agency survival or could actually work to impede an agency's ability to achieve its regulatory objectives. Indeed, a general lesson that emerges from studies of protective regulations is that most institutional actors benefit from keeping policy choices opaque rather than transparent, especially when the policies are controversial.294 The more transparent the underlying issue becomes, the more likely the decision will become controversial, attract comments, and be challenged through political or judicial review.295

Realistically, pressure on agencies must come from the outside to encourage them to provide well-defined rebuttal criteria. Executive branch initiatives could improve the situation, but legislation would seem to be the most stable and comprehensive method of forcing agency change. For example, the reform legislation could encourage, but not require, agencies to establish explicit inclusion and rebuttal criteria for standards promulgated under individual protective mandates. Agencies' failure to develop these dual criteria would not be grounds for judicial review, but once agencies did develop these policies, the [*pg 131] criteria would be a final action subject to judicial review.296 To give agencies the incentive to develop these policies, Congress could provide rewards. For example, Congress could specify that all individual regulations promulgated in a way that is "not inconsistent"297 with the agency's inclusion or rebuttal criteria must be upheld by the courts unless a standard has no conceivable basis in fact or law.298

Whether there is sufficient political interest to coax Congress into passing such legislation is less clear. As described, regulated entities experience both gains and losses from the current, multi-faceted approach to protective regulation.299 If a cross-section of stakeholders take interest in the problem, however, Congress might enjoy political gains from providing more systematic directions on the amount of science needed to rebut protective standards.300

[*pg 132]

4. Summary

Whatever the mechanism for accomplishing reform, the result could bring substantial benefits to the regulatory process. First and foremost, identifying more concrete inclusion and rebuttal criteria would provide fairer and more consistent regulatory outcomes. Without clear guidelines, agency staff enjoy nearly complete discretion in promulgating protective standards and other regulations. The resulting standards sometimes deviate from statutory goals or administration policy in ways that escape notice.301

Second, clarifying the inclusion and rebuttal criteria would help focus the issues for judicial review and ultimately reduce the variability in the outcome of such review. In part because the underlying issues -- the standards for inclusion and rebuttal -- have been ignored in various challenges, the courts of appeals have taken wildly different approaches in their review of protective rulemakings.302 In some cases, the agency receives great deference with regard to its regulatory decisions; in other cases the court reviews the agency's scientific determinations in tedious detail; and in still others, the court second-guesses agency policy decisions as if they were predominantly technical. Without explicit guidance from Congress or the agencies, this erratic review of agency science is forgivable. Establishing explicit standards for protective regulation should go a long way in providing courts with the guidance needed to isolate the true issues in dispute.

Third, since inclusion and rebuttal criteria must be based on the level of protection specified under individual protective mandates, the rebuttal reform embraces, rather than ignores, authorizing statutes. As discussed above,303 this refocusing on original statutory goals represents a marked improvement over the varied adjustments that have emerged over time. Many of these adjustments not only ignore statutory mandates, but directly conflict with them.

Finally, when it is recognized that the core problem in the use of science in regulation concerns the quantum of evidence necessary first to justify and then to overturn protective regulation, rather than simply the quality of individual studies, the problem is revealed to be in large part a science-policy question: when is the accumulation of evidence enough to justify accepting a lower level of protection? This clarification will provide much more meaningful opportunities for an attentive public to contribute to the regulatory process.



As their name implies, the "good science" reforms tend to focus exclusively on the quality of agency science. A close look at much of the public health and environmental regulation put in place over the past three decades suggests, [*pg 133] however, that the real problem is not with the quality, but the quantity of science needed to justify and support protective regulation. Rather than direct administrative attention toward reviewing individual studies and determining whether they are scientifically acceptable, as the good-science reforms do, the rebuttal problem refocuses attention on when the cumulative weight of the evidence is enough first to justify, and then to rebut, a protective policy. Only after the rebuttal criteria are clarified will the contributions of science to regulation become more transparent and accessible.



Copyright © 2003 by Wendy E. Wagner
* Joe A. Worsham Centennial Professor of Law, University of Texas School of Law.
I am most grateful to John Applegate, Rob Mays, and Sid Shapiro and the participants in this Symposium for their suggestions and comments on an early draft of this paper. Thanks also to Natalie Asturi, Ashley Kever, Christopher Smith, and Hope Williams for helpful research assistance.
1. Pub. L. No. 105-277, 112 Stat. 2681 (1998); see infra text accompanying notes 19-23.
2. Pub. L. No. 106-554, § 515, 114 Stat. 2763 (2000); see infra text accompanying notes 24-28.
3. See infra text accompanying notes 29-30.
4. Daubert v. Merrell Dow Pharm., Inc., 509 U.S. 579, 593 (1993).
5. See id. at 593-94 (listing four factors that assist in determining what constitutes scientific knowledge).
6. For example, although science can provide information on the fatality rates in cities with different levels of fine particulates, the effects of fine particulates on the lungs of laboratory rats, and even possible physiological mechanisms by which fine particulates might cause severe health effects like arrhythmia, this accumulated scientific knowledge still falls quite short of revealing the concentration at which fine, air-borne particulates will cause fatal effects in only one out of every one million persons. See generally Jocelyn Kaiser, Showdown over Clean Air Science, 277 SCIENCE 466 (1997) (detailing the debate over whether available scientific studies can provide sufficient information to determine a "safe" level of particulates in ambient air).
7. These extrapolatory questions can certainly be informed by science. For example, recent discoveries in toxicology are refining the ability to predict the shape of the dose-response curve for carcinogenesis in specific categories of toxins. See, e.g., Marvin Goldman, Cancer Risk of Low-Level Exposure, 271 SCIENCE 1821 (1996) (discussing research that suggests cancer does not follow a linear dose-response curve and that a zero-threshold assumption might no longer be supported by the evidence, at least in some cases). But, in most cases, an unverifiable extrapolation must be made between mechanistic theories and the exposure level at which humans are unlikely to be affected. Even this two-dimensional zigzag oversimplifies the fluidity of policy and science. For example, deciding which data sets or studies to select or analyze (for the points above the line) also constitutes a mixed science-policy decision. For a more extended discussion of this zigzag of science and policy, see Wendy E. Wagner, The Science Charade in Toxic Regulation, 95 COLUM. L. REV. 1613, 1622-27 (1995).
8. See TED GREENWOOD, KNOWLEDGE AND DISCRETION IN GOVERNMENT REGULATION 15 (1984) ("[T]he charge has frequently been made that agencies have low scientific competence."). See generally infra Part IV.A.
9. Isolating these imperfections in research is not a simple matter and requires cooperation from the original researchers, as well as significant resources for validating or reproducing research studies. See, e.g., NAT'L ACAD. OF SCI., ENSURING THE QUALITY OF DATA DISSEMINATED BY THE FEDERAL GOVERNMENT: WORKSHOP # 2, at 9-20 (March 22, 2002), available at [hereinafter NAS, DATA QUALITY TRANSCRIPT, DAY 2] (presentation by Robert O'Keefe, Health Effects Institute) (describing the involved process for reviewing studies and data, and the time and resources that this process requires).
10. See, e.g., Letter from Jim J. Tozzi, Member, Board of Advisors, Center for Regulatory Effectiveness, to Hon. Stephen L. Johnson, Assistant Administrator for Prevention, Pesticides and Toxic Substances, U.S. Envtl. Protection Agency (May 10, 2002), available at [hereinafter Tozzi Letter] (arguing that EPA cannot bar from consideration in its risk assessments "third-party" human volunteer clinical studies, and that this refusal to consider the "best available evidence" violates OMB's Data Quality Guidelines).
11. See, e.g., 5 U.S.C. § 552 (2000) (requiring agencies to make their opinions, rules, records, and public information publicly available); 5 U.S.C. § 553(c) (2000) (requiring the opportunity for public comment on rulemakings); 5 U.S.C. § 706(2)(A) (2000) (instructing that the reviewing court will set aside any agency actions, findings, or conclusions found to be arbitrary and capricious based on the administrative record).
12. At least one sociologist of science has noted the efforts of scientists to recharacterize the demarcation between questions of science and non-science to prevent religious, institutional, and governmental intrusions into their scientific provinces or to further the scientists' "pursuit of authority and material resources." Thomas F. Gieryn, Boundary-Work and the Demarcation of Science from Non-Science: Strains and Interests in Professional Ideologies of Scientists, 48 AM. SOC. REV. 781, 782, 793 (1983).
13. See generally Wagner, supra note 7, at 1628-50 (providing examples of agencies making various policy choices that appear scientifically ordained). The National Research Council has suggested that EPA's failure to make science and policy choices explicit in its cryptic preambulatory explanations might not only prevent the agency from making accessible and consistent policy decisions, but could "undercut the scientific credibility of the agency's risk assessments." COMM. ON RISK ASSESSMENT OF HAZARDOUS AIR POLLUTANTS, NAT'L RESEARCH COUNCIL, SCIENCE AND JUDGMENT IN RISK ASSESSMENT 105 (1994); see also id. at 137 (stating that when "[t]he predictive accuracy and uncertainty of the methods and models used for risk assessment are not clearly understood or fully disclosed" by EPA, the agency evades critical but necessary scientific review both from within and outside the agency).
14. See infra notes 126-29 and accompanying text.
15. As discussed below, the good-science reforms do nothing to address these research deficiencies. See infra Part IV.A.1. For a discussion of problems with EPA's prioritization for scientific research, see MARK R. POWELL, SCIENCE AT EPA: INFORMATION IN THE REGULATORY PROCESS 112-17 (1999).
16. See, e.g., Mary L. Lyndon, Information Economics and Chemical Toxicity: Designing Laws to Produce and Use Data, 87 MICH. L. REV. 1795, 1825-27 (1989) (observing that under some circumstances, existing regulatory programs can create incentives for manufacturers to remain ignorant rather than invest in developing information on the long-term safety of their products).
17. See, e.g., ROBERT W. ADLER ET AL., THE CLEAN WATER ACT TWENTY YEARS LATER 33 (1993) (observing that the "[l]ack of federal leadership has resulted in the complete absence of monitoring in some states and in substantial variations in testing methods and closure standards"); STEERING COMM. ON IDENTIFICATION OF TOXIC & POTENTIALLY TOXIC CHEMICALS FOR CONSIDERATION BY THE NAT'L TOXICOLOGY PROGRAM, NAT'L RESEARCH COUNCIL, TOXICITY TESTING: STRATEGIES TO DETERMINE NEEDS AND PRIORITIES 48, 84-85 (1984) (listing a number of chemicals in various categories, seventy-four percent (over 48,000) of which are identified as largely unregulated "chemicals in commerce," and observing that because of more limited regulation, toxicity testing is more sporadic on this large subset of chemicals).
18. In justifying the Data Access Amendment (the "Shelby Amendment"), Senator Shelby stated: "Public confidence in the accuracy and reliability of information being used to drive public policy ultimately is in the best interest of scientific research. Increasing access to such data promotes the transparency and accountability that is essential to building public trust in government actions and decision-making." Richard Shelby, Accountability and Transparency: Public Access to Federally Funded Research Data, 37 HARV. J. ON LEGIS. 371, 379 (2000).
19. The Shelby Amendment was passed as a rider to the Omnibus Consolidated and Emergency Appropriations Act for Fiscal Year 1999, Pub. L. No. 105-277, 112 Stat. 2681 (1998), and requires OMB to amend Circular A-110 to require "[f]ederal awarding agencies to ensure that all data produced under an award will be made available to the public through the procedures established under the Freedom of Information Act."
20. Senator Shelby, the author of the Amendment, maintains that there were floor discussions of the legislation that took place before the requirement became part of a voluminous appropriations bill. See Shelby, supra note 18, at 378-79. He also recounts efforts to amend or suspend the bill. Id. at 380. He does not, however, explain why it was passed as a rider to an appropriations bill rather than as stand-alone legislation.
21. Id. at 389 (concluding that the Shelby Amendment and "[t]he final revision to Circular A-110 are crucial steps forward in giving the American people access to the research and science behind federal policies and rules").
22. See Office of Mgmt. & Budget, Circular A-110, "Uniform Administrative Requirements for Grants and Agreements with Institutions of Higher Education, Hospitals, and Other Non-Profit Organizations," 64 Fed. Reg. 54,926, 54,926 (1999) [hereinafter Circular A-110] (announcing the final guidelines and discussing the comments received on the draft guidelines).
23. See id. at 54,930; see also Shelby, supra note 18, at 383-89 (discussing with concern the various ways in which OMB narrowly interpreted the Shelby Amendment to limit its reach).
24. Treasury and General Government Appropriations Act for Fiscal Year 2001 § 515, Pub. L. No. 106-554, 114 Stat. 2763 (2000). From the oral history surrounding its passage, it appears that most Members of Congress were unaware of the Act's content or existence. See, e.g., NAT'L ACAD. OF SCI., ENSURING THE QUALITY OF DATA DISSEMINATED BY THE FEDERAL GOVERNMENT: WORKSHOP # 1, at 32 (April 21, 2002), available at [hereinafter NAS, DATA QUALITY TRANSCRIPT, DAY 1] (comments of Alan Morrison) (stating that the Data Quality Act "came up as part of a very large appropriations act that most people didn't even know contained this particular piece of legislation in it"). It also appears from the oral history that it was an industrial lobbyist and not a congressional staffer who drafted and guided the rider through Congress. See, e.g., James T. O'Reilly, The 411 on 515: How OIRA's Expanded Information Roles in 2002 Will Impact Rule-Making and Agency Publicity Actions, 54 ADMIN. L. REV. 835, 840 n.20 (2002) ("Discussion at the American Bar Association Fall Administrative Law Conference dinner . . . honoring past directors of the OIRA, suggested that Jim Tozzi, former OIRA director, had been the principal drafter of the 515 language.").
25. The mandated OMB Guidelines interpreting the DQA were promulgated as final in February 2002. Office of Mgmt. & Budget, Guidelines for Ensuring and Maximizing the Quality, Objectivity, Utility, and Integrity of Information Disseminated by Federal Agencies; Republication, 67 Fed. Reg. 8452 (Feb. 2, 2002) [hereinafter "Data Quality Guidelines"]. The Act also requires agencies to promulgate their own guidelines to ensure compliance with the Act and OMB's guidelines. See Pub. L. No. 106-554, § 515(b)(2)(A), 114 Stat. 2763 (2000).
26. See, e.g., Data Quality Guidelines, supra note 25, at 8452 ("OMB guidelines apply stricter quality standards to the dissemination of information that is considered 'influential.'"). If the information disseminated by an agency is "influential," the agency must "generally require sufficient transparency about data and methods that an independent reanalysis could be undertaken by a qualified member of the public." Id. at 8460.
27. See id. at 8459 (providing that "agencies shall establish administrative mechanisms allowing affected persons to seek and obtain, where appropriate, timely correction of information maintained and disseminated by the agency that does not comply with OMB or agency guidelines"). Unlike OMB's guidelines implementing the Data Access Amendment, OMB might have actually enlarged the reach of the Data Quality Act through its Guidelines. See, e.g., NAS, DATA QUALITY TRANSCRIPT, DAY 1, supra note 24, at 133 (comments of Dan Cohen) (observing that OMB added a substantive appeal process in its Guidelines that was not required in the original Data Quality Act).
28. See, e.g., Flue-Cured Tobacco Coop. Stabilization Corp. v. EPA, 313 F.3d 852, 861 (4th Cir. 2002) (holding that, to be reviewable as "final agency action" under the Administrative Procedure Act ("APA"), information disseminated by an agency must have "direct consequences" for the challenger); infra note 193. There is uncertainty regarding whether DQA complaints will themselves be reviewable before a final rule is promulgated, although a plausible argument has been made that disputed information is reviewable regardless of whether there has been a final agency action. Professor O'Reilly observes, for example, that,
[Section 515] is not an amendment of the [Paperwork Reduction Act ("PRA")], so its mechanisms are not inhibited by the PRA's ban on judicial review of decisions to approve collections of information[. R]ather, the 515 mechanisms are like those of the normal APA adjudication of petitioners' claims about an agency decision, evoking the 1979 Chrysler v. Brown issues on the roughly parallel set of 'reverse-Freedom of Information Act' disputes.
Jim O'Reilly, Biting the Data Quality Bullet: Burdens on Federal Data Managers Under New Section 515, ADMIN. & REG. L. NEWS (ABA, Washington, D.C.), Summer 2002, at 2; see also NAS, DATA QUALITY TRANSCRIPT, DAY 1, supra note 24, at 22-23 (comments of John D. Graham) (noting the uncertainty of judicial review and speculating that "it will probably take a few critical court decisions before we know how this law and the associated guidelines will be interpreted by judges"); id. at 73-74 (comments of Alan Morrison) (speculating that under the Data Quality Act, courts will not hold "de novo review" of the science even though it is "theoretically possible" that they could); id. at 114-17 (comments of Fred Anderson) (speculating that parties will be able to get judicial review of agency information independent from a final rulemaking); id. at 143-44 (comments of Dan Cohen) (concluding that an agency's ruling on a correction request is a final agency action subject to judicial review); id. at 173-74, 181-83 (comments of Professor Pierce) (expressing initial skepticism about whether courts can review challenges to agency information, and then later conceding that judicial review might be possible under limited circumstances). Regardless, it seems clear that complaints filed against information that forms the basis of a rulemaking will be evidence against that rulemaking if the rule is challenged under the APA. See, e.g., NAS, DATA QUALITY TRANSCRIPT, DAY 1, supra note 24, at 69 (comments of Alan Morrison) (suggesting that the Data Quality Act "surely cannot change the substantive law standards. But it may provide some additional guidance to the courts"); O'Reilly, supra note 24, at 850 (arguing that "a section 515 dispute [might possibly] affect the general deference of courts toward the factual aspects of agency rule-making"). 
Commentators disagree over whether these data quality complaints will differ in any meaningful way from critiques of agency data that are embedded in comments during the notice and comment process. See, e.g., NAS, DATA QUALITY TRANSCRIPT, DAY 1, supra note 24, at 228-32 (comments of Fred Anderson and Professor Richard Pierce) (disagreeing over whether the new DQA procedures will involve a "new whole universe of verification" or instead constitute a minor change in existing processes for using science).
29. There are several proposals for some form of "regulatory Daubert" in the literature. See, e.g., E. Donald Elliott, Alan Charles Raul, Richard J. Pierce Jr., Thomas O. McGarity, & Wendy E. Wagner, Dialogue, Science, Agencies, and the Courts: Is Three a Crowd?, 31 Envtl. L. Rep. (Envtl. L. Inst.) 10,125, 10,129-32 (Jan. 2001) (comments of Alan Raul) (sketching out the parameters of a proposal for "importing Daubert-type principles into judicial review under the APA"); Paul S. Miller & Bert W. Rein, "Gatekeeping" Agency Reliance on Scientific and Technical Materials after Daubert: Ensuring Relevance and Reliability in the Administrative Process, 17 TOURO L. REV. 297, 297 (2000) (arguing that the principles in Daubert require federal courts reviewing administrative actions to enforce the same "gatekeeper" standards as those courts now require when reviewing a trial court's treatment of scientific and technical evidence); Alan Charles Raul & Julie Zampa Dwyer, "Regulatory Daubert": A Proposal to Enhance Judicial Review of Agency Science by Incorporating Daubert Principles into Administrative Law, 66 LAW & CONTEMP. PROBS. 7, 8 (Autumn 2003) ("The fundamental goal of 'regulatory Daubert' is, quite simply, to encourage reviewing judges to be less deferential, and thus more probing, of agency science and related administrative justifications for regulatory action."); D. Hiep Truong, Daubert and Judicial Review: How Does an Administrative Agency Distinguish Valid Science from Junk Science?, 33 AKRON L. REV. 365, 370 (examining "the possibility of using the Daubert standard to effectuate a more meaningful judicial review of an agency's determination of risks" and arguing that "[b]y using the Daubert standards, a reviewing court is simply treating an agency like a testifying expert"); Charles D. Weller & David B. Graham, New Approaches to Environmental Law and Agency Regulation: The Daubert Litigation Approach, 30 Envtl. L. Rep. (Envtl. L. Inst.) 
10,557, 10,566-72 (July 2000) (providing detailed recommendations for how the courts can incorporate Daubert into their review of agency science, including using Daubert hearings for the review of certain agency actions); Andrew Trask, Comment, Daubert and the EPA: An Evidentiary Approach to Reviewing Agency Determinations of Risk, 1997 U. CHI. LEGAL F. 569, 569 (1997) (arguing that "applying the Daubert standard to judicial review of agency determinations would create an important check on agency decisionmaking, while still allowing EPA the discretion it requires to make effective environmental policy").
30. Raul and Dwyer have suggested that the regulatory-Daubert standard would require courts to defer to agency decisions only if "the agency used methodologies and procedures that were reliable and scientifically valid, the scientific evidence relied upon was relevant for the issues before the agency," and four other criteria were met. See Raul & Dwyer, supra note 29, at 26. This would require the courts to make a threshold, Daubert-like decision about the quality of the agency's science. For more radical proposals that include use of a Daubert hearing to review some agency actions, see Weller & Graham, supra note 29, at 10,568.
31. 206 F.3d 1286 (D.C. Cir. 2000) (vacating EPA's zero "maximum contaminant level goal" ("MCLG") standard for chloroform in drinking water because it was "arbitrary and capricious" and in excess of statutory authority since it did not account for the "best available evidence," which suggested a non-zero safe level).
32. U.S. Envtl. Protection Agency, National Primary Drinking Water Regulations; Arsenic and Clarifications to Compliance and New Source Contaminants Monitoring, 66 Fed. Reg. 20,580, 20,581 (Apr. 23, 2001) (withdrawing the Clinton Administration's "maximum contaminant level" ("MCL") standard for drinking water of 10 parts per billion and delaying promulgation of a new rule "to allow additional time for review of the science and costing analysis underlying the arsenic in drinking water rule. . . [since] [f]rom an economic standpoint, the new regulation can be expected to have significant impacts on a number of drinking water utilities, especially those serving fewer than 10,000 people in areas of high naturally occurring arsenic").
33. Chlorine Chemistry, 206 F.3d at 1290-91 (concluding that since "[t]he statute requires the agency to take into account the 'best available' evidence[,] EPA cannot reject the 'best available' evidence simply because of the possibility of contradiction in the future by evidence unavailable at the time of action -- a possibility that will always be present"); Peter Waldman, Dangerous Waters: All Agree Arsenic Kills; The Question Is How Much It Takes to Do So, WALL ST. J., Apr. 19, 2001, at A1 ("'At the very last minute, my predecessor made a decision to lower arsenic standards,' said President George W. Bush last month. 'We pulled back his decision, so that we can make a decision based upon sound science.'").
34. In commenting on the Data Quality Act at an NAS workshop, Professor Pierce observed:
[W]hat we know for sure is that the guidelines will have many more unintended effects than intended effects. That is true of every regulatory system that has ever been put in place and every deregulatory system that has ever been put in place. And there is a very good chance that the unintended effects will also be far more important than the intended effects.
NAS, DATA QUALITY TRANSCRIPT, DAY 1, supra note 24, at 175.
35. Cf. Antonin Scalia, The Freedom of Information Act Has No Clothes, REGULATION, Mar.-Apr. 1982, at 14 (discussing FOIA's unintended consequences).
36. See infra notes 54-64 and accompanying text. This trend of "reforming" undefined problems continues. Legislative efforts to create a new deputy administrator for science and technology at EPA, see, e.g., H.R. 64, 107th Cong. (2000), also fail to point to science-quality problems at the agency, although there might be good, unarticulated reasons for creating this type of position.
37. See Office of Mgmt. & Budget, Draft Report to Congress on the Costs and Benefits of Federal Regulations, 67 Fed. Reg. 15,014, 15,023 (Mar. 28, 2002) [hereinafter OMB, Draft Cost-Benefit Report] (reporting that there are "roughly 4500 regulatory actions that occur on average each year" and that of the small subset that are considered "economically significant," about seventy percent are promulgated by three agencies -- the Department of Health and Human Services, the Department of Agriculture, and EPA).
38. For the purposes of this discussion, "unreliable science" is broadly defined to include studies for which data has been fabricated or is inaccessible for unjustified reasons, or for which flawed methods or bias make the study of little value. In testimony before a House "strengthening science" hearing held in 2000, one of the experts, Robert Huggett, a former assistant administrator at EPA's Office of Research and Development, testified that he "saw at no time in [his] tenure at EPA [an instance] where politics actually changed a scientific finding." Strengthening Science at the U.S. Environmental Protection Agency-National Research Council (NRC) Findings: Hearing Before the House Subcomm. on Energy and Env't, 106th Cong., 106-97, at 46 (2000) [hereinafter Strengthening Science Hearing]. When pressed by Representative Ken Calvert, Huggett also denied that there was a perceived or actual problem with the quality of EPA's science. He concluded that, to the extent there is such a perception: "I have heard that more in this House than any other place." Id. at 47. The other invited expert, Dr. Morrison, a member of the NAS panel that produced the Strengthening Science report, denied that EPA's science is biased or of poor quality: "I think there may be disagreements between the regulatee and the regulator as to what the right science is but I don't think there is any question about the quality." Id. at 47; see also Paul A. Locke, Legal Answer to a Scientific Question, ENVTL. F., Nov./Dec. 2000, at 60 (observing that although "[t]here is widespread belief that many of the problems surrounding law-science interactions are attributable to junk science[,] [t]his hypothesis is largely unexamined . . . [,] [and] there is no empirical evidence to support this conclusion").
39. See, e.g., DAN FAGIN & MARIANNE LAVELLE, TOXIC DECEPTION: HOW THE CHEMICAL INDUSTRY MANIPULATES SCIENCE, BENDS THE LAW, AND ENDANGERS YOUR HEALTH 33-50 (1996) (discussing evidence of fraud and bias in industry-conducted and industry-sponsored studies on the safety of substances); Thomas O. McGarity, Beyond Buckman: Wrongful Manipulation of the Regulatory Process in the Law of Torts, 41 WASHBURN L.J. 549, 559-63 (2002) (detailing incidents in which data required to be submitted by manufacturers or their contractors under the Federal Insecticide, Fungicide, and Rodenticide Act ("FIFRA") and the Federal Food, Drug, and Cosmetic Act ("FDCA") were either withheld or were misleading or fraudulent); cf. SHELDON KRIMSKY, SCIENCE IN THE PRIVATE INTEREST: HAS THE LURE OF PROFITS CORRUPTED THE VIRTUE OF BIOMEDICAL RESEARCH? (2003) (discussing this problem throughout the book and providing considerable support).
40. The only clear example of bad science used in a rulemaking was EPA's conscious but indisputably wrong decision not to adjust its model for assessing exposure to potentially hazardous air pollutants to account for the properties of particular pollutants. See Chem. Mfrs. Ass'n v. EPA, 28 F.3d 1259, 1264 (D.C. Cir. 1994). See generally SHEILA JASANOFF, THE FIFTH BRANCH: SCIENCE ADVISERS AS POLICYMAKERS 3-4, 20-38 (1990) (acknowledging the "perception" that agency science is bad, citing examples of scientific misconduct by contractors, industry, and the agencies, and concluding that, except for examples of scientific fraud, most of the "bad science" criticisms "had relatively little to do with the competence or incompetence of agency officials," but instead turned on whether the agency acted in a way that was overly precautionary).
41. Specifically, the expert panel found that "[a] perception exists that regulations based on unsound science have led to unneeded economic and social burdens," but that "[d]espite substantial efforts on the part of the agency, most people outside EPA do not recognize that EPA has made strong science a priority." EXPERT PANEL ON THE ROLE OF SCIENCE AT EPA, U.S. ENVTL. PROTECTION AGENCY, SAFEGUARDING THE FUTURE: CREDIBLE SCIENCE, CREDIBLE DECISIONS 18 (1992) [hereinafter SAFEGUARDING THE FUTURE]; see also id. at 24, 37 (noting a "perception" that science is adjusted to fit policy at EPA).
42. See id. at 24 (finding that "EPA, like many other scientific organizations, does not give sufficient attention to validating the models, scientific assumptions, and databases it uses") (emphasis added). In Chemical Manufacturers Ass'n, the substance in dispute, MDI, was a solid at room temperature, but EPA nevertheless persisted in modeling its dispersion using a generic air dispersion model that only applies to substances in a gaseous form. 28 F.3d at 1264.
43. See SAFEGUARDING THE FUTURE, supra note 41, at 4-9 (stating that science should enter the decisionmaking process earlier and more often). The body of the report contains more specific findings that support this general finding. In studying how EPA uses science in decisionmaking, the panel expressed concern that "EPA has not always ensured that contrasting, reputable scientific views are well-explored and well-documented from the beginning to the end of the regulatory process." Id. at 36. The panel also observed that "[s]cientists at all levels throughout EPA believe that the agency does not use their science effectively." Id. at 38.
45. All fifteen of the NAS recommendations were directed at one or more of these concerns. See id. at 4-16.
46. See, e.g., id. at 15-16. And while this report provided the impetus for subsequent legislative efforts to create a new position at EPA -- a deputy administrator for science and technology -- even the statements of purpose for these bills lack any discussion of science competency problems at EPA. See, e.g., H.R. 64, 107th Cong. (2000) (referring only to a general need to better coordinate science within EPA).
47. To the extent that any theme emerges with regard to science and regulation, it appears to be that EPA's management of its scientific research and internal communications is adequate, but not as strong as it could be. For example, the NAS report provides a detailed description of the studies conducted on EPA laboratories, grant programs, and in-house scientists. These assessments are positive on the quality of the science conducted by EPA, particularly at the laboratories. NAS, STRENGTHENING SCIENCE, supra note 44, at 58-60. Criticisms are directed primarily toward management issues arising from the reorganization of the Office of Research and Development and the retention of young talent. Id. at 58-81.
48. During a two-day workshop sponsored by the National Academy of Sciences, there was no discussion of either general or specific problems with the agency's science, with the possible exception of a discussion of some corrections to facility identifications and addresses under EPA's correction system. See NAS, DATA QUALITY TRANSCRIPT, DAY 1, supra note 24; NAS, DATA QUALITY TRANSCRIPT, DAY 2, supra note 9. At the "Strengthening Science" hearings held in the House in 2000 -- which were convened to discuss doubts about "EPA's commitment to sound science" -- both of the invited experts testified that management and funding changes were the primary means of "strengthening science" at EPA. Both experts expressly disagreed with Representative Ken Calvert that science at EPA is politicized or unsound. See Strengthening Science Hearing, supra note 38, at 1, 4-43, 47.
49. See Strengthening Science Hearing, supra note 38.
50. Id.
51. Mark Powell provides several sources of evidence on the quality of science at EPA: surveys of persons in and outside the agency, literature reviews, and in-depth case studies. With the exception of small pockets of dissatisfaction or errors, Powell's detailed discussion provides little evidence of the agency using science that is of poor quality. Instead, his study suggests that the quality of EPA's science (not the prioritization of funding, the transparency of the agency's actions, and the like) is adequate to good. In response to surveys of persons within and outside the agency, for example, Powell found that only 8% of those rating EPA's science described it as "poor." Forty-four percent characterized EPA's science as "fair," 33% rated it as "good," and 14% rated it as "very good." POWELL, supra note 15, at 129. Respondents also indicated that the quality of EPA's science had improved over time. Id. ("Of those who supplied a rating, 47% of respondents rated EPA's current use of science as good to very good, compared with 31% ten years ago."); see id. at 131 (reporting continued improvements to EPA's science through the Browner administration); id. at 81, 85 (praising or quoting persons praising the quality of EPA's science). Some respondents did complain that EPA did not use science enough in its decisions, but when this response is put in context with the other survey questions, Powell interprets it in large part to reflect a frustration over the rampant, unaddressed scientific uncertainties endemic to environmental and public health regulation. Id. at 120. It might also result from frustration with congressional mandates, such as technology-based standards, which make science largely irrelevant to environmental regulation. Id. at 119 (reporting that eighteen out of twenty-six respondents said that mandates and regulatory approaches impede EPA's use of science). 
Powell's eight case studies of high-profile science-based regulatory analyses do provide evidence of isolated internal communication problems within the agency that at times seemed significant, even though they were ultimately corrected. But the case studies, taken as a whole, further reinforce the impression that EPA does a competent job of evaluating available science and integrating it into its decisionmaking and that much of the turmoil comes in integrating indecisive scientific information into policy decisions about the appropriate stringency of regulation. See, e.g., id. at 178-79 (1991 lead/copper drinking water rule); id. at 220-24 (1995 decision not to revise arsenic in drinking water standard); id. at 257-58 (1987 revision of the NAAQS for particulates); id. at 278-79 (1993 decision not to revise the NAAQS for ozone); id. at 296-97 (1983-1984 suspension of EDB under FIFRA); id. at 317-18 (1989 asbestos ban and phaseout rule under the Toxic Substances Control Act ("TSCA")); id. at 364-67 (control of dioxins under the Clean Water Act ("CWA")); id. at 387-88 (Superfund cleanups of lead at mining sites).
52. See GREENWOOD, supra note 8, at 15 (concluding based on detailed examination of OSHA and EPA that, contrary to common perception, these agencies are not scientifically incompetent, and attributing any regulatory inadequacies to the institutional difficulties of dealing with uncertainty); JASANOFF, supra note 40, at 246 (positing that the claims of bad science are misplaced and that controversies over the quality of agency science arise when there is an adversarial atmosphere that destroys any opportunity to form a consensus on appropriate science, especially how to resolve uncertainties); BRUCE L. R. SMITH, THE ADVISORS: SCIENTISTS IN THE POLICY PROCESS 100 (1992) (concluding after a detailed study of EPA's Science Advisory Board ("SAB") that "[i]ts main contribution has been not to provide novel and independent advice, but to provide validation and support to the agency's leaders as they have sought to foster a new culture and bureaucratic outlook").
53. The agencies' lack of transparency was a common theme, especially their overstating the technical bases for policy choices or providing insufficient characterizations of uncertainty. See POWELL, supra note 15, at 33 (relaying industry criticism of EPA's Integrated Risk Information System ("IRIS") database because it did not provide sufficient characterization of uncertainties). A more subtle problem that emerged was the elongated and imperfect chain of communication through which available science passes from EPA staff to the administrator. See supra note 41 and accompanying text.
54. See Shelby, supra note 18, at 375-76 (justifying the Data Access Amendment on the need to provide public access to data used in the Six Cities Study, much of which was confidential, and citing delayed release of some studies and an error in a 1986 study on the herbicide 2,4-D by the National Cancer Institute that was subsequently corrected). These limited examples provided by Senator Shelby involve disputes over an agency's policy decisions, such as when or whether to release a government study, rather than its scientific competency. See id. (describing how delays in publishing studies on the effects of Agent Orange and radioactive testing affected veterans and the public). For an expanded discussion of the Six Cities Study (an epidemiological study showing a correlation between morbidity and the concentration of fine particulates in ambient air in six cities), as well as the various confidentiality problems with granting access to the data underlying the Six Cities Study, see NATIONAL RESEARCH COUNCIL, ACCESS TO RESEARCH DATA IN THE 21ST CENTURY 10-11 (2002) [hereinafter NRC, DATA ACCESS REPORT]. The only example provided by Senator Shelby that has bearing on agency science competency concerns an epidemiological study conducted by the National Cancer Institute on the effects of the herbicide 2,4-D. The study linked 2,4-D use with cancer, but used survey techniques that were unable to distinguish the effects of 2,4-D from those of other herbicides used by study participants. As Senator Shelby notes, industry quickly pointed out the error, and the agency corrected it. The incident does, however, provide an example of misreported results. Shelby, supra note 18, at 375-76; see also infra note 60 and accompanying text (discussing the secondary literature supporting data access that provides additional, generally unhelpful examples, some of which Shelby cites but does not discuss).
55. See infra note 60 and accompanying text.
56. See Raul & Dwyer, supra note 29, at 9-13, 19-20 (providing a description of the "pervasive criticisms of EPA's science," most of which take issue with the agency's conservative policy choices, a problem discussed at Part V, infra); id. at 9 & n.12, 10 & n.15, 19-20, 34 (providing as the only concrete examples of "bad science" at EPA: (1) a study by a nonprofit criticizing EPA's scientific support for its regulation of PCBs; (2) a quote by Congressman Calvert in introducing hearings on "strengthening science" remarking on "many specific instances" of problems with EPA's science, even though Calvert himself supports this statement only with three unreferenced illustrations; and (3) at best four instances, two of which (Flue-Cured and Chlorine Chemistry) are contestable, in which courts reversed agency scientific determinations because the science was not reliable or the agency had under-utilized the available science). Raul and Dwyer also lament the dearth of science relevant to regulation. See, e.g., id. at 12, 44. This, however, is a concern that does not go to the quality of regulatory science but to the incentives for the production of new research. Most other regulatory-Daubert proponents openly concede that the target problem is EPA's over-regulation of industry. See, e.g., Truong, supra note 29, at 368 (candidly observing that the need for more rigorous, "Daubert-like" review "boils down to this: cost. . . . 
[R]isk-assessment based regulations run the risk of spinning out of control and wreaking havoc on the economy"); Weller & Graham, supra note 29, at 10,558-59 (focusing exclusively on the problem of "overprotection" and how standards are more stringent than they need to be); Trask, supra note 29, at 570 (identifying over-regulation, inconsistency between agencies in their policies for protective standards, the blurring of science and policy, and the need to act on uncertainty as reasons that justify the adoption of regulatory Daubert); see also Miller & Rein, supra note 29, at 297 (failing to identify current problems with agency science, even though the article concludes with a four-page draft executive order requiring agencies to implement "Daubert principles"). Only Weller and Graham provide any reference to problems with the quality of agency science, and, as discussed above, these particular justifications are not compelling.
57. The Center for Regulatory Effectiveness maintains a comprehensive website that appears to provide all of its public communications, lawsuits, media correspondence, and position papers. A relatively thorough search, however, revealed no examples of bad science in regulation. See The Center for Regulatory Effectiveness, at (last visited Sept. 5, 2003); see also Tozzi Letter, supra note 10.
58. See Data Quality Guidelines, supra note 25, § IV.6.
59. See, at (last visited Sept. 5, 2003).
60. The first, and most prevalent, charge against the quality of agency science is that there is not enough science to justify protective regulation. (Unfortunately, the authors' standards for the requisite evidence are never articulated, and there are no citations to statutes. This is a recurring theme in the good-science literature. See supra note 56). For example, Senator Inhofe lists what he considers the "ten worst regulations." Of those, six focus on the adequacy of the scientific support for regulations and conclude that the agency was too precautionary given the limited scientific evidence supporting the regulation. Inhofe argues that (1) EPA's tailpipe emission limits are too stringent because the epidemiological study upon which the limits were based is statistics, not science; (2) EPA should have raised the chloroform MCLG standard under the Safe Drinking Water Act ("SDWA") because a study suggests that there is a threshold dose above zero that is safe; (3) evidence linking hypoxia in the Gulf of Mexico to upstream fertilizer use in the Midwest is not adequately supported to justify regulating fertilizer use; (4) EPA's decision to regulate agricultural biotechnology products derived from gene splicing is costly and overly protective; (5) regulators' assignment of blame to poultry farmers for fish kills in the Chesapeake Bay was not supported by sufficient evidence; and (6) EPA should take into account the effects of degradation by microorganisms in its regulation of solid and hazardous wastes. Senator James Inhofe et al., Big Government and Bad Science, Nov. 30, 1999, at

Most of the charges of "bad science" are brought by the self-proclaimed "junkman," Steven Milloy. Despite the misleading title of "junk science," however, Milloy's critiques almost uniformly take issue with protective values rather than the quality of agency science. See, e.g., Michael Gough & Steven Milloy, Cato Policy Analysis No. 366, The Case for Public Access to Federally Funded Research Data, Feb. 2, 2000, at 10-11, available at (criticizing the National Institutes of Health's ("NIH") use of bad science in the regulation of IUDs, apparently based on the lack of definitive evidence of cause and effect and the weak statistical correlations in the studies); Steven Milloy, Is the FDA's PPA Scare BS?, Nov. 10, 2000, available at (criticizing the FDA's decision to remove phenylpropanolamine ("PPA") from cold medicine, appetite suppressants, and other drug products as alarmist since extrapolating population risks from a pivotal Yale study on PPA involved controverted policy judgments and, in any case, "[the] level of risk [one stroke per 107,000 to 3,268,000 women] is so small that it's essentially immeasurable in the real world"); Steven Milloy, IV-Bag Scare Drips Junk Science, July 19, 2002, at,2933,58115,00.html (criticizing a Food and Drug Administration ("FDA") warning concerning intravenous bags and tubes made of di-2-ethylhexyl phthalate ("DEHP") because this "precautionary" warning was based only on studies of adverse effects on rodents); Steven Milloy, The Fat Police Indict Margarine, July 12, 2002, available at,2933,57486,00.html (criticizing the FDA's decision to require the labeling of foods containing trans fats as unduly precautionary since the only evidence consisted of laboratory and clinical studies showing that these fats increase cholesterol levels, not heart disease); id. 
(making the same criticism of overly protectionist action by the FDA based on incomplete evidence of the hazards of fen-phen and laxative products containing phenolphthalein); see also STEVEN MILLOY, SCIENCE WITHOUT SENSE: THE RISKY BUSINESS OF PUBLIC HEALTH RESEARCH 1, 29-31, 35-36 (1995) (criticizing EPA's precautionary risk assessments for Superfund, environmental tobacco smoke, and radon, but failing to provide supporting scientific or legal arguments).

In keeping with his attack on the agencies' policy choices, Milloy and his co-authors also periodically disagree with the agencies' weight-of-the-evidence conclusions, although they fail to provide a balanced account of what the totality of the evidence actually was in these rulemakings. See, e.g., J. Gordon Edwards & Steven Milloy, 100 Things You Should Know About DDT, 1999, available at (providing a lengthy list of selected findings and quotations from the scientific and general literature in arguing that DDT is not harmful to humans); Steven Milloy, When Environmental and Political Science Clash, Dec. 7, 2001, available at,2933,40256,00.html (arguing that former EPA administrator Whitman's decision to dredge PCBs from the Hudson River was not based on "sound science," but not offering any support for what that "sound science" is in the case of PCB-contaminated sediments).

Finally, Milloy and his colleagues condemn the use of laboratory, clinical, and epidemiological studies as the basis for regulatory decisionmaking, yet provide no discussion of the types of science they would find acceptable to support regulation (and based on their conclusions, it appears no research would be satisfactory). See, e.g., Steven Milloy, A Win for West Nile-By Two Rats, May 12, 2000, available at (quoting EPA's conclusion that there is "suggestive evidence of carcinogenicity [of malathion] but not sufficient to assess human carcinogenic potential," and arguing that EPA's risk assessment is junk science because "[t]here is no evidence malathion causes cancer in humans; but, as it turns out, two laboratory rats more than expected got cancer when given unrealistically high doses of malathion"); Steven Milloy, EPA Lung Cancer Study Based on Faulty Data, Mar. 8, 2002, available at,2933,47372,00.html (arguing that a recent epidemiological study published in the Journal of the American Medical Association that linked exposure to particulates to lung cancer could not be relied upon because "researchers couldn't possibly draw the conclusion that fine particulates cause lung cancer from this study because it was only statistical in nature, not scientific."); Steven J. Milloy & Michael Gough, The EPA's Clean Air-ogance, Jan. 7, 1997, available at (arguing that the epidemiological studies underlying EPA's particulate standard for the NAAQS were merely statistical and did not show a "real association"). But see id. at 8-10 (arguing that the herbicide 2,4-D is not carcinogenic because studies were unable to detect a statistical increase in risk related to 2,4-D exposure separate from the exposure to other substances). Milloy and his co-authors are also intolerant of scientific errors that are conceded and corrected by agencies, even when those corrections appear to be made in a timely fashion. See, e.g., id. at 3-5, 5-8.
61. See, e.g., Shelby, supra note 18, at 376 (stating that Senator Shelby's inability to get the Six Cities data from EPA led him to sponsor the Data Access Amendment).
62. NRC, DATA ACCESS REPORT, supra note 54, at 8-12.
63. Id. at 10-11.
64. Id. at 12.
65. For example, even twenty years ago, a single "pre-manufacturer notification" rule proposed by EPA under the Toxic Substances Control Act (which would require manufacturers to notify EPA prior to large-scale production of toxic substances) "brought forth 192 commentors at 29 public meetings, and 300 pages of comment raising 40 discrete issues, each requiring an EPA response. [EPA's] defense comprised 300 pages of response, 800 pages of economic analysis performed at a cost of $600,000, and 500 pages of related analysis on regulatory impact." Douglas M. Costle, Brave New Chemical: The Future Regulatory History of Phlogiston, 33 ADMIN. L. REV. 195, 199 (1981).
66. Science advisory boards are mandatory for EPA's promulgation of air quality standards and for regulatory action on pesticides. See 7 U.S.C. § 136w(d)-(e) (2000) (requiring the scientific advisory panel established under FIFRA to review the scientific basis for major regulatory proposals concerning pesticides and to adopt peer-review procedures for scientific studies carried out pursuant to FIFRA); 42 U.S.C. § 7409(d)(2)(B)-(C) (2000) (establishing the Clean Air Scientific Advisory Committee ("CASAC") to review EPA's ambient air quality standards). The Consumer Product Safety Commission ("CPSC") and the FDA must also submit to mandatory peer review, although the FDA need do so only for its review of medical devices. See 15 U.S.C. § 2080(b) (2000) (requiring mandatory peer review by the legislatively created Chronic Hazard Advisory Panel); see also 42 U.S.C. § 4365(c)(1) (2000) (establishing a science advisory board to review scientific and technical information relevant to any proposed action under EPA's authority if EPA is forwarding the proposal to any other federal agency for formal review). The FDA, EPA, and OSHA each has advisory bodies available to it for various regulatory activities, but the agencies are not required to seek their assistance. See, e.g., 42 U.S.C. § 4365 (2000) (creating a Science Advisory Board to assist EPA in its research initiatives and science-based regulatory determinations). EPA's Science Advisory Board ("SAB") has played an increasingly influential role in reviewing the agency's science.
In 1989, SAB estimated that 50% of EPA's major activities in one form or another are debated, reviewed, or influenced by SAB. Currently, SAB and CASAC's ten combined committees include some 100 members augmented by 300 ad hoc consultants and hold approximately fifty meetings and publish about thirty reports each year.
POWELL, supra note 15, at 40 (citations omitted). See generally JASANOFF, supra note 40, at 61-83 (discussing peer review of agency science and highlighting the role played by science advisory boards); infra notes 78-81 (discussing other mechanisms in place to provide for peer review of agency science).

The agencies have also developed, without legislative direction, a variety of peer review and science consensus panels. The NIH, for example, has developed an innovative expert panel to reach consensus on issues of medical import. See NRC, DATA ACCESS REPORT, supra note 54, at 24. EPA's use of the Health Effects Institute ("HEI") to review the infamous Six Cities Study is also an example of agencies employing expert panels to address individual scientific controversies. See id. at 11-12 (discussing EPA's role in enlisting the HEI to re-analyze the data in the Six Cities Study).
67. See JASANOFF, supra note 40, at 206 (observing that "[p]erhaps the clearest lesson that emerged" is how science advice to the agencies through consensual advisory boards seems essential to certifying the agencies' scientific conclusions and protecting them from adversarial deconstruction); SMITH, supra note 52, at 71 (concluding that EPA's Science Advisory Panel commissioned under FIFRA "has served a useful purpose in enhancing the quality of internal EPA reviews and in bolstering the agency's public image as a scientifically credible regulator"); id. at 98 (concluding the same for EPA's SAB); see also Lars Noah, Scientific "Republicanism:" Expert Peer Review and the Quest for Regulatory Deliberation, 49 EMORY L.J. 1033, 1047-57 (2000) (discussing the varied types of peer review used by the agencies, their generally positive impact on agency science, and detailing how EPA, the FDA, and the CPSC utilize these various peer-review mechanisms).

In some case studies, researchers also found evidence that science advisory boards assisted in correcting errors or more generally improving the agency's scientific standards and the rigor of its analysis. See, e.g., JASANOFF, supra note 40, at 238 (noting how "external advisory panels called attention to the absence of contemporaneous controls in the Love Canal study and to the failure to record times and levels of exposure in the Alsea basin study of 2,4,5-T"); id. at 241 (noting that "conscientious scientific review by advisory panels certifies that the agency's scientific approach is balanced and rational and that its conclusions are sufficiently supported by the evidence"); see also POWELL, supra note 15, at 43 (reporting on how persons interviewed for the study on science at EPA "gave SAB and CASAC credit for improving EPA's acquisition and use of science").
68. SAFEGUARDING THE FUTURE, supra note 41, at 38.
69. A full committee report of the NAS can cost $500,000, and "[a]n NIH consensus development conference costs about $500,000 and takes approximately 1 year." NRC, DATA ACCESS REPORT, supra note 54, at 25. But see POWELL, supra note 15, at 40 (reporting that the SAB's budget for one fiscal year was "a modest $2.4 million").

The delays posed by peer review are perhaps still more significant for the rulemaking process. See, e.g., JASANOFF, supra note 40, at 207 (describing how in EPA's attempt to regulate formaldehyde under TSCA, protracted risk assessment and review by SAB produced a regulatory result which was so late in coming that significant regulatory control had already been passed to OSHA); POWELL, supra note 15, at 137 (reporting that "a 1994 SAB internal review found that six months often elapse between the last public meeting of an SAB panel and the transmittal of a report to the administrator"); SMITH, supra note 52, at 72 (reporting that under FIFRA, the Science Advisory Panel often delayed agency action without resolving the underlying issues); Thomas O. McGarity, Some Thoughts on "Deossifying" the Rule-making Process, 41 DUKE L.J. 1385, 1409-10 (1992) (documenting how use of scientific advisory boards often slows the pace of rulemakings).
70. See, e.g., JASANOFF, supra note 40, at 229 (observing that "[p]articipation by lay interests is limited and often one-sided, cross-examination is almost unknown, and committee recommendations, however much weight they carry, are seldom accompanied by detailed explanations or consideration of alternatives"); id. at 247 ("[S]ignificant policy decisions, particularly decisions not to act, may be reached after advisory deliberations that effectively engaged only one set of interests [that is, industry's]."); POWELL, supra note 15, at 139 (discussing how external peer review threatens to encroach into policy decisions); SMITH, supra note 52, at 65 (discussing how the advisory board for the Defense Department held biases that were hidden in the seemingly objective "selection of facts, the ordering and interpretation of evidence, and the assessment of the significance to be drawn from analysis, inquiries, or experimental tests"); id. at 74-75 (discussing similar biases on the part of EPA's Advisory Panel under FIFRA).
71. The most vivid account of this lopsided advisory process occurred during the early Reagan years when the Administration's "hit list" for marginalizing ninety scientists inside EPA was leaked to Congress. See, e.g., SMITH, supra note 52, at 92. See generally JASANOFF, supra note 40, at 122, 178 (observing in case studies on EPA, the CASAC, and the FDA that science advisors are able to offer advice on policy, as well as scientific issues); SMITH, supra note 52, at 192 (concluding in his study of science advisory boards that they consistently "struggle with the problem of ensuring that their membership is both representative and balanced"); Linda Greer & Rena Steinzor, Bad Science, ENVTL. F., Jan./Feb. 2002, at 28 (arguing that the selection of members for science advisory panels does not always provide balanced representation of varied interests and that this skews the results of the panels' deliberations); Jeff Johnson, "Reinventing" the EPA Science Advisory Board, 28 ENVTL. SCI. & TECH. 464 (1994) (reporting that EPA staff expressed concern about the diversity of members on SABs and potential conflict-of-interest problems).
72. See POWELL, supra note 15, at 40 (reporting that in 1987, SAB conducted seventy-seven reviews of EPA science and that "[f]rom 1978 to 1994, SAB and CASAC produced over 450 reports, advisories, letters, commentaries, and consultations") (citations omitted).
73. See, e.g., R. SHEP MELNICK, REGULATION AND THE COURTS: THE CASE OF THE CLEAN AIR ACT 356 (1983) (observing that reviewing courts have focused "their review on the quality of the scientific evidence supporting the EPA's standards"). Even the proponents of regulatory Daubert concede that for at least some cases in which the courts review the science underlying agency regulations, the courts apply a "Daubert Review Model" "implicitly." See Miller & Rein, supra note 29, at 316-18; see also Elliott et al., supra note 29, at 10,129 (comments of Alan Raul) (stating that a recent study of the D.C. Circuit, where judicial review is most rigorous, performed by Jonathan Adler shows that the "EPA is reversed frequently on scientific grounds, regardless of the courts' references to 'extreme' deference, on scientific questions. The agency is reversed in the D.C. Circuit a lot, and is not treated with kid gloves"). Several top administrative law scholars have similarly opined that the new requirements under the Data Quality Act might add little to the existing judicial review of agency science. See, e.g., NAS, DATA QUALITY TRANSCRIPT, DAY 1, supra note 24, at 225-26 (comments of Professor Richard Merrill) ("At one level it seems to me that for an agency to adopt the stance of the guidelines in its evaluation of comments is not really fundamentally different from what agencies are now expected to do under the APA, that is, to examine the probative weight and the reliability of the information that is submitted by any commenter."). But see Elliott et al., supra note 29, at 10,129-30 (comments of Alan Raul) (suggesting that judicial review is generally not rigorous and that Daubert is needed to improve judicial scrutiny of agency science).
74. See, e.g., Kenneth S. Abraham & Richard A. Merrill, Scientific Uncertainty in the Courts, ISSUES IN SCI. & TECH., Winter 1986, at 93, 99 (arguing that court review of agency science tends to be "unpredictable and uncontrollable"); see also NRC, DATA ACCESS REPORT, supra note 54, at 17 (recounting panelist David Hawkins' observations that the APA provides access for challenging the legitimacy of an agency's data and opining that if an agency fails to respond to requests for data under the APA, "a court can determine whether there is cause for action").
75. See, e.g., JOHN D. GRAHAM ET AL., IN SEARCH OF SAFETY: CHEMICALS AND CANCER RISK 151 (1988) ("Since the Supreme Court's 1980 benzene decision, federal agencies have felt compelled to use such numerical risk estimates to support both priority-setting and standard-setting decisions."); Frank B. Cross, Beyond Benzene: Establishing Principles for a Significance Threshold on Regulatable Risks of Cancer, 35 EMORY L.J. 1, 12-43 (1986) (arguing that judicial review forces agencies to provide detailed technical explanations for standards); Richard J. Pierce, Jr., Two Problems in Administrative Law: Political Polarity on the District of Columbia Circuit and Judicial Deterrence of Agency Rule-making, 1988 DUKE L.J. 300, 311 (arguing that courts often require "that agencies 'find' unfindable facts and support those findings with unattainable evidence").
76. See, e.g., Richard J. Lazarus, The Neglected Question of Congressional Oversight of EPA: Quis Custodiet Ipsos Custodes (Who Shall Watch the Watchers Themselves?), 54 LAW & CONTEMP. PROBS. 205, 206 (Autumn 1991) (concluding that "Congress appears to engage in more intense and pervasive oversight of EPA than it does of other agencies" and that "the character of [c]ongressional oversight of EPA appears to be consistently adversarial and negative").
77. The process for NAS review of agency science is discussed in an abbreviated form at NRC, DATA ACCESS REPORT, supra note 54, at 23.
78. These mechanisms appear to vary depending on the significance of the study under scrutiny. Routine studies generally require only one level of peer review to assess the rigor of the methods and logic of the study and interpretation. Studies that have greater implications for regulation but are still not expected to have a significant effect are reviewed internally and externally. Finally, studies that are highly influential, such as an assessment of the toxicity of dioxin or the effects of particulates on public health, receive the highest level of review, including several levels of both internal and external peer review.
79. See, e.g., U.S. Envtl. Protection Agency, Draft Data Quality Guidelines, 67 Fed. Reg. 21,234 (proposed Apr. 30, 2002) (describing four separate mechanisms in place to ensure the "quality, objectivity, utility and integrity" of information used and produced by EPA). Although EPA's peer-review system is still weak in some areas, especially in ensuring unbiased external review, it appears to provide important checks on the agency's science. See, e.g., U.S. GENERAL ACCOUNTING OFFICE, GAO/RCED-96-236, PEER REVIEW: EPA'S IMPLEMENTATION REMAINS UNEVEN (1996); see also NAS, DATA QUALITY TRANSCRIPT, DAY 1, supra note 24, at 147 (comments of Ms. Stanley, U.S. Envtl. Protection Agency) (describing EPA's "integrated error correction process" that allows for correction of errors from "data processing, from mislabeling through transactions, from data input mistakes, inaccurate display and incompleteness of data."); NAS, DATA QUALITY TRANSCRIPT, DAY 2, supra note 9, at 47-58 (comments of Dr. Teichman, U.S. Envtl. Protection Agency) (describing extensive peer review and quality assurance/quality control policies); James O'Reilly, Libels on Government Websites: Exploring Remedies for Federal Internet Defamation, 55 ADMIN L. REV. 507, 533-36 (2002) (describing EPA's post-publication error correction system). Although Alan Raul and Julie Dwyer maintain that the problems identified with EPA's peer-review practices "render it insufficient to remedy problems with agency science or to ensure reasoned decisionmaking," support for this statement is not provided in their references. See Raul & Dwyer, supra note 29, at 13.
80. See, e.g., NAS, DATA QUALITY TRANSCRIPT, DAY 1, supra note 24, at 156-63 (comments of Mr. Scanlon, U.S. Dep't of Health & Human Services ("HHS")) (detailing how HHS plans to "interface and integrate" OMB's data quality guidelines with HHS's "own existing, fairly significant and fairly successful set of information dissemination activities, including research and scientific enterprise activities and well-established constituencies and administrative mechanisms across HHS"); id. at 186-96 (comments of Ms. Kirkendall) (describing the existing "good practice" mechanisms already in place for ensuring that information is transparent and reproducible); NAS, DATA QUALITY TRANSCRIPT, DAY 2, supra note 9, at 68-77 (comments of Mr. Rodgers, Federal Aviation Administration ("FAA")) (detailing FAA processes that ensure high quality data). Federally funded research is also governed by scientific misconduct requirements that provide their own set of protections against scientific fraud. See, e.g., 42 U.S.C. § 289b (Supp. 2003) (establishing the Office of Research Integrity to oversee fraud in research conducted with federal monies).
81. See, e.g., POWELL, supra note 15, at 64 (observing, based on interviews of EPA personnel who complain about "particular individuals or groups [within EPA] being either 'Republican holdovers' or 'environmental activists,'" that there is "some partisan heterogeneity among EPA scientists and analysts"); id. at 70-71 (describing how science can be used by competing program offices within EPA in a way that is akin to "bureaucratic jousting" in "the struggle for bureaucratic supremacy"); SMITH, supra note 52, at 90-91 (describing differences in internal political affiliation at EPA from the 1970s through the 1980s that kept the science advisory process unstable and more diversified).

In his study of science policy in Congress, Bruce Bimber concludes that partisan politics helps ensure that the science that ultimately enters congressional debates is of relatively high quality; Congressmen who rely on flawed studies are derided for their scientific incompetence. See BRUCE BIMBER, THE POLITICS OF EXPERTISE IN CONGRESS: THE RISE AND FALL OF THE OFFICE OF TECHNOLOGY ASSESSMENT 3 (1996) (arguing and supporting the position that "expertise is a significant force in [federal] legislative politics," in part because it can withstand partisan politics); see also JASANOFF, supra note 40, at 82-83 ("Regulatory science also tends to be published and debated under more adversarial and less credulous circumstances than research science. Manipulated results or statistical errors are thus less likely to pass undetected through the decision-making process."). The same phenomenon might occur, although less vigorously, inside the agencies.
82. The influence of scientific norms on how agencies do their science was consistently evident in the agencies' reports of existing quality assurances for scientific and statistical information. See supra notes 78-80 and accompanying text.
83. JASANOFF, supra note 40, at 72.
84. See, e.g., NAS, DATA QUALITY TRANSCRIPT, DAY 2, supra note 9, at 18 (comments of Robert O'Keefe, Health Effects Institute) (observing the "exhaustive process" involved in re-analyzing the Six Cities Study and concluding that while "in this case it was widely seen as beneficial and appropriate . . . because these are lynchpin studies . . . . I think most folks would agree [this level of review and re-analysis] is really rarely justified"); see also Jocelyn Kaiser, Endocrine Disrupters: Synergy Paper Questioned at Toxicology Meeting, 275 SCIENCE 1879 (1997) (discussing the scientific controversy surrounding potential human health effects of endocrine disrupters and the uproar created by a recent Tulane study on the subject that was subsequently withdrawn).
85. ROBERT K. MERTON, The Ethos of Science, in ON SOCIAL STRUCTURE AND SCIENCE 267, 272 (Piotr Sztompka ed., 1996) (describing the reward system of science based on recognition by competitors, esteem, and publication).
86. Id. at 274-75 (describing how scientific fraud, which can be exposed because of the verifiability of data, can lead to sanctions within the scientific community); see also NRC, DATA ACCESS REPORT, supra note 54, at 6 ("Bench scientists understand that if they do not report accurately and honestly their methods, results, and conclusions, their reputation within the scientific community could be jeopardized. This reality has always been a powerful force for integrity.") (quoting Dr. Steven Goodman); Glenn Harlan Reynolds, "Thank God for the Lawyers:" Some Thoughts on the (Mis)regulation of Scientific Misconduct, 66 TENN. L. REV. 801, 814 (1999) (summarizing various scientific misconduct investigations and taking note of the humiliation and lost resources suffered by scientists under investigation).
87. See, e.g., THOMAS S. KUHN, THE STRUCTURE OF SCIENTIFIC REVOLUTIONS 24 (1970) (discussing the nature of scientific paradigms and how they create intolerance of scientific discovery); David F. Horrobin, The Philosophical Basis of Peer Review and the Suppression of Innovation, 263 JAMA 1438, 1440-41 (1990) (arguing that peer review weeds out articles that contradict "what everybody knows"); Eliot Marshall, Science Beyond the Pale, 249 SCIENCE 14 (1990) (arguing that peer review is biased against truly innovative ideas).
88. For a disparaging account of how lawyers' efforts to police "scientific misconduct" backfired, see Reynolds, supra note 86.
89. See, e.g., SIDNEY A. SHAPIRO & ROBERT L. GLICKSMAN, RISK REGULATION AT RISK: RESTORING A PRAGMATIC APPROACH 15 (2003) ("When Congress adopted risk regulation, it rejected the common law paradigm in favor of a regulatory system which would reduce technological risks before they caused significant harm to individuals and the environment. Congress accomplished this goal by designing statutory triggers that permit the government to act on the basis of anticipated harm."); see also infra notes 222-25 and accompanying text.
90. See, e.g., Talbot Page, A Generic View of Toxic Chemicals and Similar Risks, 7 ECOLOGY L.Q. 207, 241 (1978) (concluding after detailed modeling efforts of catastrophic environmental risks that "[e]xpected cost minimization and the three modifications appropriate to the special characteristics of environmental risk suggest a more precautionary management of environmental risk").
91. See 21 U.S.C. §§ 348(c)(3)(A), 379e(b)(5)(B) (2000) (stating that an additive may not be listed as safe if it is found to cause cancer in humans or animals). Section 348(c) has been amended by the Food Quality Protection Act ("FQPA") to remove this complete prohibition against carcinogens in pesticide residues found in processed food. See 21 U.S.C. § 321(s) (Supp. 2003).
92. See infra notes 224-25 and accompanying text.
93. See 33 U.S.C. § 1311(b)(2)(C)-(D) (2000) (referencing House Committee Report for list of 126 toxic substances for which technology-based standards must be promulgated under the Clean Water Act); 42 U.S.C. § 7412(b) (2000) (listing 189 air toxins for which technology-based standards must be promulgated).
94. See, e.g., Ethyl Corp. v. EPA, 541 F.2d 1, 24-28 (D.C. Cir. 1976) (en banc), cert. denied, 426 U.S. 941 (1976) (considering challenge to EPA's zero standard for lead in gasoline promulgated under the Clean Air Act, and concluding that given the precautionary nature of the statute, the administrator is not only authorized but arguably obliged to develop a standard in the face of scientific uncertainty).
95. Judge Williams concluded that, based on existing science:
the only concentration for ozone and PM [particulate matter] that is utterly risk-free, in the sense of direct health impacts, is zero. Section 109(b)(1) says that EPA must set each standard at the level 'requisite to protect the public health' with an 'adequate margin of safety.' . . . [Thus], for EPA to pick any non-zero level it must explain the degree of imperfection permitted.
Am. Trucking Ass'ns v. EPA, 175 F.3d 1027, 1034 (D.C. Cir. 1999).
96. See Chlorine Chemistry Council v. EPA, 206 F.3d 1286, 1290-91 (D.C. Cir. 2000) (concluding that the "best available" evidence on chlorine toxicity consisted of an advisory committee report concluding that there is likely to be a safe (non-zero) dose for the carcinogenicity of chlorine); infra Part V.
97. Envtl. Defense Fund v. EPA, 548 F.2d 998, 1004 (D.C. Cir. 1976) (holding that "the Administrator is not required to establish that the product is unsafe in order to suspend registration, since FIFRA places 'the burden of establishing the safety of a product requisite for compliance with the labeling requirements . . . at all times on the applicant and registrant'") (quoting Envtl. Defense Fund v. EPA, 510 F.2d 1292, 1297 (D.C. Cir. 1975)); id. at 1004-05 ("Reliance on general data, consideration of laboratory experiments on animals, etc. has been held a sufficient basis for an order canceling or suspending the registration of a pesticide. . . . Conversely, the statute places a heavy burden of explanation on an Administrator who decides to permit the continued use of a chemical known to produce cancer in experimental animals.") (quotation marks and citations omitted). Although there are no judicial interpretations of the 1996 Food Quality Protection Act, this statute also appears to require the agency to make protective assumptions in the face of scientific uncertainty and to require the registrants to establish that pesticide residues are safe. See 21 U.S.C. § 346a(a) (2000) (requiring pesticide tolerances to be set at a level that ensures a "reasonable certainty of no harm").
98. 42 U.S.C. §§ 6901-6992k (2000).
99. 42 U.S.C. §§ 9601-9675 (2000).
100. 42 U.S.C. §§ 11,001-11,005 (2000).
101. See, e.g., 42 U.S.C. §§ 6922-6924 (2000) (imposing increasingly onerous regulatory requirements on generators, transporters, and treatment, storage, and disposal facilities that handle hazardous wastes); 42 U.S.C. §§ 9603(a), 9607(a) (imposing reporting requirements and liability for cleanup on facilities that owned or handled hazardous substances); 42 U.S.C. § 11,023 (2000) (imposing various reporting requirements on facilities that handle or dispose of more than a threshold amount of specified hazardous substances).
102. See infra note 222 and accompanying text.
103. Like the Clean Air and Clean Water Acts, the substances that require toxic release reporting under the EPCRA were also specified by Congress in the authorizing legislation, and the agency is free of any requirement to establish their harm before requiring facilities to comply with the statute. See 42 U.S.C. § 11,023(c) (2000). But Congress has stated that EPA can add a substance to the list when it "is known to cause or can reasonably be anticipated to cause" a list of possible adverse effects. See id. at § 11,023(d)(2)(B)-(C) (emphasis added). As a result, EPA has listed whole categories of chemicals without specific evidence of the toxicity of individual chemicals within those categories. See, e.g., Troy Corp. v. Browner, 120 F.3d 249, 260 (D.C. Cir. 1997) (holding that the agency was not "arbitrary and capricious" in listing some EPCRA chemicals by chemical family without evidence of the toxicity of individual chemicals).
104. In an NAS workshop, agency representatives and other commentators discussed a wide range of ways to implement the Data Quality Act, as well as the range of interpretations about how compliance with the statute might be enforced by third parties. Several speakers expressly underscored the great uncertainty surrounding the effects of the statute on agency practices. See, e.g., NAS, DATA QUALITY TRANSCRIPT, DAY 1, supra note 24, at 174 (comments of Professor Pierce) (predicting with the "greatest confidence" that "it is going to be decades before we actually know all of the effects of the guidelines. We can't know those effects until scores of action agencies have adopted their own guidelines and then applied those guidelines in a very large number of cases."); NAS, DATA QUALITY TRANSCRIPT, DAY 2, supra note 9, at 4-5 (comments of Mr. Taylor) (remarking on how the data quality requirements could have minimal effects on agencies or could "bring about very fundamental change"). By tracing the possible adverse effects of these reforms, I do not mean to suggest that their effects are clear or certain. On the other hand, and particularly at this early stage in the life of the reforms, it is useful to begin to model the ways that they can work to thwart, rather than improve, the regulatory process. This same goal appeared to inspire the series of workshops sponsored by the National Academies. See supra notes 9, 24, 54.
105. To the extent that the reforms are not implemented or are reconfigured to be largely superfluous, the predicted adverse effects seem unlikely to materialize. Since the reforms are proposed in ways that suggest they are intended to be anything but superfluous, however, it will be assumed that the reforms will make some difference to the status quo. See, e.g., Shelby, supra note 18, at 383-89 (arguing that OMB did not develop guidelines that were as aggressive as intended). To anticipate the potential mischief the reforms might cause, it is also assumed for the sake of argument that the reforms will be implemented aggressively.
106. See, e.g., JASANOFF, supra note 40, at 37 ("[I]n a politicized environment such as the U.S. regulatory process, the deconstruction of scientific 'facts' into conflicting, socially constrained interpretations seems more likely to be the norm than the exception."). Jasanoff points out that this deconstruction even occurs within the scientific community over questions such as the meaning of "good science." Id. at 76 (noting in a particular case study how prominent scientists with differing disciplinary backgrounds "can disagree profoundly on something so basic to 'good science' as the appropriate method of selecting and reporting on controls").
107. There seems to be a general consensus that science typically serves as a good fairy to political actors, including agencies, and that they tend to use it to justify their activities whenever possible. See, e.g., POWELL, supra note 15, at 148 ("Whether unconsciously or wittingly, policymakers often seek legitimacy or political cover for their judgments from science or scientists."); MARK E. RUSHEFSKY, MAKING CANCER POLICY 6 (1986) ("Science, in its regulatory incarnation, is used to forward political goals by all sides in the disputes."); Wagner, supra note 7, at 1617 (arguing that agencies engage in a "science charade" "in order to avoid accountability for underlying policy decisions").
108. In one of the most significant data quality challenges to date, for example, industry petitioners argue that EPA cannot include in its risk assessment a series of studies published by Dr. Tyrone Hayes and colleagues at the University of California, Berkeley, that observed significant endocrine effects on frogs exposed to low levels of atrazine, an herbicide. See Kansas Corn Growers Association, the Triazine Network, & the Center for Regulatory Effectiveness, Request for Correction of Information Contained in the Atrazine Environmental Risk Assessment, Docket No. OPP-34237A, at 2 (Nov. 25, 2002), available at [hereinafter Atrazine Petition].
109. Under the Data Quality Act, the act of interpreting studies and disseminating the results appears to require agencies to ensure that their analysis is transparent and objective. Simply summarizing the results of a study, without any interpretation or review, however, seems to bypass the requirement -- since the agency is simply repeating the literature -- and would automatically satisfy OMB's requirements for objectivity. See Data Quality Guidelines, supra note 25, § V.3.a (objectivity is ensured when an agency presents information in "an accurate, clear, complete, and unbiased" way); NAS, DATA QUALITY TRANSCRIPT, DAY 1, supra note 24, at 97 (comments of Alan Morrison) (suggesting that the requirements kick in only after the agency "puts its own interpretation" on studies since that constitutes a "new generation and hence dissemination of information").

There is also the more practical problem of how to disseminate an agency's decision to correct information. Especially if complaints and appeals arise continuously and pertain to identical information, agencies might find their information in a constant state of flux. See, e.g., NAS, DATA QUALITY TRANSCRIPT, DAY 1, supra note 24, at 141 (comments of Dan Cohen) (remarking that if an agency corrects information and then changes its mind, "we take down that notice and we have really confused the hell out of everybody basically because nobody knows now what the status the particular item of information might be in at the time that they are looking at the web site or what the information was at the time that they had looked at the web site").
110. Professor McGarity has described the "ossification" of rulemakings mandated by Congress. See generally McGarity, supra note 69. Procedural changes in agencies' voluntary dissemination practices seem almost certain to slow or halt those efforts. For reinforcement of speculation along these lines, see NAS, DATA QUALITY TRANSCRIPT, DAY 1, supra note 24, at 86-87 (comments of Alan Morrison) ("[T]here will almost certainly be some slowing down of the dissemination of information. Quality assurances cannot come without some price."); O'Reilly, supra note 24, at 845-56 (speculating that "agency staff, who had used dissemination of information as a painless, remedy-less vehicle, might now be held back by the pressures for accuracy").
111. See infra notes 136-39 and accompanying text.
112. See supra Part III.C. (discussing the low evidentiary burden agencies encounter under most precautionary statutes). It appears that the Data Quality Act cannot be used to force an agency to disseminate information, although the Center for Regulatory Effectiveness has argued that the OMB Guidelines "legally require" agencies to consider all evidence pertaining to an issue. See, e.g., Tozzi Letter, supra note 10. This does not, however, necessarily preclude agencies from promulgating regulations when the evidence is thin.
113. See Data Quality Guidelines, supra note 25, § V.3.b.ii ("If an agency is responsible for disseminating influential scientific . . . information, agency guidelines shall include a high degree of transparency about data and methods to facilitate the reproducibility of such information by qualified third parties."). Advice on how to avoid this higher burden for influential information was one of the primary messages at the first NAS workshop on the Data Quality Act held in March 2002. Various speakers from within and outside the agencies stressed the importance of avoiding classifying information as "influential" whenever possible because of the much higher burden imposed on the agency to ensure the information's validity. See, e.g., NAS, DATA QUALITY TRANSCRIPT, DAY 1, supra note 24, at 48-52 (comments of Alan Morrison) (advising agencies to avoid labeling information as influential except in exceptional circumstances); id. at 181 (comments of Professor Pierce) (advising agencies that because influential information is "subject to heightened duties," they should work backwards and consider whether they can meet the higher burden). Workshop participants also recognized that avoiding the "influential" classification would impose countervailing costs on the agency, since it makes the agency's work appear less significant. See, e.g., id. at 189 (comments of Ms. Kirkendall) (suggesting that the Government Performance and Results Act is the "only reason I can see to define [information] influential"); id. at 196 (comments of Mr. Siskind, U.S. Dep't of Labor) (observing that "if I were to dare suggest that nothing at the Department of Labor was influential, I would be at OPM tomorrow morning filling out my retirement papers").
114. See, e.g., NAS, DATA QUALITY TRANSCRIPT, DAY 1, at 176-77 (comments of Professor Pierce) (predicting that agencies will find ways to avoid the DQA guidelines and that among those tricks will be replacing "many disseminations of information with expressions of opinions by agency officials" and "tak[ing] a lot of information off of their web sites and provid[ing] it to private parties putatively unrelated to them and [putting] links on their web sites to those private parties' web sites").
115. Professor Jasanoff underscores these types of dangers in discussing the implications of social constructionism for legal efforts to delineate good science from bad. For example, she notes that sociologists of science have observed that a consensus of scientists needs to coalesce around a methodology before it is considered valid, but even then "the knowledge that a 'thought collective' or scientific subculture holds in common may be impossible to reduce to formal principles; rather, it may consist in large part of tacit knowledge, experience, and skill." Sheila Jasanoff, Research Subpoenas and the Sociology of Knowledge, 59 LAW & CONTEMP. PROBS. 95, 99 (Summer 1996). Jasanoff concludes that as a result of the socially constructed nature of science,
[s]cientific claims are especially prone to being pulled apart, or deconstructed, when they are used to justify significant legal or political decisions. . . . An expert witness who happens not to subscribe to the governing consensus can sow reasonable doubt about the reliability of particular scientific practices, the adequacy or completeness of specific interpretive frameworks, the comparative saliency of different kinds of evidence, and the credibility of competing experts. In short, litigation presents not so much a contest between 'true' and 'false' beliefs as a test of the strength and unanimity of the prevailing consensus.
Id. at 100.
116. Meta-analysis is "the statistical analysis of a large collection of . . . results from individual studies for the purpose of integrating findings." G.V. Glass, Primary, Secondary, and Meta-Analysis of Research, 5 EDUC. RESEARCHER 3 (1976). Assessments of cumulative risks, for example, rely on a number of different studies and necessarily require multiple policy choices and assumptions to extrapolate from individual studies to a final result.
117. See, e.g., Memorandum from Christine Todd Whitman, Administrator, U.S. Envtl. Protection Agency, to Assistant, Associate, and Regional Administrators and the Science Policy Council 1 (Feb. 7, 2003), available at ("As you know, EPA's Information Quality Guidelines have become more effective, and our use of models has become more visible to our stakeholders and the public."); see also POWELL, supra note 15, at 104 (observing that "[w]hile there is broad agreement in principle that cumulative effects are appropriate considerations in regulatory decision-making, there is no consensus regarding the best practices for doing so, which probably contributes to the controversy surrounding Superfund's use of science"); infra notes 167-74 and accompanying text.
118. The Competitive Enterprise Institute ("CEI"), for example, has filed two DQA petitions demanding that the National Oceanic and Atmospheric Administration ("NOAA") (and, by association, the White House Office of Science and Technology Policy) revoke its model for predicting climate change. The CEI challenged a number of inevitable assumptions and methodological choices made by NOAA that are endemic to most modeling exercises. See Competitive Enterprise Institute, Petitions to Cease Dissemination of the National Assessment on Climate Change, Feb. 20, 2003, available at. CEI sought judicial review of its denied petitions in August 2003. See Press Release, Competitive Enterprise Institute, CEI Global Warming Suit Draws Ire of Northeast States Attorneys General (Aug. 23, 2003), available at,03598.cfm.
119. Cf. infra note 214 (recounting Justice Breyer's criticisms of tunnel vision in agency decisionmaking).
120. See generally NRC, DATA ACCESS REPORT, supra note 54, at 8-12 (describing the history and re-analysis of the Six Cities Study).
121. Commentators at the NAS workshop were unable to resolve questions about how the study would have fared under the new Data Quality Act requirements, but it appeared clear that those requirements would have resulted in added delay and numerous compliance questions. See NAS, DATA QUALITY TRANSCRIPT, DAY 1, supra note 24, at 212-16; see also NAS, DATA QUALITY TRANSCRIPT, DAY 2, supra note 9, at 18 (presentation of Robert O'Keefe) (noting that if the re-analysis of the Six Cities Study had come after the Data Quality Act was passed, it "might have actually slowed down a regulatory decision, which can carry a different set of problems and is probably to be avoided").
122. The agencies' success in forging ahead depends in large part on whether the courts can review agency information before a rulemaking is final. For speculation on this issue, see supra note 28.
123. See supra note 75 (describing agency paralysis after the Supreme Court's ruling in the "Benzene Case," Industrial Union Department v. American Petroleum Institute, 448 U.S. 607 (1980)).
124. It is conceivable that stakeholders might argue that the OMB Guidelines require agencies to revise rules when new information becomes available that suggests such a revision. It is far too early to predict how this argument might be received by the courts, but if it were given credence, the amount of litigation against agency rules could increase exponentially given the seemingly infinite arguments regarding the point at which a rule should be revised. See Tozzi Letter, supra note 10.
125. See, e.g., NAS, DATA QUALITY TRANSCRIPT, DAY 1, supra note 24, at 178 (comments of Professor Pierce) (noting that agencies will find that it is "hard to [correct bad information] quickly and effectively because [the agency] is disseminating information . . . that . . . complies with the Guidelines").
126. It took nearly thirty years for EPA to develop guidelines for conducting ecological risk assessments. See U.S. Envtl. Protection Agency, Guidelines for Ecological Risk Assessment, 63 Fed. Reg. 26,846 (May 14, 1998).
127. See, e.g., Thomas O. McGarity, Politics by Other Means: Law, Science, and Policy in EPA's Implementation of the Food Quality Protection Act, 53 ADMIN. L. REV. 103, 142-43 (2001) (discussing how EPA requires neurotoxicity testing only on a subset of pesticides because of the expense of these types of tests); cf. POWELL, supra note 15, at 30 ("According to a former senior EPA official, the pesticides program is the only regulatory area that routinely considers noncancer health effects.").
128. There is no systematic collection of ambient air-quality data for toxins: while monitoring of various hazardous air pollutants occurs around the country, these recent monitoring programs are fragmented and typically collect only a small portion of the larger universe of hazardous air pollutants. See AIR TOXICS MONITORING STRATEGY SUBCOMM., U.S. ENVTL. PROTECTION AGENCY, AIR TOXICS MONITORING CONCEPT PAPER (2000), available at. There is limited toxicity testing available for most chemicals in commerce, although the high production volume (HPV) chemical testing challenge will help with a subset of these chemicals. See generally ENVTL. DEFENSE FUND, TOXIC IGNORANCE 15 (1997) (reporting that seventy-one percent of the chemicals produced in high volumes in the random subsample did not have toxicity testing available that would satisfy the minimum OECD standards); NAT'L RESEARCH COUNCIL, TOXICITY TESTING: STRATEGIES TO DETERMINE NEEDS AND PRIORITIES 84 tbl. 7, 94 tbl. 10, 117 tbl. 20 (1984) (concluding that no toxicity data were available for approximately eighty percent of the tens of thousands of chemicals in commerce); SOCMA Letter, Concept Paper on Alternative Approaches for HPV Chemical Testing Dated Dec. 22, 1998, 8 Daily Envtl. Rep. (BNA), at E-1 (Jan. 13, 1999) (noting that HPV testing will miss chemicals produced in lower volumes that might be of much higher risk). In 1992, the Council on Environmental Quality concluded that the paucity of data available on the quality of surface waters "preclude[s] assessing an overall national water quality trend." COUNCIL ON ENVTL. QUALITY, THE TWENTY-SECOND ANNUAL REPORT OF THE COUNCIL ON ENVIRONMENTAL QUALITY 187 (1992); see also ADLER ET AL., supra note 17, at 33 ("Lack of federal leadership has resulted in the complete absence of monitoring in some states and in substantial variations in testing methods and closure standards. Only four states use EPA's recommended testing method.") (citing 1992 and 1993 Natural Resources Defense Council ("NRDC") reports).
129. This is clearly the crux of the most significant data quality petitions filed to date. In the atrazine petition, see supra note 108, industry petitioners argued that the Berkeley research on the endocrine effects of atrazine should be excluded from EPA's risk assessment until EPA formally validated the protocol and the research. In the petition against NOAA's climate change model, see supra note 118, petitioners argue that the model is invalid because of contestable assumptions and methodological choices, but they offer no superior alternative models or methods. In a petition filed by members of Congress challenging EPA's decision to temporarily exempt the oil and gas industry from Clean Water Act regulations, the petitioners argue that the agency's documented basis for the exclusion was insufficient and that the agency needed more rigorous documentation to exclude these particular polluters. See Letter from Jim Jeffords to Christine Todd Whitman, Administrator, U.S. Envtl. Protection Agency (March 6, 2003), available at. These diverse challenges to EPA information might be acceptable in the context of a rulemaking, but in isolation and considered wholly apart from a mandate, they seem to drive the agency towards documenting each decision in tedious detail irrespective of the demands of the relevant statute.
130. See, e.g., NAS, STRENGTHENING SCIENCE, supra note 44, at 27-28 (pointing out that research funding through EPA's research arm, the Office of Research and Development, is approximately seven percent of the agency's total budget and graphically showing how the funding has remained relatively flat from 1980 to 2000, even though EPA's larger budget has fluctuated); POWELL, supra note 15, at 149-50 (recommending, based on book-length study of science at EPA, that EPA's science budget be increased substantially to provide the agency with needed research support).

Over the past year, funding conditions appear to be getting worse rather than better. On February 28, 2002, EPA announced that President Bush's 2003 Fiscal Year Budget did not request funds for the STAR graduate fellowship grant program for the first time in the history of the program, which was established in 1995. See U.S. Envtl. Protection Agency, Star Grant News, at (last visited Sept. 5, 2003). The program, which awards $10 million a year to students pursuing graduate degrees in environmental science, policy, and engineering, is the "only federal program specifically designed to support top students going into environmental science," according to David Blockstein of the National Council for Science and the Environment. Christine Todd Whitman, then-Administrator of EPA, defended the program last year, stating that it "successfully engage[s] the best environmental scientists . . . from academia." Dr. Daniel I. Rubenstein, chairman of the ecology and evolutionary biology department at Princeton, has said that "if the goal is to formulate policy . . . based on science so that it is made effective, then this program is a way to ensure that the next generation of scientists are in the pipeline." Over 1350 applications had been made for the 2003 program. The program's funding was apparently discontinued as part of a move to merge financing for environmental education under the National Science Foundation. White House Ends Environmental Fellowship, N.Y. TIMES, Apr. 14, 2002, at A32.
131. See, e.g., Clayton P. Gillette & James E. Krier, Risk, Courts, and Agencies, 138 U. PA. L. REV. 1027, 1038 (1990) ("Commonly, producer firms simply won't have good information about risk (often because, as we shall see, they are not stimulated to have it) or, if they do, won't act on it or share it with typically underinformed consumers and employees."); Lyndon, supra note 16, at 1810-17 (describing market failure in safety testing of chemical products).
132. Even the allocation of federal research dollars might become more politicized, reinforcing existing knowledge gaps in certain areas of environmental and public health regulation to stave off future regulations or legislation.
133. Unless a regulated entity is certain of the outcome, conducting toxicity and other research on products and activities seems as likely to increase regulatory requirements as to avoid them, since the results could indicate that a substance or activity is even more harmful than originally supposed. See Lyndon, supra note 16, and accompanying text.
134. Most private research might be exempt from the Data Quality Act, however. See infra Part IV.A.2.b.
135. Cf. Joseph Sanders, The Bendectin Litigation: A Case Study in the Life Cycle of Mass Torts, 43 HASTINGS L.J. 301, 337 (1992) (describing Merrell's research conducted after the Bendectin litigation as a "lose-lose proposition" because "if they showed an effect, the studies would be used against the company" and if they did not "[a]ny slight technical flaw in the design or execution of the experiment would be exploited by plaintiffs to undermine Merrell's findings").
136. See, e.g., NRC, DATA ACCESS REPORT, supra note 54, at 2 (reporting that scientists oppose the Data Access Amendment "on the grounds that it would invite intellectual property searches by industry and scientific competitors, jeopardize the privacy of research subjects, decrease the willingness of research subjects to participate in studies, expose researchers to deliberate harassment, and increase costs and paperwork"); id. at 14 (reporting the view of an invited speaker, Dr. Bruce Alberts, that "there is a danger that the [Data Access A]mendment could be used to harass scientists whose work is found objectionable by anyone, for any reason"); id. at 20 (recounting similar concerns from another invited speaker, Judge Jack Weinstein); Frederick R. Anderson, Science Advocacy and Scientific Due Process, ISSUES IN SCI. & TECH., Summer 2000, at 71 (advocating a balanced approach to ensuring the credibility of scientific information because of the dangers of abusing legal tools "to harass and intimidate researchers by impugning their integrity or motives, chill new research, increase the costs of research, and deter volunteers for research").
137. See, e.g., NRC, DATA ACCESS REPORT, supra note 54, at viii (recounting the concerns of both organizations); Letter from Albert Teich, American Association for the Advancement of Science, to Brooke Dickson, Office of Management & Budget 4 (August 13, 2001) (on file with author) (expressing concern that "the draft guidelines as proposed by OMB, however well intentioned, will have a deleterious effect on scientific research"). The potential for abusing the good-science reforms to harass scientists is possible for all three reforms. While data access requests are the most obvious form of harassment, the Data Quality Act might provide a more indirect but equally damaging vehicle for diverting scientists from their work -- for example, through requests for original data or demands that an agency acquire such data to ensure that a study is reproducible. See, e.g., NAS, DATA QUALITY TRANSCRIPT, DAY 1, supra note 24, at 28-30 (comments of Kevin Bromberg, U.S. Small Business Administration, and John D. Graham, Office of Information and Regulatory Affairs, Office of Mgmt. & Budget) (discussing this hypothetical without resolving it). Complaints can also be used to discredit a researcher. See infra note 138.
138. In the data quality petition filed to exclude Dr. Hayes' atrazine studies from EPA's atrazine risk assessment, industry petitioners complained that "Dr. Hayes has killed and continues to kill thousands of frogs in unvalidated tests that have no proven value." Atrazine Petition, supra note 108, at 8. Hayes' scientific credibility is similarly questioned in a number of related critiques. See, e.g., Press Release, Center for Global Food Issues, Frog Sex-Change Claims Flawed (Oct. 30, 2002), available at; Steven Milloy, Freaky-Frog Fraud (Nov. 8, 2002), available at,2933,69497,00.html; Steven Milloy, Frog Study Leaps to Conclusions (Apr. 19, 2002), available at,2933,50669,00.html.

The Daubert proposal could pose similar risks, since it could require the disclosure of underlying data and implicate original research scientists, requiring them to defend their work. Cf. Anthony Z. Roisman & Ned I. Miltenberg, "Two Strikes and You're Out:" How to Make Sure Lawyers Adequately Protect Your Reputation in Court and Prevent You From Being Ostracized as a "Junk Scientist," SF97 ALI-ABA 283, 283-294 (2001) (describing how Daubert has been used to exclude the testimony of distinguished scientists and discredit reputable scientists, listing thirty grounds for excluding testimony and observing that, as a result, "scientists should be prepared to explain and defend -- in excruciating detail -- every option they considered, every choice they made (and which ones they rejected), and everything they did (or chose not to do)"). Other good-science reforms, such as federal programs governing scientific misconduct and the ability to subpoena third-party scientists and their data, have been abused in the past to harass scientists and, in at least one case, to shut down a research project. RJ Reynolds served third-party subpoenas on Dr. Paul Fischer and his co-author requesting all documents, including confidential records, related to the ongoing study of the effects of the Joe Camel advertising campaign on children. A consultant with the company also charged Fischer and his colleague with engaging in scientific misconduct, charges that were ultimately resolved in the co-authors' favor. The harassment led Dr. Fischer to resign his tenured post at the Medical College of Georgia and return to family practice in a nearby community. See Paul M. Fischer, Science and Subpoenas: When Do the Courts Become Instruments of Manipulation?, 59 LAW & CONTEMP. PROBS. 159, 159 (Summer 1996); see also Steven Picou, Compelled Disclosure of Scholarly Research: Some Comments on "High Stakes Litigation," 59 LAW & CONTEMP. PROBS. 149, 155 (Summer 1996) (describing how third-party subpoenas he received in connection with research relevant to the Exxon oil spill litigation "permanently disrupted" his research project "due to the constant need to respond to motions and affidavits," how Exxon worked to deconstruct his research to undercut the plaintiffs' evidence, and how this called into question his professional integrity).
139. In an interview, one of the CRE's staffers conceded that the letters were intended as a thinly veiled threat to federal research funding. See Industry Data Quality Warning to Universities Draws Sharp Response, INSIDE EPA (Inside Washington Publishers, Arlington, Va.), Aug. 22, 2003 (on file with author) ("If we really start to invoke this, millions of federal government research dollars couldn't be used. . . . We've been nice up to now. Rounds two and three, we'll be more direct."); see also NRC, DATA ACCESS REPORT, supra note 54, at 14 (reporting presenter Dr. Bruce Alberts's concern that the data access provision could be abused in ways that might ultimately "discourage the best young people from choosing careers in science"); cf. id. at 27 (comments of Richard Merrill) (noting that many researchers might not be aware of the reporting requirements and might be caught off-guard).
140. See, e.g., Data Quality Guidelines, supra note 25, at 8455 (citing a study in Science questioning the value of journal peer review in improving the quality of research papers and concluding that "OMB believes that additional quality checks beyond [journal] peer review are appropriate").
141. See, e.g., ROGER TRIGG, RATIONALITY & SCIENCE: CAN SCIENCE EXPLAIN EVERYTHING? 109 (1993) ("One of the prized virtues of science is its objectivity, its apparent willingness to be led by evidence alone and not by prejudice."); Robert K. Merton, The Normative Structure of Science, in THE SOCIOLOGY OF SCIENCE 267, 275 (1973) (identifying honesty, objectivity, and disinterestedness as norms constituting the universal "methods of science").
142. See, e.g., DARYL E. CHUBIN & EDWARD J. HACKETT, PEERLESS SCIENCE: PEER REVIEW AND U.S. SCIENCE POLICY 2 (1990) (defining peer review as "an organized method for evaluating scientific work which is used by scientists to certify the correctness of procedures, establish the plausibility of results, and allocate scarce resources (such as journal space, research funds, recognition, and special honor)"). OMB's peer-review guidelines similarly emphasize the need for reviewers to identify and avoid conflicts of interest and bias. See Memorandum from John D. Graham, Administrator, Office of Information and Regulatory Affairs, to the President's Management Council (Sept. 20, 2001), available at (expressing OMB's recommendation that peer reviewers disclose prior technical/policy positions and their sources of personal and institutional funding).
143. See, e.g., Memorandum from Elissa R. Karpf, Deputy Assistant Inspector General for External Audits, U.S. Envtl. Protection Agency, to Assistant and Regional Administrators, EPA OIG Report No. 1999-P-217 (1999), available at (emphasizing the need to ensure the independence of peer reviewers); see also CHUBIN & HACKETT, supra note 142, at 94 (arguing that caprice and bias, such as favoring famous authors, might play a larger role than the quality of the authors' work); JASANOFF, supra note 40, at 69-71 (discussing studies that purport to show the influence of various forms of bias on the peer-review process).
144. For example, in her research on EPA's effort to develop cancer guidelines, Professor Jasanoff observes that "[t]he adversarial rule-making approach, in which scientists functioned as just another interest group, failed . . . to build the kind of technical support the agencies needed. It was only when scientists were consulted through the channels of consensus workshops and regulatory peer review that their opinions began coalescing in ways that were supportive of policy." JASANOFF, supra note 40, at 206; see also The Regulatory Improvement Act of 1999: Hearing on S. 746 Before the Senate Comm. on Governmental Affairs, 106th Cong. 33, 117-18 (1999) (comments of John Graham, Director, Harvard Center for Risk Analysis) ("Some scientists currently serve as hired consultants to specific stakeholder groups but the testimony of stakeholder groups is not a substitute for independent, objective peer review.").
145. Professor Jasanoff cites H.M. Collins, who coined the term "experimenter's regress" to describe the ability of those who attempt to replicate an experiment and fail to generate "a potentially endless series of questions designed to probe every aspect of the 'failed' experiment, from the reliability of the instruments used to the honesty of the researchers." Jasanoff, supra note 115, at 99 (footnotes omitted). Yet replication problems might not be the result of a bad researcher or bad research, but instead simply the result of the use of methods that have not yet achieved "consensus" and thus remain vulnerable to attack. Id. ("Agreement is required to establish that one result is 'the same' as another, because absolute identity in experimental conditions or in results can never be achieved. . . . Standardization of research protocols and evaluation criteria is one widely accepted safeguard against 'experimenter's regress.'"); see also JASANOFF, supra note 40, at 83 (observing that "a contentious regulatory environment may promote indefinite deconstruction of scientific claims and may limit the possibilities for negotiated settlement of technical arguments"); cf. NRC, DATA ACCESS REPORT, supra note 54, at 5 (recounting comments by Dr. Steven Goodman about the "shades of gray" quality of the truth of scientific studies and observing that "although specific scientific research may support the hypothesis that a particular claim is true or false, it is important to realize that the researcher's original hypothesis is subject to revision over time").
146. JASANOFF, supra note 40, at 81. Professor Jasanoff goes on to observe that "[i]t has been amply documented that technically trained adversaries can exploit uncertainties in the scientific knowledge base to construct evaluations consistent with their political objectives." Id. "Many of the factors that diminish the credibility of peer review in the research setting seem likely to exert an even more negative influence in the regulatory environment." Id. at 79.
147. See id. at 62 (arguing more generally that because of the socially constructed nature of science, interjecting peer review into an adversarial regulatory setting is not likely to prove fruitful).
148. See supra notes 29-30 and accompanying text (describing this reform). To the extent that the Data Quality Act provides judicial review on challenges, it might resemble Daubert in extending to the courts the role of final peer reviewer. See supra note 28.
149. See, e.g., PROJECT ON SCIENTIFIC KNOWLEDGE AND PUBLIC POLICY, DAUBERT: THE MOST INFLUENTIAL SUPREME COURT RULING YOU'VE NEVER HEARD OF (June 2003), available at (identifying errors in courts' Daubert rulings); David S. Caudill & Richard E. Redding, Junk Philosophy of Science?: The Paradox of Expertise and Interdisciplinarity in Federal Courts, 57 WASH. & LEE L. REV. 685, 765-66 (2000) (arguing that "[t]he definition of science announced in Daubert was ambiguous," and concluding that there is considerable variation in how the lower courts apply the vague standard); Devra Lee Davis, The "Shotgun Wedding" of Science and Law: Risk Assessment and Judicial Review, 10 COLUM. J. ENVTL. L. 67, 86, 92, 98 (1985) (discussing inconsistent judicial review of agency risk assessments); Elliott et al., supra note 29, at 10,135-36 (comments of Professor McGarity) (questioning judicial competence to review agency risk assessments); Adina Schwartz, A 'Dogma of Empiricism' Revisited: Daubert v. Merrell Dow Pharmaceuticals, Inc. and the Need to Resurrect the Philosophical Insight of Frye v. United States, 10 HARV. J. L. & TECH. 149, 177-79, 196 (1997) (arguing that the Daubert Court's understanding of the philosophy of science was badly misguided); cf. Abraham & Merrill, supra note 74, at 94 (identifying three separate approaches courts take in reviewing agency risk assessments -- deference, avoidance, and confrontation).
150. Int'l Harvester Co. v. Ruckelshaus, 478 F.2d 615, 650-51 (D.C. Cir. 1973) (Bazelon, C.J., concurring) ("Socrates said that wisdom is the recognition of how much one does not know. I may be wise if that is wisdom, because I recognize that I do not know enough about dynamometer extrapolations, deterioration factor adjustments, and the like to decide whether or not the government's approach to these matters was statistically valid.") (footnote omitted).
151. See Elliott et al., supra note 29, at 10,137 (comments of Professor Pierce) (urging caution in adopting regulatory Daubert because judges are less competent than agencies, unlike the situation in civil jury trials).
152. See, e.g., Abraham & Merrill, supra note 74 (describing these divergent approaches to judicial review as constituting much of the universe of judicial review).
153. In underscoring why judges are less competent in science than agencies, Professor Pierce describes a passage in the Supreme Court's opinion in the Benzene Case, where the plurality
gives an illustration of a risk it considers real bad, and a risk it considers trivial. Anyone who has had Toxicology 101, even if they got a D in it, can see that the risk that the court calls trivial is much larger than the risk the court calls plainly unacceptable. I don't want fools like that messing around with science, and that's the best of our judiciary.
Elliott et al., supra note 29, at 10,137 (comments of Professor Richard J. Pierce) (discussing Indus. Union Dep't v. Am. Petroleum Inst., 448 U.S. 607 (1980)) (footnotes omitted); see also Gulf S. Insulation v. United States Consumer Prod. Safety Comm'n, 701 F.2d 1137, 1146 (5th Cir. 1983); JASANOFF, supra note 40, at 49 (observing that "[j]udicial review produced a paradigm for resolving science policy disputes, but one that appears in hindsight to have been founded on an overly simplified view of both science and policy"); Nicholas A. Ashford et al., A Hard Look at Federal Regulation of Formaldehyde: A Departure from Reasoned Decisionmaking, 7 HARV. ENVTL. L. REV. 363-68 (1983) (critiquing the court's review of science in Gulf South); cf. Carl F. Cranor & David A. Eastmond, Scientific Ignorance and Reliable Patterns of Evidence in Toxic Tort Causation: Is there a Need for Liability Reform?, 64 LAW & CONTEMP. PROBS. 5, 26-34 (Autumn 2001) (describing the courts' unimpressive scientific competency in screening expert evidence under Daubert).
154. The tendency of D.C. Circuit judges to misuse procedural requirements as a mechanism for "selectively" invalidating agency regulations the judges find politically unpalatable has been documented by several prominent scholars. See, e.g., Pierce, supra note 75, at 303-07 (documenting courts' demonstrated political biases in applying what should be objective procedural rules in reviewing agency rulemakings); Richard L. Revesz, Environmental Regulation, Ideology, and the D.C. Circuit, 83 VA. L. REV. 1717, 1717 (1997) (concluding that a judge's ideology does impact judicial decisionmaking, especially in reviewing agency rulemakings under process-based requirements); Emerson H. Tiller, Controlling Policy by Controlling Process: Judicial Influence on Regulatory Decision-Making, 14 J.L. ECON. & ORG. 114, 132 (1998) ("The ability of an appellate court to control the behavior of an agency by imposing process requirements plays an important role in determining regulatory policy outcomes.").
155. Cf. O'Reilly, supra note 79, at 30 (documenting examples of information errors posted on agency websites that would equate to libels in the private sector and then observing how the Data Quality Act effectively exempts these problems through "loopholes" that include exempting "press releases, public filings, and charges made by agencies in their adjudicative processes").
156. See supra note 39 and accompanying text.
157. See Circular A-110, supra note 22, § .36(d)(1) (requiring research findings to be produced if they were "produced under an award that [was] used by the Federal Government in developing an agency action that has the force and effect of law"). Research funded by private institutions is thus exempted from the data access requirements. See, e.g., NRC, DATA ACCESS REPORT, supra note 54, at 27 (comments of Richard Merrill) (expressing concern over the fact that the Shelby Amendment "is not bilateral in its application" since it does not apply "to data that [are] generated by private dollars that [are] submitted to support agency decisions"); id. at 16 (reporting that panelist David Hawkins, representative of a public interest advocacy group, criticized the Shelby Amendment for being "'one-sided' because it applies only to federally funded research" and not to "industry-supported studies that have been submitted on a confidential basis to an agency to assert a claim of economic harm").
158. See Pub. L. No. 106-554, § 515, 114 Stat. 2763 (2000) (applying only to information "disseminated by Federal agencies").
159. See Data Quality Guidelines, supra note 25, § V.8 (defining "dissemination" as "agency initiated or sponsored distribution of information to the public . . . [but not including] distribution limited to correspondence with individuals or persons, . . . public filings, . . . or adjudicative processes"); NAS, DATA QUALITY TRANSCRIPT, DAY 1, supra note 24, at 60 (comments of Alan Morrison) (observing that the issuance of a permit constitutes an adjudication under the APA). Interestingly, at the first NAS workshop on the DQA, an industry spokesperson complained that EPA sometimes gives more weight to published studies than to industry-generated studies submitted pursuant to EPA protocols. NAS, DATA QUALITY TRANSCRIPT, DAY 2, supra note 9, at 86 (comments of Ray McAllister, CropLife America) (expressing concern that EPA "reach[es] into the open literature for information that it will use in making a pesticide decision and that though that literature may be peer reviewed, . . . we believe [it] complies with a much lower quality of standards in terms of transparency and reproducibility to trump the data produced under higher quality standards by manufacturers in making a pesticide decision").
160. See Data Quality Guidelines, supra note 25, § V.8 (defining "dissemination" as "agency initiated or sponsored distribution of information to the public . . . [but] does not include, . . . public filings, . . ."). "Toxic chemical release forms" required under section 313(a) of EPCRA, 42 U.S.C. § 11,023 (2000), will presumably be considered "public filings."
161. See Data Quality Guidelines, supra note 25, § V.3.b.ii.B.i (stating that the requirement that data and methods be made publicly available does not "override other compelling interests such as privacy, trade secrets, intellectual property, and other confidentiality protections"); see also NAS, DATA QUALITY TRANSCRIPT, DAY 2, supra note 9, at 128-29 (comments of Dr. Galson, U.S. Food & Drug Admin.) (noting that FDA approvals are largely based on industry-generated data and that "much of this is considered confidential business information. It is closely held by the sponsors."). But see Thomas O. McGarity & Sidney A. Shapiro, The Trade Secret Status of Health and Safety Testing Information: Reforming Agency Disclosure Policies, 93 HARV. L. REV. 837, 838, 887 (1980) (arguing that trade-secret status should not extend to much health and safety testing information).
162. See, e.g., General Accounting Office, Toxic Substances: EPA Should Focus Its Chemical Use Inventory on Suspected Harmful Substances, GAO/RCED-95-165, at Letter 3.3, available at (last visited Sept. 11, 2003) ("EPA believes that the number of claims of confidential information made by the industry under TSCA has been excessive and that many of these claims have been inappropriate."); Mary L. Lyndon, Secrecy and Innovation in Tort Law and Regulation, 23 N.M. L. REV. 1, 22-35, 34-35 (1993) (outlining the prominence of trade secrecy claims under major regulatory statutes and observing that "[f]or a worker or neighbor seeking data from a company, trade secret information is, as a practical matter, simply unavailable. There is no incentive for an employer to disclose.") (citations omitted). Only the ambiguous Daubert reform might reach these privately produced studies, although one would need access to the studies in order to challenge them, something that existing laws and the good-science reforms still do not provide. See supra Part IV.A.2.b.
163. See Data Quality Guidelines, supra note 25, § III.3 (outlining the complaint process for "affected persons," but not placing any responsibility on affected persons to identify the ways in which they are affected).
164. Objectivity is one of the four qualities required of regulatory science under the Data Quality Act. See Pub. L. No. 106-554, § 515, 114 Stat. 2763 (2000). For the routine requirement of conflict disclosures in scientific journals, see INT'L COMM. OF MED. JOURNAL EDITORS, UNIFORM REQUIREMENTS FOR MANUSCRIPTS SUBMITTED TO BIOMEDICAL JOURNALS (2001), available at (outlining standard conflict disclosure requirements).
165. See, e.g., McGarity, supra note 127, at 146 (arguing that the Food Quality Protection Act "appears to express a policy favoring EPA's use of less-than-perfect studies to support a decision to reduce pesticide risks to infants and children"). Good-science reforms might not be able to impair the credibility of at least some of the basic studies that support protective regulations, since these studies tend to be simple and focused and to employ accepted protocols. These basic studies are thus less vulnerable to methodological disputes than more complex epidemiology studies or cumulative risk studies.
166. See, e.g., Comprehensive Environmental Response, Compensation, and Liability Act § 113(h), 42 U.S.C. § 9613(h) (2000) (barring judicial review of challenges to EPA cleanups until after the cleanups have been completed).
167. "Default judgments" are the individual science-policy choices made at each of the points below the line in Figure 1. Accordingly, petitioners could attempt to argue in DQA complaints that agencies cannot regulate risks until each and every assumption (for example, in extrapolating from animal studies to humans or estimating average exposures) has been validated by scientific studies. These petitions challenge the foundation of protectionist regulation, namely that agencies can regulate without a risk being conclusively quantified, but do so under the charade of arguing that the agency has not used "good science" as the basis for its regulation. For a hint of these sorts of covert policy challenges, see infra notes 186-88 and accompanying text.
168. See infra note 187; see also NAS, DATA QUALITY TRANSCRIPT, DAY 2, supra note 9, at 107-09 (comments of Dr. Joe Rodricks) (questioning whether the objectivity criterion in OMB's data quality guidelines might be used to challenge and even undercut default (or policy) choices in risk assessments); O'Reilly, supra note 24, at 2, 20 (reporting that Ropes & Gray attorney Mark Greenwood argued at an ABA panel that "agency default assumptions that are built into a model can now be changed by petition from the persons affected, if the assumptions can be shown to lack 'quality' or 'objectivity' under the [DQA] criteria").
169. Experience reveals that even heavy reliance on a scientific advisory board can surreptitiously raise the bar on the evidence required to support protective standards. For example, Science Advisory Panel ("SAP") review of EPA's decisions to suspend or ban pesticides under FIFRA worked to raise the agency's burden of scientific proof to justify regulation through the Panel's discomfort with burdening registrants with residual uncertainties regarding pesticide risks. "Because of the Science Advisory Panel's . . . demand for 'good science,' it is harder to ban pesticides already on the market than to deny registrations to new products [since the SAP is not involved in the latter decisionmaking]." JASANOFF, supra note 40, at 151 (observing also how "SAP has grown increasingly skeptical about the appropriateness of regulating a commercially significant product solely on the basis of animal studies").
170. See Atrazine Petition, supra note 108.
171. EPA appears to be equivocating. See, e.g., U.S. ENVTL. PROTECTION AGENCY, INTERIM RISK MANAGEMENT DECISION FOR ATRAZINE 68 (Feb. 20, 2003), available at ("Based on the existing uncertainties in the available database, atrazine should be subject to more definitive testing once the appropriate testing protocols have been established.").
172. See supra note 97.
173. As Professor Finley argues,
[u]nder the guise of admissibility determinations, federal judges have been making significant substantive legal rules on causation by substantially raising the threshold of scientific proof plaintiffs need to get their expert causation testimony admitted, and thus survive summary judgment. . . . The emerging legal rule is that plaintiffs' experts must be able to base their opinions about causation on epidemiological studies, and that these studies standing alone must show that the population-wide risk of developing the disease in question, if exposed to defendants' products, is at least double the risk without exposure.
Lucinda M. Finley, Guarding the Gates to the Courthouse: How Trial Judges are Using Their Evidentiary Screening Role to Remake Tort Causation Rules, 49 DEPAUL L. REV. 335, 335 (1999); see also Siharath v. Sandoz Pharm. Corp., 131 F. Supp. 2d 1347, 1351 (N.D. Ga. 2001) (requiring the plaintiffs' multiple experts' testimony introduced to prove causation to meet "a reasonable degree of medical certainty" before the court would admit it under Daubert). The Supreme Court did not go quite this far in General Electric Co. v. Joiner, 522 U.S. 136 (1997), although it did open the door for courts to misapply the sufficiency standard and make judgments on causation in Daubert in limine hearings. See id. at 146-47 (holding that expert testimony is properly excluded when the trial court finds that "the studies upon which the experts relied were not sufficient, whether individually or in combination, to support their conclusions" in regard to causation).
174. In 2001, the district court in Siharath summarized the state of the law on the admissibility of testimony relying on animal studies for causation. 131 F. Supp. 2d at 1366. The court cited to a number of cases decided nationwide that excluded expert testimony relying on animal studies because "[e]xtrapolations from animal studies to human beings generally are not considered reliable in the absence of a credible scientific explanation of why such extrapolation is warranted." Id. (citations omitted). The court went on to explain that the "use of animal studies to prove causation in human beings has two distinct disadvantages. First, extrapolating from animals to humans is difficult because 'differences in absorption, metabolism, and other factors may result in interspecies variation in responses.'" Id. at 1366-67 (citations omitted). "Second, 'the high doses customarily used in animal studies requires consideration of the dose-response relationship and whether a threshold no-effect dose exists.'" Id. at 1367 (citations omitted). "To ensure that the expert's conclusion based on animal studies is reliable, there must exist 'a scientifically valid link between the sources or studies consulted and the conclusion reached.'" Id. (citations omitted). The court noted that even though some courts admitted testimony that extrapolated from animal studies "pre-Daubert," this is only permissible post-Daubert with a scientifically reliable explanation of why the extrapolation is warranted. Id. (citations omitted). The court ultimately concluded that "the animal studies at issue in this case . . . [do not meet] the necessary standard for reliability." Id.
175. See, e.g., Cranor & Eastmond, supra note 153, at 25, 32-34 (observing how courts sometimes make epidemiological studies necessary conditions of admissibility and give insufficient credence to animal studies as indicative of human carcinogenicity); Finley, supra note 173, at 348 (noting that "[t]he most noteworthy trend in using admissibility determinations to make substantive causation rules-noteworthy because it is seriously scientifically and legally misguided-is a growing number of court rulings stating that in order for an expert opinion about causation to be relevant and thus admissible, the expert must base her testimony on epidemiological studies that demonstrate the product in question at least doubles the risk of the disease from which plaintiff suffers"); Thomas O. McGarity, On the Prospect of "Daubertizing" Judicial Review of Risk Assessment, 66 LAW & CONTEMP. PROBS. 155, 156-78 (Autumn 2003) (describing this trend and arguing that proponents of regulatory Daubert intend this result, which runs "directly counter to the precautionary policies animating most health, safety, and environmental statutes").
176. Cf. Raul & Dwyer, supra note 29, at 34 (suggesting that some courts already "have articulated and followed Daubert-type principles").
177. See infra notes 194-98 and accompanying text.
178. Cf. NEIL K. KOMESAR, IMPERFECT ALTERNATIVES: CHOOSING INSTITUTIONS IN LAW, ECONOMICS, AND PUBLIC POLICY 7-8 (1994) (theorizing that individuals participate in transactions, whether political or economic, based on whether there are net benefits to participating, and thus the extent of an individual's participation is based on the difference between the benefits that will accrue to the person by participating and the costs of participation). Not surprisingly, the good-science reforms are the brainchild of regulated industry. See, e.g., NRC, DATA ACCESS REPORT, supra note 54, at 2 ("Industry and regulated communities supported the [Data Access Amendment] as a fair way to challenge scientific studies that support costly regulations, tort suits, and dubious/questionable risk estimates. The scientific community opposed the amendment."); supra note 24 (discussing industry's role in drafting the Data Quality Act).

The implications of the Data Quality Act for state regulation and federalism are intriguing, but appear unexplored. Theoretically, if there were a problem of bad regulatory science, it would be just as pervasive in state regulation. See, e.g., O'Reilly, supra note 79, at 33 (observing that one of the limitations to the "completeness and accuracy of federal data" is that "[s]tate-based programs vary tremendously in their degree of vigorous attention to detail and persistence in obtaining the requested set of information from all affected entities within their jurisdiction"). State information also appears to provide much of the information upon which EPA acts. See, e.g., NAS, DATA QUALITY TRANSCRIPT, DAY 1, supra note 24, at 154 (comments of Ms. Stanley, U.S. Envtl. Protection Agency) (observing that EPA currently does not correct state data, but does send correction notices to the state, and reporting that "one estimate is that over 90 percent of the data . . . that is in our system is provided by states"). To the extent that the Data Quality Act reaches state data, then, it might require compliance with the Unfunded Mandates Act. Regardless, given the pivotal role states play in regulation, it is no surprise that the industry leaders who were instrumental in passing the Data Quality Act are now promoting similar "model" legislation at the state level. See, e.g., Industry Seeks to Extend Federal Data Quality Rules to States, CLEAN AIR REP. (Inside Washington Publishers, Arlington, Va.), Jan. 30, 2003 (on file with author).
179. Two-thirds of the petitions filed against EPA as of August 1, 2003, came from industry or industry-funded organizations such as the Competitive Enterprise Institute and the U.S. Chamber of Commerce. See OMB Watch, Data Quality Challenges, at (last visited Aug. 1, 2003). This is consistent with experience in other adversarial regulatory settings. In her case study of EPA's decisionmaking on Dicofol under FIFRA, Professor Jasanoff observes that only the manufacturer participated vigorously in the fact-finding. "If the proceedings before the SAP were structured as a 'battle of experts,' it was a battle in which only one side fielded an army. As a result, there was no public testing of the [company] experts with respect to the content of their submissions or their overall credibility." JASANOFF, supra note 40, at 140-41; see also SMITH, supra note 52, at 45 (observing that "as [science] advisory groups become important forums for debating and potentially influencing the direction of agency policies, the constituencies served or regulated naturally seek a role in who is appointed to advisory positions, how they relate to the agency, and what advice is given"). The attentive public is unlikely to have the time or resources to request data, re-analyze studies, lodge meaningful complaints, seek corrections of specific methodological errors, or take issue with other assumptions. See, e.g., NRC, DATA ACCESS REPORT, supra note 54, at 15 (recounting comments by invited panelist Wendy Baldwin, NIH, who concluded that "[a]lthough it is easy to understand the appeal of gaining access to 'original data,' [under the Data Access Amendment,] this access can serve little purpose for those without the skills to reanalyze it"). The reforms might also increase the costs of participating by requiring commentators to provide evidence that the information undergirding their comments is of high quality.
See, e.g., NAS, DATA QUALITY TRANSCRIPT, DAY 1, supra note 24, at 179 (comments of Professor Pierce) (hypothesizing that "the Guidelines may wind up having a much greater adverse effect on parties who submit comments to agencies and otherwise try to influence agencies by submitting studies and other sources of information to agencies than it does directly on agencies"). While such impediments might inconvenience industry, they could make it difficult, if not impossible, for the general public to submit meaningful comments.
180. See, e.g., NAS, DATA QUALITY TRANSCRIPT, DAY 1, supra note 24, at 184 (comments of Professor Pierce) (quoting an unidentified lawyer, who observed that "[t]he [DQA] guidelines are going to be enormously damaging for an agency like EPA. Agency officials will be surprised at how industry will use this vehicle to slow things down"). Reinforcing the perception that these reforms are intended to benefit only a small group of well-financed stakeholders (namely, industry) are the exemptions carved into the reforms. As mentioned above, most industry data is outside the effective reach of the good-science reforms. See supra Part IV.A.2.b. By contrast, in the setting where the public would seem to be most adversely affected by the quality of agency science (the evaluation of claims for disability and other medical-based entitlements), the agency is not required to ensure the validity of the science. Data Quality Guidelines, supra note 25, § V.8 (exempting from the requirements information distributed pursuant to "adjudicative processes"). In these exempted "adjudicatory" proceedings, the affected persons are generally members of the public, often acting "pro se" against the agency, as opposed to well-financed, regulated industries.
181. Interest groups that purport to represent the public might have to pick their battles since engaging in data re-analysis and review requires both time and expertise. Moreover, since the good-science reforms will only work to slow, rather than speed, the regulatory process, interest groups favoring protective regulations might not see net benefits to challenging the quality of agency science.
182. In fact, as of March 18, 2003, there appears to be only one DQA petition filed by those concerned with under-regulation, and this was a challenge to the information used to support a temporary exemption to regulatory requirements (where the delay in resolving the DQA complaint would not delay preventative regulation). See supra note 129.
183. As alluded to previously, the procedures can be abused, perfectly legally, simply to delay agency regulatory activities and harass scientists producing research that is adverse to the stakeholder's interests. See O'Reilly, supra note 79, at 31-32 (discussing the possibilities for abuse of data quality procedures to delay agency activities); see also supra notes 136-39 and accompanying text. The reforms could also be used to forestall enforcement actions by lawyers representing defendants in civil (and possibly criminal) suits brought by the government. See, e.g., O'Reilly, supra note 24, at 847 (observing that "[i]t is foreseeable that the defense sequence for lawyers defending post-2002 civil penalty cases will routinely involve discovery, Freedom of Information Act requests, section 515 complaints, and a request for stay pending outcome of the agency's response to the section 515 critique"). A symposium in Law and Contemporary Problems, now nearly a decade old, documented a similar abuse of ex parte subpoenas by private parties. See Symposium, Court-Ordered Disclosure of Academic Research: A Clash of Values of Science and Law, 59 LAW & CONTEMP. PROBS. 1 (Summer 1996). By filing third-party subpoenas against scientists, some litigants succeeded in slowing and even halting on-going research that was producing results contrary to the private parties' financial interests. See, e.g., Bert Black, Research and its Revelation: When Should Courts Compel Disclosure?, 59 LAW & CONTEMP. PROBS. 169, 173 (Summer 1996) (describing the use of subpoenas to harass scientists and chill research); Robert M. O'Neil, A Researcher's Privilege: Does Any Hope Remain?, 59 LAW & CONTEMP. PROBS. 35 (Summer 1996) (describing the harm to research and researchers from compelled disclosure and the lack of meaningful attention to the problem by the courts); see also supra note 138.
Commenters in the symposium noticed that discovery procedures made abuse essentially costless to the party committing the abuse. Cf. Black, supra, at 183 (recommending that, to provide some disincentives for filing harassing third-party subpoenas, the moving party be required to pay the attorney fees for that expert "if a compromise offer is made to the party seeking disclosure and the court's ruling requires no disclosure beyond the offer"); O'Neil, supra, at 49 (concluding that the protections available to researchers from compelled disclosure are "tenuous and uncertain" and that scholars and their attorneys should continue to pursue "whatever protection the courts may afford to the quest for knowledge").
184. The only costs that may be passed on to a party filing a DQA complaint or data access request are the "reasonable" costs of producing data under the Data Access Amendment. Circular A-110, supra note 22, § .36(d)(1) (allowing but not requiring the awarding agency to "charge the requestor a reasonable fee equaling the full incremental cost of obtaining the research data[,] . . . [including] costs incurred by the agency, the recipient, and applicable sub-recipients."); see NAS, DATA QUALITY TRANSCRIPT, DAY 1, supra note 24, at 99 (comments of Alan Morrison) (observing that one "can file as many correction requests as you want. You don't have a quota on correction requests. . . . What the agency would or should do with them is of course a more difficult situation.").
185. McGarity, supra note 175, at 178-221 (providing a detailed description of the tobacco industry's efforts to suppress and obfuscate scientific information regarding the hazards of environmental tobacco smoke).
186. See, e.g., Mary G. Kweit & Robert W. Kweit, The Politics of Policy Analysis: The Role of Citizen Participation in Analytic Decision Making, in CITIZEN PARTICIPATION IN PUBLIC DECISION MAKING 19 (Jack DeSario & Stuart Langton eds., 1987) ("Technocratic methods are tools that seem to limit the role of public participation. Through sophistry, these tools can be used to justify and reify the wishes of a few.").
187. The most blatant example is a February 2003 preemptive letter threatening complaint if EPA relied upon comments filed by the Natural Resources Defense Council. The industry group summarized their twelve-page list of complaints with the observation that "[m]any of NRDC's arguments are based on asserted [sic] need to inject policy bias into the risk assessment, which would be a violation of Data Quality requirements." Letter from William G. Kelly, Center for Regulatory Effectiveness, to EPA Water Docket (Feb. 27, 2003), available at Additionally, as mentioned above, see supra note 129, industry's efforts to exclude Dr. Hayes's research on the developmental effects of atrazine hinge on policy rather than scientific arguments. Both of industry's arguments in the DQA petition (that agencies must pre-approve protocols and that research must be replicated before it can be used in regulatory decisionmaking) are predominantly policy, not technical, disagreements that in fact conflict with EPA's statutory mandate under FIFRA to place the burden of proving safety on the registrant. Other "scientific quality" complaints similarly take issue, at least in part, with the agency's value judgments or policy extrapolations used in a larger risk assessment. In one complaint, industry challenged EPA's barium risk assessment, in part because industry disagreed with the agency's conservative interpretation of the data. See Chemical Products Division, Request for Correction of the IRIS Barium Substance File (Oct. 29, 2002); Letter from Paul Gilman, Assistant Administrator, U.S. Envtl. Protection Agency, to Jerry Cook, Chemical Products Division (Jan. 30, 2003), available at In two other complaints, the Competitive Enterprise Institute argued that NOAA, and, by association, the Office of Science and Technology Policy, used flawed models to predict global warming and that all reports and data relying on that model should be withdrawn. The CEI did not, however, discuss whether other, more accurate predictive models are currently available. See supra notes 118, 129.
188. In theory, agencies could dismiss these covert attacks on policy choices by arguing that such challenges are not to "information," but are instead directed at agency policy. "Information" is defined in the OMB Data Quality Guidelines as "any representation of knowledge such as facts or data." OMB Data Quality Guidelines, supra note 25, at 8460.
189. See, e.g., PAUL SLOVIC, THE PERCEPTION OF RISK 316, 317 (2000) ("In recent years there have been numerous articles and surveys pointing out the importance of trust in risk management and documenting the extreme distrust we now have in many of the individuals, industries and institutions responsible for risk management."). Publicizing or "spinning" problems with the quality of science undergirding regulation, especially problems that are not supported with facts, can hardly help repair this tarnished reputation.
190. Id. at 319 (reporting that "[t]he asymmetry between the difficulty of creating trust and the ease of destroying it has been studied by social psychologists within the domain of interpersonal perception," and providing a description and mechanistic explanation of some of the studies).
191. The cost of delay is not considered an administrative cost, even though the social costs could be significant if delay does result. See, e.g., supra notes 108-11 and accompanying text.
192. For example, because of the Data Access Amendment, data sharing plans are required as a condition to obtaining large NIH grants. See NAT'L INSTS. OF HEALTH, NOT-OD-03-03, FINAL NIH STATEMENT ON SHARING RESEARCH DATA (Feb. 26, 2003), available at
193. See Andrew C. Revkin, Suit Challenges Climate Change Report by U.S., N.Y. TIMES, Aug. 7, 2003, at A21. The lengthy transcript from the NAS workshop on the DQA is replete with unanswered questions and hypotheticals concerning the implementation of the Act and the OMB Guidelines. For a presentation that seems to raise an especially long list of unanswered compliance questions, see NAS, DATA QUALITY TRANSCRIPT, DAY 1, supra note 24, at 52 (presentation of Mr. Siskind, Dep't of Labor); id. at 119-21 (comments of Neil Cohen) (raising a hypothetical pointing out how competing stakeholders can attack agency information from opposing sides and completely discredit the information); id. at 58 (comments of Neil Cohen) (discussing pivotal issues such as the meaning of "data," and "reproducibility").
194. For example, an EPA study reports that in Fiscal Year 1993, EPA undertook a total of 7595 risk assessments. Of these, 249 were major risk assessments that required more than four person-weeks and 1180 were medium-level assessments requiring more than two person-days and less than four person-weeks. The remaining assessments (6166) were done in less than two days and consisted of abbreviated screening analyses conducted under TSCA. NAT'L ACAD. OF PUB. ADMIN., SETTING PRIORITIES, GETTING RESULTS: A NEW DIRECTION FOR EPA 37-39 (1995). At the first NAS workshop on the DQA, Health Effects Institute speaker Jim O'Keefe noted how expensive and time-consuming the HEI re-analysis of the Six Cities Studies was and how "most folks would agree" that such extraordinary measures are "rarely justified." NAS, DATA QUALITY TRANSCRIPT, DAY 2, supra note 9, at 18. Elaine Stanley, the director of EPA's Office of Environmental Information, reported that EPA's preexisting information correction system had generated 1000 submissions for correction over eighteen months. Only thirty percent provided enough information to assess the complaint, and of the original 1000, only twelve percent were deemed meritorious and resulted in a correction. The examples of meritorious correction requests Ms. Stanley provided were changes in regulated facilities' name or location or other identifying information. NAS, DATA QUALITY TRANSCRIPT, DAY 1, supra note 24, at 151-52.
195. According to an industry website, only six DQA petitions of significance were filed against public health and environmental agencies as of March 18, 2003, roughly five months into the implementation of the Act. See Center for Regulatory Effectiveness, Status of Data Quality Act Petitions, at (last visited Sept. 11, 2003).
196. See supra note 187. The same organization filed a similar preemptive letter warning universities to ensure that their research meets the requirements of the DQA if they wish to receive continued federal funding. See supra note 139 and accompanying text.
197. See, e.g., NRC, DATA ACCESS REPORT, supra note 54, at 27 (comments of Richard Merrill) (expressing concern that "[t]here is no 'need-to-know' requirement. The mere desire of the request is sufficient to trigger the obligations of the Shelby Amendment."). The distinction under the DQA between influential and non-influential information, see supra note 26, is presumably a recognition that the benefits of better data quality differ in different contexts, but this single sorting device seems insufficient, standing alone, to ensure that improving the quality of information has tangible benefits in relation to the administrative costs of that improvement. Cf. Carl F. Cranor, The Social Benefits of Expedited Risk Assessments, 15 RISK ANALYSIS 353, 357 (1995) (arguing with elaborate models that "science-intensive, case-by-case assessments . . . are necessary only if the costs of regulating a substance are quite high compared to the risks to health . . . . [I]t appears better to evaluate a larger universe of known carcinogens somewhat less intensively for each substance than to evaluate a small proportion of that same universe very carefully and delay considering the rest.").
198. Unjustified diversion of resources and the resultant effect on agency priorities is a common concern with respect to the Data Quality Act. See, e.g., NAS, DATA QUALITY TRANSCRIPT, DAY 1, supra note 24, at 113-14, 138-40 (comments of Fred Anderson and Dan Cohen) (remarking on the potential for the DQA to divert agency resources and priorities with little perceptible benefit). The cost-effectiveness of the good-science reforms is also unclear. For example, information need not always be of the highest quality, and failure of the good-science reforms to take this into account-other than in a very superficial way through the "influential information" distinction-could lead to the imposition of unjustified costs on society. Cf. NAS, DATA QUALITY TRANSCRIPT, DAY 1, supra note 24, at 165 (comments of Jim Harris) (commenting on the high cost of quality information, but allowing that "some of these things[,] no matter how much money you spend[,] you are never going to resolve"). When one adds the unanticipated adverse consequences (perhaps the equivalent of risk-risk tradeoffs) that flow from implementation of the reforms, the costs might rise higher still. See generally supra Part III.B.1.
199. See supra notes 29-30 and accompanying text.
200. Cf. McGarity, supra note 175, at 210-21 (describing the tedious review that the district court in Flue-Cured engaged in for EPA's risk assessment of environmental tobacco smoke, a review that likely epitomizes regulatory Daubert).
201. The proponents of regulatory Daubert are unclear on whether they are suggesting the equivalent of admissibility hearings on challenged information, although some proposals seem to suggest this approach. See, e.g., Weller & Graham, supra note 29, at 10,568, 10,572 (arguing that Daubert applies to the information supporting agency rulemakings and endorsing Daubert hearings for scientific evidence in all types of environmental litigation).
202. See, e.g., Ellen Relkin, To Hear or Not to Hear: When Are Daubert Hearings Appropriate?, SF78 ALI-ABA 371, 375 (2001) (reporting that Daubert hearings can range from a few hours to numerous days and have evolved into virtual mini-trials involving a myriad of experts from both sides that can cost parties "tens to hundreds of thousands of dollars" and observing that the costs of Daubert hearings are being factored into plaintiffs attorneys' decisions to reject meritorious cases when the injuries are not catastrophic); id. at 381 (reporting that the defendants' costs of a Daubert hearing, which the court assigned to the losing plaintiffs, were $87,887.11 in one case and $26,921.62 in another); see also Denise M. Dunleavy, The Darwin Guide to Survival at a Daubert Challenge, 2 ANN. 2001 ATLA-CLE 2775 (2001) (providing lengthy recommendations for anticipating and then preparing for Daubert hearings, which resemble mini-trials).
203. Much depends on how the Daubert reform would be implemented. For example, Alan Raul anticipates that Daubert could be used much more modestly as a probe to force agencies to be clearer about their policy judgments. He argues that his form of regulatory Daubert is intended to "promote the full disclosure of all of the agency's underlying principles, assumptions, and facts and obligate the agency to come completely clean on the foundation for its scientific decision. Following that full disclosure, the agency is entitled to policy deference on the scientific foundation for its decisions." Elliott et al., supra note 29, at 10,130. Presumably, judicial scrutiny of agencies' scientific judgments will lead them to be clearer about the role policy plays in their decisions, especially if policy choices are granted greater deference. This might be a perfect antidote if judicial review worked perfectly, courts were able to make this distinction, and agencies were willing to help them. Instead, though, existing evidence suggests the opposite is true. See supra Parts IV.A.1.a, IV.A.2.a. In any case, if this is the extent of Raul's version of regulatory Daubert-particularly given the other published proposals on regulatory Daubert-it might deserve a more innocuous label (perhaps "hard look at agency facts").

The more intrusive forms of regulatory Daubert that seek to exclude evidence are considerably more problematic with respect to ensuring positive regulatory outcomes. See supra note 29. They also might fail to address the underlying problem. Unlike exclusions of testimony at trial, there is nothing to keep agencies from considering studies in secret. And courts certainly cannot force agency staff to disgorge insights previously gleaned from excluded studies simply by striking down such studies after the fact and remanding the rulemaking to the agency. See, e.g., STEPHEN BREYER, BREAKING THE VICIOUS CIRCLE: TOWARD EFFECTIVE RISK REGULATION 58 (1993) ("[A]side from precedential effects, an adverse court decision [overturning an agency rule] usually means a remand, which as in the Benzene Case, may lead to several more years of 'corrective rule-making proceedings' (as Richard Merrill calls them) to reach basically similar results.") (citation omitted).
204. Before the recent spate of good-science bills, culminating in the Data Access and the Data Quality Acts, members of Congress repeatedly targeted the quality of agency science in their regulatory reform efforts. The most notorious efforts, which ultimately failed, were during the 104th Congress. While "transparency" was among the goals, most of the provisions were directed at expanding peer review, increasing the procedural requirements for decisionmaking, adding process requirements, and enlarging the avenues available for judicial review of an agency's determinations. See, e.g., Job Creation and Wage Enhancement Act of 1995, H.R. 9, 104th Cong., § 402(4) (finding that agencies should do a better job in risk assessments of collecting, organizing, and evaluating scientific and other data); § 414(b)(1) (mandating that agencies include a discussion of conflicts in existing studies and data); §§ 414(b)(2), 441 (providing for judicial review of agencies' compliance with procedures); § 431 (expanding peer-review requirements for agency rulemakings); Bob Benenson, House Easily Passes Bill To Limit Regulation, 53 CONG. Q. WKLY. REP. 681-82 (1995) (quoting Republican Robert Walker, who touts H.R. 9 as legislation designed to ensure that "good science" determines appropriate levels of regulation). In subsequent Congresses, members continued to sponsor bills that purported to improve the quality of agency science, again without success until the Shelby Amendment. See, e.g., Steve P. Calandrillo, Responsible Regulation: A Sensible Cost-Benefit, Risk Versus Risk Approach to Federal Health and Safety Regulation, 81 B.U. L. REV. 957, 1014-15 (2001) (discussing the repeated congressional efforts between 1997 and 2000 to require added risk and cost-benefit analysis requirements to regulatory decisionmaking).
205. In the risk assessment world, this working assumption is known as a "default" or "inference option," which is necessarily based on policy or a mix of science and policy due to the limits of science in predicting the effects of toxic substances on human health. See, e.g., COMM. ON RISK ASSESSMENT OF HAZARDOUS AIR POLLUTANTS, NAT'L RESEARCH COUNCIL, SCIENCE AND JUDGMENT IN RISK ASSESSMENT 7 (1994) [hereinafter SCIENCE AND JUDGMENT] (stating that default options "are used in the absence of convincing scientific knowledge on which of several competing models and theories is correct").
206. In extrapolating from high-dose studies on animals to possible low-dose effects on humans, some type of dose-response curve must be selected, but since there is generally no way to study low-dose effects, the appropriate curve is based in the end on policy considerations. As a working default, EPA selects a linear dose-response curve for strong and intermediate carcinogens and other select substances, meaning that the response to a toxin increases in direct proportion to the dose of the toxin. See, e.g., U.S. Envtl. Protection Agency, Proposed Guidelines for Carcinogenic Risk Assessment, 61 Fed. Reg. 17,960, 17,981 (proposed Apr. 23, 1996) (recommending this "default" assumption of linearity when there is evidence of adverse effects, but there is not evidence to support an assumption that the dose-response relationship is nonlinear); id. at 17,986-90 (listing seven examples of types of substances subject to risk assessment and recommending a linear default for four of the seven); see also U.S. Envtl. Protection Agency, National Primary Drinking Water Regulations, 66 Fed. Reg. 6976, 7004 (Jan. 22, 2001) ("The use of a linear procedure to extrapolate from a higher, observed data range to a lower range beyond observation is a science policy approach that has been in use by Federal agencies for four decades."). Also central to this working assumption is the corollary assumption that animals provide a reliable surrogate for assessing the effects of a toxin on humans. See, e.g., SCIENCE AND JUDGMENT, supra note 205, at 86.
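The linear default described in note 206 can be stated compactly. The following is a minimal illustrative sketch of the standard linear no-threshold form, not EPA's official notation; the slope symbol q* is the editor's shorthand for the fitted dose-response coefficient:

```latex
% Linear no-threshold default: excess risk R is assumed strictly
% proportional to dose d, all the way down to zero dose. The slope
% q^{*} is fit to observable high-dose animal data and then
% extrapolated into the unobservable low-dose region.
\[
  R(d) \;=\; q^{*}\, d, \qquad d \ge 0 .
\]
% Because R(d) > 0 whenever d > 0, the model builds in the
% "no-safe-dose" assumption discussed in the text: every nonzero
% exposure carries some presumed excess risk.
```

The policy significance lies in the last observation: under a linear model, no positive dose is presumed safe, which is why rebutting the assumption is so consequential for standard-setting.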
207. See, e.g., Richard C. Barnard et al., The Time has Come for Reconsidering the Role of Generic Default Assumptions Based on "Conservative Policy Choice" in Scientific Risk Assessments, 31 Envtl. L. Rep. (Envtl. L. Inst.) 10,873, 10,873 (July 2001) (acknowledging the scientific and value basis for "conservative policy choices" in risk assessments such as the linear dose-response curve, but arguing that EPA's decisionmaking should be more transparent); Adam M. Finkel, A Second Opinion on an Environmental Misdiagnosis: The Risky Prescriptions of Breaking the Vicious Circle, 3 N.Y.U. ENVTL. L.J. 295, 340-52 (1995) (arguing that the conservative default options are supported by some scientific knowledge); Cass R. Sunstein, The Arithmetic of Arsenic, 90 GEO. L.J. 2255, 2282 (2002) (acknowledging that the linear dose-response curve is "as sensible as any other, but it is not much more than a hunch").
208. Not surprisingly, many commentators dedicate considerable effort to challenging the working assumption, but they remain stymied by the absence of viable alternative protective assumptions. See, e.g., Frank B. Cross, The Consequences of Consensus: Dangerous Compromises of the Food Quality Protection Act, 75 WASH. U. L.Q. 1155, 1201-03 (1997) (advocating greater attention to other dose-response models, including hormetic dose-response relationships, where low concentrations of toxins have beneficial effects); see also BREYER, supra note 203, at 42-45 (1993); Barnard et al., supra note 207, at 10,873; Sunstein, supra note 207, at 2279-82.
209. See infra Part V.B.1.
210. See infra notes 229-30 and accompanying text.
211. PHILLIP L. WILLIAMS ET AL., PRINCIPLES OF TOXICOLOGY 456 (2000) (observing that "[b]ecause the shape of the dose-response curve in the low-dose region cannot be verified by measurement, there is no means to determine which shape is correct").
212. See, e.g., James W. Conrad, Jr., The Reverse Science Charade, 33 Envtl. L. Rep. (Envtl. L. Inst.) 10,306, 10,306 (Apr. 2003) (counsel for the American Chemistry Council arguing that agencies "exaggerat[e] the limitations of science, and risk analysis, in order to justify regulation on the basis of policy choices"); id. at 10,308 (arguing that PBPK/PD computer models used to approximate dose-response for individual chemicals reduce uncertainties, although not arguing (or supporting) that the models in total can be validated, only that certain measurements for data inputs can be); Sean M. Hayes et al., Potential Uses of PBPK Modeling to Improve the Regulation of Exposure to Toxic Compounds, RISK POL'Y REP. (Inside Washington Publishers, Arlington, Va.), Dec. 18, 1998, at 37 (same regarding incomplete validation of PBPK models); infra note 239.
213. See Indus. Union Dep't v. Am. Petroleum Inst., 448 U.S. 607 (1980) [hereinafter the Benzene Case]; infra notes 241-45 and accompanying text (explaining this quantum problem as it arose in the Benzene Case).
214. See BREYER, supra note 203, at 44-45 (discussing with concern EPA's adoption of a linear, no- threshold model for carcinogens). Justice Breyer actually does not suggest that the problem of over-regulation of certain substances is due to bad science in the agency. Id. at 11 (stating that EPA "generally receives high marks for its work," but pointing to institutional design problems that lead to tunnel vision, random agency selection, and inconsistency). Justice Breyer's discussion of "tunnel vision," however, can be re-characterized as a quantum problem; under precautionary statutes like CERCLA, the agency does not address how much evidence is enough to suggest that a particular risk (that is presumed harmful) can be tolerated, even with prevailing uncertainties. See id. at 11-19 (arguing that regulators insist on safety to the point where it does more harm than good and that this results in the "expenditure of considerable effort to achieve results that save very few lives at very high cost"); id. at 51 (finding that "'err on the safe side' scientific canons and default assumptions suggesting prevalent danger . . . [make] it difficult for agencies to resist overkill and random agenda setting"); see also id. at 65 (discussing the need to define and establish de minimis risk that will be tolerated). The quantum problem also helps explain Breyer's second concern-random agency selection that causes numerous risks to remain unaddressed. Id. at 19-20 (observing that "[s]ome critics point out that, of the more than sixty thousand chemical substances potentially subject to regulation, only a few thousand have undergone more than crude toxicity testing") (citations omitted). This again can be explained by the amount of available evidence being so slim as to be insufficient (in the agency's view) to support a protective regulation. 
This under- and over-regulation critique is also leveled, in a somewhat similar fashion, by John Mendeloff, who argues that the risks that are subject to regulation are "over-regulated," while far more risks are simply ignored. JOHN M. MENDELOFF, THE DILEMMA OF TOXIC SUBSTANCE REGULATION: HOW OVERREGULATION CAUSES UNDERREGULATION AT OSHA 2-3 (1988); see also CASS SUNSTEIN, BEYOND THE PRECAUTIONARY PRINCIPLE, 2, 9, 10 (John M. Olin Law & Econ. Working Paper, No. 149 (2d Series), 2002), available at (arguing that the precautionary principle "is literally paralyzing, forbidding regulation, inaction, and every step in between" and noting later, through examples, that "[i]f the burden of proof is on the proponent of the activity or processes in question, the precautionary principle would seem to impose a burden of proof that cannot be met"). As discussed below, the rebuttal problem provides a better fitting diagnosis for these recurring concerns.
215. For example, most of the regulatory-Daubert proponents openly concede that the goal of their proposal is to address the "problem" of over-regulation, rather than the problem of poor science in regulation. See Weller & Graham, supra note 29, at 10,585-59. Over-regulation seems more likely to come from an agency that regulates with too little information (a quantum problem), than an agency that regulates with information of poor quality (a quality problem). The failure of these reform proponents to identify problems with the quality of agency science further suggests that, at bottom, their concerns are with the amount-rather than the quality-of evidence needed to justify regulation. See supra Part III.A; see also Conrad, supra note 212, at 10,311-14 (expressing frustration that EPA adheres to its cautious policy assumptions in the face of scientific advancements in modeling toxicity).
216. See, e.g., SMITH, supra note 52, at 202 ("Some of the complaints about the erosion of science's authority in the public policy process are based on this mistaken assumption that policy conclusions follow inexorably from the scientific facts."); cf. Charles E. Lindblom, The Science of "Muddling Through," 19 PUB. ADMIN. REV. 79, 83-84 (1959) (providing a descriptive account of how public officials make decisions under conditions of very limited information and proposing a normative strategy for decisionmaking, called "successive limited comparisons," that instructs how officials can select among several short-term options).
217. See supra Part IV.B.2.
218. A widely hailed Organization for Economic Cooperation and Development ("OECD") report on how to conduct policy analysis, especially in the context of complex, technical regulatory issues, establishes problem identification as a critical first step in its set of principles for regulatory decisionmaking. See, e.g., Office of Mgmt. & Budget, Draft Report to Congress on the Costs and Benefits of Federal Regulations; Notice, 67 Fed. Reg. 15,014, 15,031 (Mar. 28, 2002) (reproducing the OECD principles for regulatory decisionmaking); see also RALPH KEENEY, VALUE FOCUSED THINKING vii-ix, 29-30, 44-51 (1992) (highlighting the benefits of value-focused thinking and discussing how neglecting a universal map of the goals, problems, and possible solutions can result in wrong-headed decisions).
219. See supra Part III.C.
220. See, e.g., Page, supra note 90, at 209-10 (describing "[t]he potential for catastrophic costs" as "the second common characteristic of environmental risk" that requires an anticipatory form of policy intervention).
221. ROBERT V. PERCIVAL ET AL., ENVIRONMENTAL REGULATION: LAW, SCIENCE, AND POLICY 72 (3d ed. 2000) (arguing that the "inadequacies of the common law" help explain the "rapid growth of regulatory legislation").
222. See supra Part III.C; see also Safe Drinking Water Act, 42 U.S.C. § 300g-1(b)(1)(A)(i)-(iii) (2000) (ordering that, to promulgate a "maximum contaminant level goal," EPA must show that "the contaminant may have an adverse effect on the health of persons" and "is known to occur or there is a substantial likelihood that the contaminant will occur in public water systems," and that "regulation of such contaminant presents a meaningful opportunity for health risk reduction") (emphasis added); Resource Conservation and Recovery Act, 42 U.S.C. § 6903(5) (2000) (defining for purposes of the statute "hazardous wastes" as those "which because of [their] quantity, concentration, or physical, chemical, or infectious characteristics may (A) cause, or significantly contribute to an increase in mortality or an increase in serious irreversible, or incapacitating reversible, illness; or (B) pose a substantial present or potential hazard to human health or the environment when improperly treated, stored, transported, or disposed of, or otherwise managed"); SHAPIRO & GLICKSMAN, supra note 89, at chs. 3, 32-33 (identifying the lower evidentiary triggers for the CAA, the Occupational Safety and Health Act, SDWA, RCRA, and the CWA).
223. See supra note 206.
224. See generally SHAPIRO & GLICKSMAN, supra note 89, at 35 (observing based on a survey of environmental and public health regulatory statutes that "[m]ost of the laws (16 of 22) use triggers that create less than the maximum evidentiary burden and, in particular, most fall in the middle categories-risk threshold or significant risk threshold").
225. Under at least one statute, only a single animal bioassay (or less) is needed to trigger this zero level of regulatory tolerance. This presumptive zero standard is most clear for the Delaney Clause. See supra note 91 and accompanying text; cf. McGarity, supra note 127, at 136 (discussing how a House Commerce Committee Report on FQPA suggests that "the committee expected that 'the Administrator will interpret an ample margin of safety to be a 100-fold safety factor applied to the scientifically determined 'no observable effect' level when data are extrapolated from animal studies'") (citations omitted). This presumptive zero standard reappears in EPA's approach to setting MCLGs under SDWA. See, e.g., Natural Resources Defense Council v. EPA, 824 F.2d 1211, 1215 (D.C. Cir. 1987) (in challenge to various MCLGs, quoting and ultimately upholding EPA's position that "'it believed a [recommended level] of zero was more consistent with the [Act's] mandate and the legislative history.' . . . [based on the statutory directive to] 'prevent known or anticipated effects with a margin of safety'"). Judge Williams of the D.C. Circuit has suggested that a presumptive zero standard is a plausible interpretation of section 109 of the CAA, which requires the agency to set ambient air standards for criteria pollutants, at least for non-threshold pollutants. See supra note 95. But see Natural Resources Defense Council v. EPA, 824 F.2d 1146 (D.C. Cir. 1987) (en banc) [hereinafter the Vinyl Chloride Case] (holding that old section 112 of the Clean Air Act, which provided similar language for air toxins, did not require a presumptive zero standard for non-threshold air pollutants). Finally, although not translated into enforceable requirements, Congress set a goal of zero pollution for point source discharges into surface waters in the Clean Water Act. See 33 U.S.C. § 1251(a)(1) (2000) ("It is the national goal that the discharge of pollutants into the navigable waters be eliminated by 1985.").
226. This two-step process-justifying and then rebutting a protective regulation-is different from, but still related to the trigger/target model of regulation pioneered by John Applegate, see John S. Applegate, Worst Things First: Risk, Information, and Regulatory Structure in Toxic Substances Control, 9 YALE J. ON REG. 277, 305-06 (1992) (referring to the evidence needed by an agency to justify protective regulation as the "predicate" and to the second step of identifying the appropriate level of protection as the "target"), and elaborated on by SHAPIRO & GLICKSMAN, supra note 89, at ch. 3. These prior models look to the substantive criteria that an agency may consider in justifying and then refining a regulation (the target), with emphasis on the role economic analysis may play in the decisionmaking. The analysis here considers only the complications that arise from rebutting the protectionist assumption of no-safe-dose, an assumption that arises in both steps but becomes most problematic during the second step, when it implies a zero or near-zero protective standard.
227. Advocates of alternative dose-response models generally concede this fact. See supra note 212 and accompanying text; infra note 239 and accompanying text.
228. For an articulation of precisely these concerns regarding Justice Stevens's opinion in the Benzene Case, see infra notes 241-45 and accompanying text; SUNSTEIN, supra note 214.
229. See Clean Water Act, 33 U.S.C. § 1311(b)(2)(C)-(D) (2000) (referencing the House Committee Report for a list of 126 toxic substances for which technology-based standards must be promulgated); Clean Air Act, 42 U.S.C. § 7412(b) (2000) (listing 189 air toxins for which technology-based standards must be promulgated). Although EPA is required to set science-based goals ("maximum contaminant level goals") under the Safe Drinking Water Act, 42 U.S.C. § 300g-1(b)(4)(A) (2000), the enforceable standard is based on the level of protection closest to the MCLG that is "feasible"-called the "maximum contaminant level," id. at § 300g-1(b)(4)(B). In 1996, Congress added a set of cost-benefit requirements to the MCL determination, causing it to come closer to a cost-benefit standard than a feasibility-based standard. See id. at § 300g-1(b)(3)(C)(i). The latter does not consider the benefits of health protection as part of the standard-setting calculation. In the Delaney Clause, Congress also circumvented the rebuttal problem by making the standard irrebuttable. See supra note 225.
230. See, e.g., Wendy E. Wagner, The Triumph of Technology-Based Standards, 2000 ILL. L. REV. 83, 84-85, 96-97 (noting how technology-based standards avoid basing regulatory limits on environmental or public health needs).
231. For examples of precautionary mandates that resist rebuttal because of uncertain science, see, for example, the Federal Insecticide, Fungicide, and Rodenticide Act, 7 U.S.C. § 136a(c)(5)(D) (2000) (allowing pesticides to be registered only if the administrator finds that "when used in accordance with widespread and commonly recognized practice it will not generally cause unreasonable adverse effects on the environment"); Toxic Substance Control Act, 15 U.S.C. §§ 2604(f)(1), 2605(a) (2000) (authorizing regulatory action on new and existing toxic substances "[i]f the Administrator finds that there is a reasonable basis to conclude that the manufacture, processing, distribution in commerce, use, or disposal of a chemical substance or mixture, or that any combination of such activities, presents or will present an unreasonable risk of injury to health or the environment"); Food Quality Protection Act, 21 U.S.C. § 346a (2000) (mandating that a protective standard for pesticide residues is rebutted only once "there is a reasonable certainty that no harm will result from aggregate exposure to these residues"); Clean Water Act, 33 U.S.C. § 1313(c)(2)(A) (2000) (requiring that water quality standards set by states "be such as to protect the public health or welfare"); Safe Drinking Water Act, 42 U.S.C. § 1412(b)(4) (2000) (requiring that maximum drinking water contaminants levels be "set at the level at which no known or anticipated adverse effects on the health of persons occur and which allows an adequate margin of safety"); Resource Conservation and Recovery Act, 42 U.S.C. § 6924(m) (2000) (requiring that standards for treatment of hazardous wastes disposed onto land specify "those levels or methods of treatment, if any, which substantially diminish the toxicity of the waste or substantially reduce the likelihood of migration of hazardous constituents from the waste so that short-term and long-term threats to human health and the environment are minimized"); Clean Air Act, 42 U.S.C. § 7409(b)(1) (2000) (ordering that ambient air quality standards for criteria pollutants must "protect the public health" "allowing an adequate margin of safety").
232. Congress's "answer" can be found at 42 U.S.C. § 9621(b) and (d), which list vague "factors" EPA should consider in making cleanup determinations. See also CLEAN SITES, IMPROVING REMEDY SELECTION: AN EXPLICIT AND INTERACTIVE PROCESS FOR THE SUPERFUND PROGRAM B-14 (1990) (identifying "[n]umerous problems associated with the criteria and the remedy selection process" including "inconsistency in decision-making, inconsistency in compliance with ARARs, lack of clear cleanup objectives, . . . inappropriate use of cost criterion, failure to implement permanent and treatment remedies, poor justification for selected remedies, and selection of unproven technologies"). This unanswered "how clean is clean" question, of course, is one of the significant causes of the brownfields problem. See generally William W. Buzbee, Remembering Repose: Voluntary Contamination Cleanup Approvals, Incentives, and the Costs of Interminable Liability, 80 MINN. L. REV. 35, 58-61 (1995) (discussing the limited substantive guidance provided by CERCLA and the implementing regulations in determining "how clean is clean" and how this leads to people being risk-averse in the purchase of contaminated properties); Joel B. Eisen, "Brownfields of Dreams?:" Challenges and Limits of Voluntary Cleanup Programs and Incentives, 1996 U. ILL. L. REV. 883, 907 (observing that "it is nearly impossible to determine in advance the required level or cost of a cleanup under CERCLA").

A similar dilemma occurs with respect to determining the point at which a CERCLA site is considered "dirty" in the first place and therefore a potential source of liability. This goes more to the quantum of evidence required to support regulatory action than to that required to rebut it, but the two measures overlap in this case. The statute alerts parties that a site can be the source of cleanup liability if there is a "release or a threatened release of a hazardous substance" that "causes the incurrence of response costs that are not inconsistent with the National Contingency Plan." § 9607(a).
233. See, e.g., Resource Conservation and Recovery Act, 42 U.S.C. § 6921(f) (2000). After twenty years of definitional effort (motivated by lawsuits), EPA has finally promulgated a rule (known to RCRA connoisseurs as the "mixed and derived from rule") that establishes the point at which a waste mixed or derived from a RCRA "listed hazardous waste" is no longer considered hazardous. U.S. Envtl. Protection Agency, Final Rule, Hazardous Waste Identification Rule ("HWIR"): Revisions to the Mixture and Derived-From Rules, 66 Fed. Reg. 27,266 (May 16, 2001) (codified at 40 C.F.R. pts. 261, 268). For an account of EPA's tortured effort to identify the appropriate rebuttal point, an effort which included multiple notices for comment and a negotiated rulemaking, see Christopher J. Urban, EPA's Hazardous Waste Identification Rule for Process Waste (HWIR-Waste) Gone Haywire, Again, 9 VILL. ENVTL. L.J. 99 (1998) (providing a detailed account of the "ten year controversy" surrounding the identification and regulation of hazardous wastes and spotlighting the general perception that the failure of EPA to provide rebuttal criteria for listed wastes led to costly over-regulation).
234. In the Clean Air Act, Congress gave EPA some general direction on determining the point at which listed toxins can be deleted from the list. See, e.g., 42 U.S.C. § 7412(b)(3)(C) (2000) (allowing EPA to delist a listed toxic air pollutant "upon a showing by the petitioner" "that there is adequate data on the health and environmental effects of the substance to determine that emissions . . . of the substance may not reasonably be anticipated to cause any adverse effects to the human health or adverse environmental effects"). EPA interprets this mandate to require scientific evidence that the listed substance does not cause adverse effects, a showing that is obviously difficult given the numerous residual uncertainties that plague our understanding of toxicity. In American Forest and Paper Ass'n v. EPA, 294 F.3d 113 (D.C. Cir. 2002), the D.C. Circuit affirmed EPA's refusal to delete a hazardous air pollutant absent compelling evidence that the substance will not cause harm. While these interpretations seem fully appropriate, they make the possibility of rebutting the presumption of "hazardousness" illusory except in the most extraordinary circumstances because they require proof of the negative (that is, proof that a substance will not cause any type of harm).
235. There is considerable variation among statutory mandates in their overarching directions for rebuttal determinations. At least one mandate, the Delaney Clause for color additives under the Food, Drug, and Cosmetic Act, does not allow for any rebuttal and requires protection based on only a single adverse study. See 21 U.S.C. § 379e(b)(5)(B) (2000); supra note 91. Other mandates provide latitude for rebuttal evidence, but seem to require that the rebuttal science be substantial enough to convince regulators "beyond a reasonable doubt" that no significant harm will occur. See, e.g., Safe Drinking Water Act, 42 U.S.C. § 300g-1(b)(4) (2000) (requiring that maximum drinking water contaminant levels be "set at the level at which no known or anticipated adverse effects on the health of persons occur and which allows an adequate margin of safety"); Clean Air Act, 42 U.S.C. § 7409(b)(1) (2000) (requiring that ambient air quality standards for criteria pollutants "protect the public health" "allowing an adequate margin of safety"). Still other statutes can be read to suggest that a presumptively protective regulation is rebutted when the regulator can conclude, by balancing the available science against remaining uncertainties, that harm is less than fifty percent likely. See, e.g., Federal Insecticide, Fungicide, and Rodenticide Act, 7 U.S.C. § 136a(c)(5)(D) (2000) (allowing pesticides to be registered only if the administrator finds that "when used in accordance with widespread and commonly recognized practice[, the pesticide] will not generally cause unreasonable adverse effects on the environment"); Toxic Substances Control Act, 15 U.S.C. 
§§ 2604(f)(1), 2605(a) (2000) (authorizing regulatory action on new and existing toxic substances "[i]f the Administrator finds that there is a reasonable basis to conclude that the manufacture, processing, distribution in commerce, use, or disposal of a chemical substance or mixture, or that any combination of such activities, presents or will present an unreasonable risk of injury to health or the environment").
236. For example, under at least three mandates (the CAA, the CWA, and RCRA), agencies struggled with the rebuttal problem until they finally adopted a technology-based standard to rebut the presumption of protection, an approach that Congress had not suggested in the authorizing statute. Two of the three efforts (under the CWA and RCRA) were upheld in the courts, and in one case, the courts seemed to encourage the result. See Hazardous Waste Treatment Council v. EPA, 886 F.2d 355, 363 (D.C. Cir. 1989) (holding that "EPA's catalog of the uncertainties inherent in the alternative approach using [health-based] screening levels [under the RCRA land ban provision] supports the reasonableness of its reliance on BDAT ["best demonstrated available technology"] instead"); John P. Dwyer, The Pathology of Symbolic Legislation, 17 ECOLOGY L.Q. 233, 251-57 (1990) (describing EPA's economic and technology-based approach to air toxins before the D.C. Circuit struck it down in the Vinyl Chloride Case, 824 F.2d 1146 (D.C. Cir. 1987) (en banc)); Ridgway M. Hall, Jr., The Evolution and Implementation of EPA's Regulatory Program to Control the Discharge of Toxic Pollutants to the Nation's Waters, 10 NAT. RESOURCES J. 507, 519-25 (1977) (discussing EPA's technology-based approach to regulating water toxins that resulted from a court-entered settlement to resolve litigation over EPA's inadequate implementation of the statutory mandate requiring CWA standards to be based on the quality of the receiving waters).

The court in the Vinyl Chloride Case, however, missed this rebuttal problem. It struck down EPA's attempt to come to terms with the problem (promulgating technology-based standards) and returned the impossible mandate to EPA with no guidance. In the words of the court,
[h]ad Congress intended that result [a zero standard for air toxins, as argued by NRDC,] it could very easily have said so by writing a statute that states that no level of emissions shall be allowed as to which there is any uncertainty. But Congress chose instead to deal with the pervasive nature of scientific uncertainty and the inherent limitations of scientific knowledge by vesting in the Administrator the discretion to deal with uncertainty in each case.
Vinyl Chloride, 824 F.2d at 1152; cf. Kevin L. Fast, Treating Uncertainty as Risk: The Next Step in the Evolution of Environmental Regulation, 26 Envtl. L. Rep. (Envtl. L. Inst.) 10,627, 10,627 (Dec. 1996) (arguing that "use of reference concentrations (RfCs) in [EPA's general methods for] risk assessment has provided EPA with a basis for attempting to justify health-based regulation even where no evidence of risk to public health exists, and thereby, to shift to the regulated community the burden of proving the absence of risk in order to avoid regulation" and urging the regulated community to oppose this practice wherever possible).

EPA has also gone to great lengths to address the brownfields problem that has grown out of the rebuttal problem under CERCLA, see supra note 231, a problem that Congress only recently addressed in its Brownfields Law, Pub. L. No. 107-118, 115 Stat. 2356 (2002), nearly a decade after the problem emerged. See generally Jonathan D. Weiss, The Clinton Administration's Brownfields Initiative, in BROWNFIELDS: A COMPREHENSIVE GUIDE TO REDEVELOPING CONTAMINATED PROPERTY 41 (2002) (discussing EPA initiatives). Additional examples of regulatory struggling are detailed in the subsections that follow. See also POWELL, supra note 15, at 224 (recounting an EPA drinking-water official's hand-wringing over the appropriate rebuttal for arsenic in drinking water and the official's admission that what is unresolved is "'what's adequate data for departing from the default . . . . We have resolved to pursue additional research but are holding off the decision as to what constitutes enough.'").
237. This was the issue in Chlorine Chemistry v. EPA, 206 F.3d 1286 (D.C. Cir. 2000). See supra notes 31, 33; infra notes 268-69; cf. Frank B. Cross, Incorporating Hormesis in Risk Regulation, 30 Envtl. L. Rep. (Envtl. L. Inst.) 10,778 (Sept. 2000) (arguing that the decision in Chlorine Chemistry is a "first step toward the incorporation of hormesis [a J-shaped dose-response curve] into carcinogen regulation").
238. See, e.g., U.S. Envtl. Protection Agency, Guidelines for Carcinogen Risk Assessment, 51 Fed. Reg. 33,992, 33,999 (Sept. 24, 1986) (providing guidance on how to conduct "weight of the evidence" determinations on classifying carcinogens and identifying several quality criteria that must affect the assessment, such as whether there is an "identified bias that could explain the association"); id. at 34,002 (responding to commenters' concerns that EPA should consider "all available data" by clarifying its guidelines to state this explicitly); cf. McGarity, supra note 127, at 146 (discussing how FQPA directs EPA to "consider all 'available information' on the susceptibilities of infants and children to pesticide residues" and that "[a]n imperfect study is 'available information,' even if not one hundred percent 'reliable'").
239. A recent article critiques EPA's use of zero default standards because they do not take full advantage of advancements in science. Put in terms of the rebuttal problem, the authors are in part arguing that EPA (a) should require less definitive evidence before it rebuts a protective mandate, and (b) should integrate the body of science available into its rebuttal assessment. See Barnard et al., supra note 207, at 10,873 ("EPA should [as part of the rebuttal determination] conduct an uncertainty analysis and disclose the impact on the risk assessment of both the default and plausible alternative assumptions in each case."). The authors impose their own, more risk-tolerant values on the decisionmaking process, namely by suggesting that protective standards should be rebutted with less evidence; arguing that default standards, including precautionary ones that are consistent with a mandate, should be scientifically justified; implying that risk managers should approach these decisions as predominantly technical and science-based; and implying that rebuttal determinations should take place on a substance-specific basis. However, they do spotlight the scientific advantages of a more inclusive, weight-of-the-evidence approach to assessing risks before inserting these critical value choices. Id.
240. See, e.g., SHAPIRO & GLICKSMAN, supra note 89, at 70-71 (discussing the raised evidentiary burden imposed on agencies by courts in some cases and the resulting paralysis of agency decisionmaking).
241. Indus. Union Dep't v. Am. Petroleum Inst., 448 U.S. 607 (1980); see, e.g., Gail Charnley & E. Donald Elliott, Risk Versus Precaution: Environmental Law and Public Health Protection, 32 Envtl. L. Rep. (Envtl. L. Inst.) 10,363, 10,364 (Mar. 2002) (observing that while the "United States has had a long history of applying the precautionary principle in regulation" it "has moved gradually away from doing so as we learn more about risk assessment and its underlying scientific basis," and citing Benzene as one of the turning points).
242. For example, Justice Stevens noted with concern that the "Agency relied on the same policy view it had stated at the outset, namely, that, in the absence of clear evidence to the contrary, it must be assumed that no safe level exists for exposure to a carcinogen. The Agency also reached the entirely predictable conclusion that industry has not carried its concededly impossible burden of proving that a safe level of exposure exists for benzene." 448 U.S. at 635 n.39 (citations omitted). The Agency had in fact developed a presumptive standard of zero for workplace toxins and shifted the burden to industry to rebut this presumptive zero standard. See infra note 265.
243. 448 U.S. at 653 (requiring that "the Agency show, on the basis of substantial evidence, that it is at least more likely than not that long-term exposure to 10 ppm of benzene presents a significant risk of material health impairment").
244. See, e.g., Howard A. Latin, The Feasibility of Occupational Health Standards: An Essay on Legal Decisionmaking Under Uncertainty, 78 NW. U. L. REV. 583, 583 (1983) (arguing that courts lack an adequate conceptual framework for dealing with factual uncertainties).
245. Some of the subsequent cases that have similarly raised the burden of proof to justify a standard-beyond what might seem justified under the authorizing mandate-cite to the Benzene Case for their authority. See, e.g., Corrosion Proof Fittings v. EPA, 947 F.2d 1201, 1214 (5th Cir. 1991) (invalidating EPA's ban of asbestos under TSCA because (citing Benzene) the agency has the burden of proving banned products pose an unreasonable risk to the public, and EPA did not do a thorough enough assessment (with evidence)); Gulf S. Insulation v. United States Consumer Prod. Safety Comm'n, 701 F.2d 1137, 1143, 1146 (5th Cir. 1983) (invalidating a CPSC ban of formaldehyde foam insulation in part because the agency had the burden to show that the product is "more likely than not" to present a serious risk of cancer, but here the agency relied on a single study, and (citing Benzene) "it is not good science to rely on a single experiment. . . . To make precise estimates, precise data are required."); see also Abraham & Merrill, supra note 74, at 99 (describing the Gulf South opinion as taking aim at the factual basis of the Commission's decision, rejecting scientific limitations that were accepted in the scientific community, and "pick[ing] holes" in the data the Commission used to determine exposure).
246. This appeared to be one of the explicit goals of the regulatory reform legislation proposed in the 104th Congress. See supra note 204; Benenson, supra note 204, at 681-82 (quoting House Rules Committee Chairman Gerald Solomon as stating that "[f]or years, business and industry have been forced to jump through hoops to satisfy regulators in the bureaucracy. Well, if this legislation [H.R. 9] becomes law, we are going to turn that around."). See generally Celia Campbell-Mohn & John Applegate, Federal Risk Legislation: Some Guidelines, 23 HARV. ENVTL. L. REV. 93, 105-07 & app. 2 (1999) (observing that recent bills proposing regulatory reform would raise the agencies' evidentiary burden in conducting risk assessments).
247. See, e.g., S. 981, 105th Cong. § 623(c)(3) (1998) (requiring agencies to make net cost-benefit determinations on final major regulations where possible); H.R. 9, 104th Cong. § 422(a)(2) (1995) (requiring as a prerequisite to "major" regulation that the agency demonstrate that the benefits of the regulation "justify, and . . . [are] reasonably related to the cost of implementing and complying with the regulation"); see also H.R. 9 § 414(b)(1) (agency must include discussion of conflicts in existing studies and data); § 414(b)(2) (agency must present list of plausible assumptions, inferences, and models, and explain bases for policy judgments); § 415(1)(B) (agency must provide statement of reasonable range of scientific uncertainties); § 415(3) (agency must compare the risk in question to similar risks, such as skiing and driving a car); § 415(4) (agency must include analyses of substitute risks-risks of products that could be substituted for the risk at issue).
248. An argument can be made that the executive orders requiring regulatory analysis, especially President Reagan's Executive Order 12,291, raise agencies' scientific burden of proof indirectly by requiring them to prepare an extended cost-benefit analysis (which requires considerable scientific information) as a prerequisite to regulation. See Exec. Order No. 12,291, 3 C.F.R. 127 (1981) (superseded by Exec. Order No. 12,866 § 11, 3 C.F.R. 638, 649 (1993)); see also Atlantic States Legal Foundation, Inc. v. Eastman Kodak, 12 F.3d 353, 358 (2d Cir. 1993) (recounting EPA's CWA enforcement position, which allows the release (even in large quantities) of toxics not listed in a Clean Water Act permit).
249. The best examples of this executive decision to raise an agency's burden of proof occurred during the Reagan Administration. There are a number of documented examples of high-level officials attempting to block or halt protective regulations. In each case, the decision was portrayed as based in the principles of "good science," which, according to the Administration, necessitated "hard proof of damage to health" before toxic materials could be regulated. See JONATHAN LASH ET AL., A SEASON OF SPOILS: THE REAGAN ADMINISTRATION'S ATTACK ON THE ENVIRONMENT 131 (1984); see also id. at 149 ("Scientists critical of the shift to Good Science under Reagan called it a 'covert' attempt to radically revise and soften regulations."); Howard Latin, Regulatory Failure, Administrative Incentives, and the New Clean Air Act, 21 ENVTL. L. 1647, 1662 & n.40 (1991) ("[T]here is abundant evidence that administrators [of EPA under Reagan] frequently chose to 'study' uncertain issues as a way to avoid resolving them."); Frederica Perera & Catherine Petito, Formaldehyde: A Question of Cancer Policy?, 216 SCIENCE 1285, 1290 (1982) ("[I]t appears that EPA [under Reagan] is informally revising its cancer policy to decrease reliance on animal studies-a step that could have the effect of substantially delaying or indeed barring altogether protective action on substances such as formaldehyde, pending the development of positive epidemiological data.").
250. See supra Parts III.C, IV.B.1.
251. See BREYER, supra note 203, at 11-20, 51; FRANK B. CROSS, ENVIRONMENTALLY INDUCED CANCER AND THE LAW: RISKS, REGULATION AND VICTIM COMPENSATION 144-46 (1989) (discussing the problems that result from over- and under-regulation); MENDELOFF, supra note 214, at 2-3.
252. Cost-benefit requirements could also be characterized as an attempt to raise an agency's burden of proof to justify regulation. See supra Part V.B.1. Because cost-benefit provides an endpoint for regulation, it can also be viewed as a means for determining when a protective presumption has been rebutted. Essentially, then, cost-benefit provides two overlapping ways to undercut protective regulation-first by raising the agency's evidentiary burden and second by rebutting the resulting protective standard without considering the limits of information or who is best suited to produce it.
253. See, e.g., David M. Driesen, The Societal Cost of Environmental Regulation: Beyond Administrative Cost-Benefit Analysis, 24 ECOLOGY L.Q. 545, 558 (1997) (citations omitted) ("[B]ecause environmental and public health benefits are notoriously difficult to quantify, an administrative agency will tend to undervalue them in a [cost-benefit analysis ("CBA")] process that requires quantification. 'Soft' variables tend to get lost in the equation.").
254. Rather than rebutting a protective standard with scientific evidence demonstrating that a substance is less hazardous than presumed, a quantitative (and even qualitative) assessment of the benefits and costs of a regulatory action is based on information available at the time, which is then generally monetized using a number of controversial assumptions and methodologies. See, e.g., LISA HEINZERLING & FRANK ACKERMAN, PRICING THE PRICELESS: COST-BENEFIT ANALYSIS OF ENVIRONMENTAL PROTECTION 2 (2002), available at ("Many benefits of public health and environmental protection have not been quantified and cannot easily be quantified . . . . Even when the data gaps are supposedly acknowledged, public discussion tends to focus on the misleading numeric values produced by cost-benefit analysis, while relevant but non-monetized factors are simply ignored."); SHAPIRO & GLICKSMAN, supra note 89, at 103 (observing how cost-benefit studies done by OMB drop out the non-quantified environmental benefits and noting how critics of regulation tend to ignore this fact); Lisa Heinzerling, Regulatory Costs of Mythic Proportions, 107 YALE L. J. 1981 (1998) (identifying the assumptions and methodological choices made in an OMB table that assesses the cost-effectiveness of various public health and safety regulations).
255. See, e.g., Thomas O. McGarity, A Cost-Benefit State, 50 ADMIN. L. REV. 7, 58 (1998) ("When information or values arise that cannot easily be factored into the benefit models, the modelers often simply ignore them . . . . [N]eglecting 'soft' considerations . . . does bias the analysis against regulatory intervention, because the cost side of the equation implicates fewer 'soft' considerations than the benefits side."). Not surprisingly, cost-benefit estimates are used to suggest that protective standards should be less stringent because of their costs to society. See, e.g., Driesen, supra note 253, at 604 ("CBA will tend to produce lower benefit valuations than those of consumers, overestimate costs, and cause agencies to make very few decisions in a world of serious environmental problems from a variety of sources.").
256. See, e.g., Gerald D. Stedge, Abt Associates, Inc., Arsenic in Drinking Water Rule Economic Analysis (prepared for EPA Office of Ground Water and Drinking Water, EPA 815-R-00-026) 1-4, at Exhibit 1-1 (Dec. 2000), available at (listing and ignoring in cost-benefit analysis a number of non-quantifiable potential adverse effects of arsenic, including skin cancer; kidney cancer; cancer of the nasal passages; liver cancer; prostate cancer; and cardiovascular, pulmonary, immunological, neurological, endocrine, and reproductive effects). In its latest Draft Cost-Benefit Report, for example, OMB omitted qualitative costs and benefits from most of the cost-benefit tables. See, e.g., OMB Draft Cost-Benefit Report, supra note 37, at tbls. 5, 6, 14 (assigning total dollar figures to the benefits and costs of rules when in some rulemakings the agency explicitly indicated that it was only able to quantify some of the benefits and costs); id. at 15,308 tbl. 13 (listing the benefits of paperwork requirements as zero even though OIRA concedes in the text that "[a]t present, it is not feasible to estimate the value of annual societal benefits of the information the government collects from the public"). In the only table in which OIRA provided an indication that not all costs and benefits had been quantified, Table 7, it listed qualitative costs and benefits not under the columns headed "costs" or "benefits," but under the "other information" column. Id. at tbl. 7.
257. Indeed, the less scientific information available to enable quantification of the risks posed by a substance, the more likely cost-benefit approaches will suggest standards that are not in fact cost-benefit-balanced because they omit many possible harms. Cost-benefit analysis thus might even discourage the production of scientific information by the regulated community since no information translates to less regulation.
258. See generally John S. Applegate, The Perils of Unreasonable Risk: Information Regulatory Policy and Toxic Substances Control, 91 COLUM. L. REV. 261, 269 (1991) (recounting how FIFRA and TSCA both target "unreasonable" adverse risks, and how the legislative history suggests that this requires balancing costs and benefits) (citations omitted); see also SHAPIRO & GLICKSMAN, supra note 89, ch. 3, at 2 (observing, based on survey of health and environmental mandates, that "Congress in choosing 'statutory standards' has almost universally rejected a cost-benefit test as the basis for setting the level of regulation").
259. See Toxic Substances Control Act § 6(a), 15 U.S.C. § 2605(a) (2000); infra note 269 (discussing the interpretation of § 6(a) in Corrosion Proof).
260. Some form of cost-benefit accounting has been adopted in resolving the maximum contaminant goals under the SDWA, 42 U.S.C. § 300g-1(b)(3)(C) (2000); FIFRA, 7 U.S.C. §§ 136a(c)(5)(D), 136d(b) (2000); and the Consumer Product Safety Act, 15 U.S.C. § 2058(f)(3)(E) (2000).
261. See, e.g., Exec. Order No. 12,291, 3 C.F.R. 127 (1981) (cost-benefit requirements issued by President Reagan); Exec. Order No. 12,866, 3 C.F.R. 638 (1993) (cost-benefit requirements issued by President Clinton).
262. Under President George W. Bush, the Office of Information and Regulatory Affairs ("OIRA") in OMB has developed several initiatives that endeavor to review agency activities based in large part on the results of cost-benefit accountings. See generally OMB, Draft Cost-Benefit Report, supra note 37, at 15,020 (describing OIRA's effort to shift from being a "reactive" to a "proactive" force "in suggesting regulatory priorities for agency consideration"). These initiatives include targeting existing rules that "should be rescinded or changed to increase net benefits by either reducing costs or increasing benefits," id. at 15,022, engaging in the same sort of activity with respect to "problematic" agency guidelines that have not complied with process requirements like cost-benefit accountings, id. at 15,034-35, and sending prompt letters to agencies when OIRA believes that an agency is not prioritizing a particular, "beneficial" regulatory activity as highly as it should, id. at 15,020.

The activities of OMB under President Reagan were arguably even more vigorous. For one account (among many) of OMB's use of cost-benefit analysis under President Reagan to reorder agency priorities and halt protective regulations, see Erik D. Olson, The Quiet Shift of Power: Office of Management & Budget Supervision of Environmental Protection Agency Rule-making Under Executive Order 12,291, 4 VA. J. NAT. RESOURCES L. 1, 49-55 (1984).
263. The Supreme Court has banned the use of cost considerations for at least one protective mandate, although that ruling has not yet been extended to other protective statutes. See Whitman v. Am. Trucking Ass'ns, 531 U.S. 457, 471 (2001) (holding that "[t]he text of § 109(b) [of the Clean Air Act], interpreted in its statutory and historical context and with appreciation for its importance to the CAA as a whole, unambiguously bars cost considerations from the NAAQS-setting process, and thus ends the matter for us as well as the EPA").
264. See, e.g., McGarity, supra note 127, at 198 ("This evolution from generic to case-by-case policymaking has nearly always come in response to industry criticism of protective policies that the agency proposed to implement generically.") (citations omitted). See generally Joel Yellin, Science, Technology, and Administrative Government: Institutional Designs for Environmental Decisionmaking, 92 YALE L.J. 1300, 1328 (1983) (arguing that public participation is effectively precluded when administrators use "their discretion gradually to draw political decisions under the cloak of expertise").
265. The Benzene Case, 448 U.S. 607 (1980). Before the policy was effectively overruled by the Benzene Case, OSHA had developed a proposed Generic Cancer Policy to guide its decisionmaking on workplace toxins. That policy placed the burden of proof on industry to rebut a presumptive zero level of exposure intended to ensure safety. See Occupational Safety and Health Admin., Proposed Rule on the Identification, Classification, and Regulation of Toxic Substances Posing a Potential Carcinogenic Risk, 42 Fed. Reg. 54,148 (1977).
266. For a broader argument that judicial review discourages agencies from developing universal principles to guide their decisions and in some cases causes agencies to avoid rulemakings altogether, see JERRY L. MASHAW & DAVID L. HARFST, THE STRUGGLE FOR AUTO SAFETY 225 (1990) (observing that judicial second-guessing of agency policy choices played a significant role in causing NHTSA to abandon efforts to set systematic policy and to resort instead to ad hoc recalls of automobile defects, with "[t]he result of judicial requirements for comprehensive rationality [being] a general suppression of the use of rules"); McGarity, supra note 127, at 219 ("If EPA can expect a lawsuit every time it engages in macro-policymaking in a generic rule or policy statement, it may engage in transparent generic policymaking less frequently and move regulatory policymaking to lower levels within the agency."); Pierce, supra note 75, at 313 (arguing that the erratic judicial review of agency policy decisions might cause "rule-making as a vehicle for making [explicit] policy decisions [to] . . . soon be relegated to a chapter in a legal history book"); see also BREYER, supra note 203, at 58 (observing that after the Fifth Circuit overturned the CPSC's formaldehyde rule in Gulf South, "the Commission could achieve a comparable result simply by recalling formaldehyde-containing products, without benefit of an agency rule"). For a general discussion of a related phenomenon emerging from the judicial review of agency guidelines, see Peter L. Strauss, Publication Rules in the Rule-making Spectrum: Assuring Proper Respect for an Essential Element, 53 ADMIN. L. REV. 803, 850 (2001) (expressing concern that courts have developed rules that "discourage advice-giving or provoke agencies into accompanying their advice with warnings that it cannot be relied upon").
267. See generally JASANOFF, supra note 40, at 49 (observing that the effect of judicial review of agencies' scientific findings "appears in hindsight to have been founded on an overly simplified view of both science and policy").
268. 206 F.3d 1286 (D.C. Cir. 2000). Chlorine Chemistry substantially complicates EPA's effort to develop a coherent approach to establishing rebuttal criteria by implying that certain "best" evidence, standing alone and with no requisite "quantum" or weight-of-the-evidence approach, triggers a rebuttal. In Chlorine Chemistry, the D.C. Circuit reversed EPA's interim decision not to alter a zero MCLG for chloroform under the SDWA. Id. The agency decided to delay revising the drinking water standard for chloroform, in spite of its own published recognition (based on an expert advisory panel consensus) that chloroform likely had a threshold level below which it would not cause cancer, because EPA had not completed its peer-review process, had not completed its own review of the science, and was operating under a protectionist regulatory program. Id. at 1288, 1290-91. The D.C. Circuit, without incorporating into its reasoning the undisputed fact that the agency's statutory mandate was a protective one, or deferring to the agency's interpretation of the quantum of evidence needed to rebut protective standards under the mandate, held that the agency was not using the "best available science" and reversed. Id. at 1290. The court held that the evidence was so compelling that it rebutted the zero standard. Id. But the court did not explain why the agency's insistence on completing its own scientific peer-review process was unreasonable given the protectionist mandate that requires the agency to err on the side of protecting public health. See, e.g., 42 U.S.C. § 300g-1(b)(4)(A) (2000) (requiring the agency to set MCLG standards at "the level at which no known or anticipated adverse effects on the health of persons occur and which allows an adequate margin of safety"). Indeed, the court's refusal to acknowledge the protectionist mandate effectively reads protection out of the agency's standard setting for MCLGs. Cf. 
Weller & Graham, supra note 29, at 10,569 (arguing that "Daubert suggests an additional basis for holding [EPA's zero MCLG for chlorine] unlawful under the APA, Libas, or both. There does not appear to have been any 'reliable' evidence that supported the rule."). Because of its focus on the quality of the evidence (instead of the point at which that quality evidence is sufficiently persuasive to rebut a protective standard), the D.C. Circuit's opinion is likely to lead to an even more inconsistent, case-by-case approach to determination of the point at which a protective standard has been rebutted. Cf. Latin, supra note 244, at 603 n.43 (discussing how the Benzene Court failed to appreciate the relevant statutory interests, the particular factual situation, or the desirable result in the face of uncertainty).
269. The Fifth Circuit's opinion in Corrosion Proof Fittings v. EPA, 947 F.2d 1201 (5th Cir. 1991), provides a similar, but less obvious reversal of EPA's rebuttal determinations under TSCA. In Corrosion Proof, the court raised a number of evidentiary barriers to the agency's effort to ban asbestos under the Act. Id. These increased evidentiary requirements had the effect of collapsing the evidence the agency needed to justify a regulatory activity with the evidence the agency needed to rebut that activity. The court also insisted that the agency provide the equivalent of definitive quantitative estimates of the cost and benefit of a ban before taking precautionary action. See, e.g., id. at 1219 ("Unquantified benefits can, at times, permissibly tip the balance in close cases. They cannot, however, be used to effect a wholesale shift on the balance beam. Such a use makes mockery of the requirements of TSCA that the EPA weigh the cost of its actions before it chooses the least burdensome alternative."). EPA's effort to act on a combination of qualitative and quantitative information regarding the risks of asbestos and effectively shift the burden for the remaining rebuttal to the manufacturers (who presumably had superior information regarding additional risk information, the costs of the ban, and the availability of substitutes) was invalidated by the court. EPA had already dedicated considerable effort to the ban decision. See, e.g., JOHN S. APPLEGATE ET AL., THE REGULATION OF TOXIC SUBSTANCES AND HAZARDOUS WASTES 635 (2000) ("[EPA's] decision was preceded by a data collection rule . . . , ten years of data analysis, 22 days of public hearings, 13,000 pages of comments from more than 250 parties, and a 45,000 page record. (And all of this for a known human carcinogen!).").

Yet, even if injecting this added confusion into the agency's approach to rebutting regulatory activities under TSCA is legally accurate or at least prudent, it is flatly inconsistent with the appellate courts' interpretation of the rebuttal approach under the virtually identical mandate of FIFRA. Compare section 6(b) of FIFRA, 7 U.S.C. § 136d(b) (2000) (allowing the cancellation of a pesticide "[i]f it appears to the Administrator that [the] pesticide . . . when used in accordance with widespread and commonly recognized practice, generally causes unreasonable adverse effects on the environment"), with section 6(a) of TSCA, 15 U.S.C. § 2605(a) (2000) (allowing restrictions on the use and distribution of toxic substances "[i]f the Administrator finds that there is a reasonable basis to conclude that the manufacturer, . . . of a chemical substance or mixture . . . presents or will present an unreasonable risk of injury to health or the environment"). Yet, according to the Fifth Circuit, the agency must produce all of the evidence to support a ban under TSCA. Corrosion Proof, 947 F.2d at 1201. But according to the D.C. Circuit, FIFRA shifts the burden to the manufacturer to rebut a presumption of protection for a cancellation based on limited evidence. See Envtl. Defense Fund, Inc. v. EPA, 548 F.2d 998, 1004-05 (D.C. Cir. 1977) ("Once risk is shown, the responsibility to demonstrate that the benefits outweigh the risks is upon the proponents of continued registration. Conversely, the statute places a 'heavy burden' of explanation on an Administrator who decides to permit the continued use of a chemical known to produce cancer in experimental animals."), cert. denied, 431 U.S. 925 (1977).
270. See, e.g., GREENWOOD, supra note 8, at 140 (discussing how individual agency officials, as well as an entire agency office, might "prefer minimal external interference with its activities" and resent other organizations' participation in reviewing and clearing its proposed standards).
271. See, e.g., Kweit & Kweit, supra note 186, at 21 (arguing that "internal logic, quantification, and unfamiliarity [with the underlying analytic techniques] tend to create an aura of inviolability to policy proposals, thus helping bureaucrats to defend policy turf during periods of threat") (citations omitted).
272. But see infra note 287 and accompanying text (describing circumstances under which EPA has developed more coherent and expeditious approaches to making rebuttal determinations in instances where the need to make the decisions is likely to arise frequently and is a constant source of controversy, mostly from industry).
273. See, e.g., CLEAN SITES, supra note 232, at B-14 (discussing inconsistencies in cleanup decisions under CERCLA from site to site); see also U.S. Envtl. Protection Agency, Corrective Action for Solid Waste Management Units (SWMUs) at Hazardous Waste Management Facilities, 55 Fed. Reg. 30,798 (July 27, 1990) (stating that cleanup standards under RCRA will be determined on a site-by-site basis).
274. Some studies have suggested that the federal government is bringing environmental enforcement actions and making cleanup decisions under Superfund in a discriminatory manner. See, e.g., Robert D. Bullard, Environmental Justice for ALL, in UNEQUAL PROTECTION: ENVIRONMENTAL JUSTICE AND COMMUNITIES OF COLOR 7-11 (Robert D. Bullard ed., 1994); Marianne Lavelle & Marcia Coyle, Unequal Protection, the Racial Divide in Environmental Law, A Special Investigation, NAT'L L.J., Sept. 21, 1992, at S1, S2.
275. See McGarity, supra note 127, at 147-202 (discussing how ad hoc staff assessments lack any coherent substantive principles for determining when the evidence is sufficient to rebut a protective standard, are portrayed as individualized technical judgments devoid of policy considerations, and work to "erode" the principles set forth in the authorizing mandate, in part by flipping the burden of producing evidence and proving the need for protection back to EPA).
276. The variation among state water quality standards for at least some toxins suggests very different approaches to determining the point at which a presumption of a hazard has been rebutted. See, e.g., Oliver A. Houck, TMDLs IV: The Final Frontier, 29 Envtl. L. Rep. (Envtl. L. Inst.) 10,469, 10,477-78 (Aug. 1999) (describing dozens of sources of variability between states' water quality standards and recounting the general concerns of commentators regarding the many "problems" inherent in ambient water quality criteria).
277. 175 F.3d 1027, 1034 (D.C. Cir. 1999). The court remanded the standards, in part because EPA lacked any "intelligible principle" for setting the standards at a particular level, a deficiency that the D.C. Circuit held violated the nondelegation doctrine. See id. (holding that EPA "lack[ed] any determinate criteria for drawing lines. It has failed to state intelligibly how much is too much."). The Supreme Court held that EPA did not need to have these "intelligible principles" as a condition for its ambient air standards, overruling the D.C. Circuit's invalidation of the standards based on the nondelegation doctrine. See Whitman v. Am. Trucking Ass'ns, 531 U.S. 457, 475 (2001) ("But even in sweeping regulatory schemes we have never demanded, as the Court of Appeals did here, that statutes provide a 'determinate criterion' for saying 'how much [of the regulated harm] is too much.'"). The Supreme Court did not disagree that EPA's standard-setting approach might lack coherent criteria, but it concluded that the decision to proceed in a case-by-case fashion lies within the deference afforded the agency and does not violate the nondelegation doctrine. Id. at 914 ("'[A] certain degree of discretion, and thus of lawmaking, inheres in most executive or judicial action.' . . . Section 109(b)(1) of the CAA, which to repeat we interpret as requiring EPA to set air quality standards at the level that is 'requisite'-that is, not lower or higher than is necessary-to protect the public health with an adequate margin of safety, fits comfortably within the scope of discretion permitted by our precedent.") (citations omitted).
278. See, e.g., GENERAL ACCOUNTING OFFICE, TOXIC SUBSTANCES: EPA'S CHEMICAL TESTING PROGRAM HAS MADE LITTLE PROGRESS 17-21 (1990) (reporting alarming delays in EPA's issuance of test rules requiring manufacturers to generate additional safety data on their products, and noting that as of 1989, "EPA had received complete test data for only six chemicals and had not finished assessing any of them for possible further action").
279. See generally John S. Applegate, The Precautionary Preference: An American Perspective, 6 HUM. ECOL. RISK ASSESS. 413, 437-38 (2000) (discussing the increased and varying use of formal and informal negotiation to set standards and reach agreement on protective standards and activities, some of which barter (implicitly) over the rebuttal decision in a specific case). At the same time, rebuttal principles are often lost or given up in this bartering approach to determining the appropriate level of a protective standard. See, e.g., William Funk, When Smoke Gets in Your Eyes: Regulatory Negotiation and the Public Interest-EPA's Woodstove Standards, 18 ENVTL. L. 55, 87-89 (1987) (observing based on a detailed case study that despite the implication that a negotiated technology-based standard for woodstoves was based on facts, the standard in fact was the result of barter and compromise and the reviewing public was "left not knowing" what role facts and compromise played in the final standard).
280. See McGarity, supra note 127.
281. There are a number of superb, in-depth studies of the public health and environmental agencies that generally provide detailed accounts of the agencies' implementation of individual protective mandates or regulatory programs. In fact, this existing research might be sufficient to derive a relatively rigorous sense of how agencies approach these rebuttal determinations (even though this was not the focus of the research), without the need for much additional research into agency decisionmaking. For some of the best in this genre, see, for example, JOHN D. GRAHAM ET AL., IN SEARCH OF SAFETY: CHEMICALS AND CANCER RISK (1988); GREENWOOD, supra note 8; JASANOFF, supra note 40; MARC K. LANDY ET AL., THE ENVIRONMENTAL PROTECTION AGENCY: ASKING THE WRONG QUESTIONS (1990); MASHAW & HARFST, supra note 266; THOMAS O. MCGARITY & SIDNEY SHAPIRO, WORKERS AT RISK: THE FAILED PROMISE OF THE OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION (1993); MELNICK, supra note 73; POWELL, supra note 15; Applegate, supra note 258; Richard A. Merrill, CPSC Regulation of Cancer Risks in Consumer Products: 1972-1981, 67 VA. L. REV. 1261 (1981).
282. Cf. SMITH, supra note 52, at 101 (observing in a case study on the Department of Energy that in the context of science advisory boards, the absence of a clear set of policy instructions for the scientists' deliberations proved fatal to the board's success).
283. In instances where statutes might be read similarly, agencies should also draw comparisons. See, e.g., supra note 269 (discussing how the courts have interpreted facially similar provisions under FIFRA and TSCA in opposite ways). Professors Shapiro and Glicksman provide a model for making these comparisons by drawing out the general types of evidentiary requirements needed to justify and then rebut protective mandates. See SHAPIRO & GLICKSMAN, supra note 89, ch. 3. Their table, however, is based on an amalgamated interpretation of the statutory mandates that seems to adopt as the dominant interpretation of these mandates an appellate court pronouncement (rather than an agency or "outside" interpretation), to the extent one exists. What is still needed are the agencies' interpretations of these mandates. In fact, an agency might ultimately reject a district or even an appellate court's interpretation of its mandate and adopt a different interpretation in the course of implementing a statute.
284. See also McGarity, supra note 127, at 204-07 (discussing, in a qualified way, the virtues of "[g]eneric resolution of science/policy questions" rather than allowing the agency to proceed on a "case by case basis").
286. See, e.g., U.S. Envtl. Protection Agency, Addition of Certain Chemicals; Toxic Chemical Release Reporting; Community Right-to-Know, 59 Fed. Reg. 1788, 1791 (proposed Jan. 12, 1994) ("To ascertain whether there is sufficient or insufficient evidence to determine that the statutory criteria are met for listing a chemical, EPA conducts a hazard assessment on the chemical [in accordance with the Guidelines] and determines based on the weight-of-the-evidence, whether the chemical can reasonably be anticipated to cause any of the adverse effects specified in EPCRA section 313(d)(2).").
287. Although it appears too early to evaluate the workability of this approach, EPA has developed a computer program to assist in determining whether there is sufficient evidence to allow a hazardous waste to be de-listed under RCRA. EPA's computer program apparently combines information on the toxicity of individual constituents with exposure assessments and fate transport from surface impoundments or landfills. It has been used by a number of EPA regions and appears to be well received. See, e.g., David R. Case, RCRA: Solid and Hazardous Wastes Issues and Cases, SG032 ALI-ABA 455, 459-60 (2001); see also U.S. Envtl. Protection Agency, RCRA Risk Assessment Program, at (last visited Sept. 11, 2003).
288. Cf. SUBCOMM. ON ARSENIC IN DRINKING WATER, NAT'L RESEARCH COUNCIL, ARSENIC IN DRINKING WATER 3 (1999) ("Additional epidemiological evaluations are needed to characterize the dose-response relationship for arsenic-associated cancer and noncancer end points, especially at low doses.").
289. For concrete suggestions on how this selection of balanced committee members might be accomplished, see JASANOFF, supra note 40, at 243-45; SMITH, supra note 52, at 197-200.
290. See supra notes 287-88 (suggesting how the development of a list of studies could serve as a partial reform of the rebuttal problem).
291. Under a statute like FQPA, for example, it is possible that generic rebuttal criteria will not provide enough guidance or oversight to agency officials screening the hundreds of pesticide residue tolerances. See, e.g., McGarity, supra note 127, at 207-08 (discussing the difficulties with making generic decisions in pesticide tolerance determinations).
292. For example, the agency will identify the amount and types of research needed to rebut a protective standard. It could be "more than a reasonable doubt" (as the Clean Air Act's NAAQS provision and the Safe Drinking Water Act's provision on MCLGs imply), or simply an equal balancing of pros and cons (as TSCA and FIFRA might suggest). See supra note 231.
293. See supra note 70 and accompanying text.
294. See, e.g., JASANOFF, supra note 40, at 242 ("Delegating a sensitive issue to an advisory committee remains one of the most politically acceptable options for regulatory agencies, even when the underlying motive is to transfer a fundamentally political problem to the seemingly objective arena of science. FDA has frequently been criticized for using its committees in this manner."); POWELL, supra note 15, at 148 (observing that "[w]hether unconsciously or wittingly, policymakers often seek legitimacy or political cover for their judgments from science or scientists"); Wagner, supra note 7 (documenting this problem in the regulation of toxic substances).
295. If one of the factors affecting participation is the cost of information, then the more opaque and complex a regulatory decision, the more costly it is for interested parties to participate. Conversely, when the underlying value choices in a regulation are easy to understand and access, one would expect a higher incidence of public comment and challenge as long as the value choices are moderately controversial. See, e.g., supra Part IV.B.2.
296. See Administrative Procedure Act § 10(c), 5 U.S.C. § 704 (2000) (providing for judicial review of "final agency action").
297. "Not inconsistent" shifts the burden of proof to the challenger (and away from the government) to prove that the agency promulgated a rule that is inconsistent with its guidelines. Cf. Comprehensive Environmental Response, Compensation, and Liability Act, 42 U.S.C. § 9607(a)(4)(A) (2000) (allowing recovery of response costs incurred by the United States that are "not inconsistent with the national contingency plan").
298. Foreclosing the ability of challengers (or the courts) to conduct repeat litigation would provide agencies, perhaps for the first time, with incentives to develop generic criteria and policies to support protective decisionmaking. This reform would also reduce problems associated with regulatory delay if the courts follow legislative directions and afford agency policies deference. This is obviously aspirational and might not materialize. See, e.g., Pierce, supra note 75, at 301-02 (arguing that judges on the D.C. Circuit might be substituting their own interpretations of ambiguous statutes for agencies' and randomly reversing agency policymaking in rulemakings); Richard L. Revesz, Environmental Regulation, Ideology, and the D.C. Circuit, 83 VA. L. REV. 171, 171 (1997) (concluding that a judge's ideology does impact judicial decisionmaking, especially in reviewing agency rulemakings under procedural requirements).
299. Whether the benefits that accrue to regulated entities from establishing more realistic and predictable criteria for inclusion of risks and for the rebuttal of protective policies are sufficient to offset the costs associated with a likely increase in the number of substances that would be regulated and the greater involvement of a more attentive public is difficult to predict at this preliminary stage. See, e.g., supra Part IV.B.2.
300. Congress could also step in and mandate on its own the precise point at which a protective standard has been rebutted. Because of advancements in science over time, however, more detailed specification of rebuttal criteria by Congress might be too inflexible. Cf. Richard A. Merrill, Congress as Scientists, ENVTL. F., Jan.-Feb. 1994, at 20-24 (criticizing the Delaney Clause because Congress did not anticipate or provide the agency with discretion to adjust the mandate around critical scientific developments).

Less directly, but perhaps more realistically, a constellation of stakeholders with differing interests might appreciate the benefits of clearer rebuttal criteria and exert pressure on the agencies or the White House for reform. Agencies already have the vague statutory direction and the needed interpretive authority; outside pressure on the agencies might be all that is needed to effect a complete reform. Industry and national environmental groups have united in the past to recommend legislative changes, although admittedly only in relatively unusual circumstances. See, e.g., Rena I. Steinzor, The Reauthorization of Superfund: Can the Deal of the Century Be Saved?, 25 Envtl. L. Rep. (Envtl. L. Inst.) 10,016 (Jan. 1995) (describing how industry and environmental groups drafted consensus legislation to reauthorize CERCLA, legislation that was not ultimately passed into law). Industry will need to be convinced, and this might be impossible. Cost-benefit and perhaps even case-by-case approaches to rebutting protective standards might offer more benefits to industry than a clearer and more consistent approach.
301. See supra Part V.B.3.
302. See supra note 77.
303. See supra Part V.A.