  Opening the FDA Black Box
  04 ¡ØÁÀҾѹ¸ì 2557
 
 


January 22/29, 2014
Link: http://jama.jamanetwork.com/article.aspx?articleid=1817770

Steven N. Goodman, MD, MHS, PhD; Rita F. Redberg, MD, MSc



The US Food and Drug Administration (FDA) is sometimes described as the most powerful regulatory agency in the world, its decisions affecting billions of both lives and dollars. The agency has this distinction not just because of its legislative mandate or governmental role but in part because of its reputation.1 This reputation has been shaped by how it makes its decisions, by its “conceptual power” in the language and methodologic standards it uses for drug and device approval, and by its ability to gain legitimacy among multiple audiences via a combination of scientific rigor and flexibility.1

The FDA’s influence on product availability both within and outside of the United States makes access to the inner workings of the FDA’s approval process enormously valuable. This issue of JAMA includes 3 articles examining the outcomes of that process, articles that will be carefully reviewed by professional FDA watchers and should be of interest to anyone involved in developing, administering, or using medical products. Each report has unique aspects, and together they cover much of the FDA landscape: the report by Sacks and colleagues2 covers unapproved drugs, that by Downing and colleagues3 covers approved drugs, and that by Rome and colleagues4 covers cardiovascular devices.

The most remarkable aspect of the report by Sacks et al is that it was written at all. Details about unapproved drugs are sparse; the FDA interprets the law as prohibiting the agency from sharing data for unapproved drugs or even from releasing the disapproval letter, the latter despite recommendations from its own transparency task force.5,6 Sacks et al have partially peeled back this curtain with memos and materials that in the past have been inaccessible to those outside of the FDA—materials reporting details that only the FDA can provide. According to this report, the FDA is looking for evidence of proper manufacturing, appropriate dosing, generalizable trial populations, adequate sample size, meaningful health outcomes and degree of influence on those outcomes, consistency among multiple end points and among different trials and sites, improvement over the standard of care, and evidence that benefits exceed harms. Whether these criteria are applied to all drugs, how the factors are balanced, and how egregiously they must be violated for the FDA to withhold approval remain unclear.

One surprising feature of the list is how many of these elements can be ascertained before a phase 3 trial is conducted. Manufacturing, dosing, and design issues are known before the trial starts and could be discussed with the FDA in advance. Sacks et al do not explain whether that was done; if not, why not; or, if so, whether the sponsor did not follow the agreed-on design or whether the FDA changed its decision after the trial. For trials that did not show sufficient efficacy, it was not clear whether this was attributable to design weaknesses, such as single intervention groups or nonmasked assessments, or whether it was simply attributable to small effect sizes in otherwise exemplary studies.

Sacks et al make the observation that only 1 new molecular entity approved during the 12 years of their study was withdrawn for safety considerations. This does not include drugs voluntarily withdrawn at FDA request, those for which use was restricted by the FDA, or those with uncompleted safety studies. In 2009 alone, the FDA took 181 major safety-related actions, including 25 black-box warnings and 19 contraindications.7 Many drugs remain on the market when required safety studies are not conducted.8 Moore and Furberg9 found that of 86 required postmarket studies for 20 drugs approved in 2008, only 26 studies (31%) had been fulfilled as of January 2013. Even when postmarket studies do show safety problems or ineffectiveness, it is difficult to change practice (eg, Avastin for breast cancer10). With a median time of 11 years from approval to any safety action,7 the ultimate fate of the drugs approved since 2000 is not settled.

The report by Downing et al3 evaluating the approval process is almost orthogonal to the report by Sacks et al. Downing et al examine the “strength of the clinical trial evidence,” which the authors define as qualitative dimensions of design, not the trial results, the decision-making process, or the criteria cited by the FDA as the reasons for drug approval. Their focus is on the “pivotal trial,” which is subject to full FDA review and reanalysis with individual-patient data; in practice, however, the sponsor can often influence what is designated “pivotal,” and, as Downing et al note, the weight of nonpivotal trials in the decision process is unclear.

Those issues aside, the study by Downing et al required considerable effort to extract such comprehensive data from Drugs@FDA11 and raises a host of questions needing further exploration. Despite the FDA requirement for evidence from a minimum of 2 randomized clinical trials supporting an effect on health outcomes, 37% of product approvals were based on only 1 trial, 53% of cancer trials were nonrandomized, and an active comparator was used in only 27% of non–infectious disease trials. Surrogate end points were used in almost all approvals via the accelerated approval process and in 44% of nonaccelerated approvals. Trials were comparatively short, with most lasting less than 6 months, even those assessing chronic treatments for chronic diseases. Cancer drugs, perhaps predictably, were more often approved via the accelerated process and with weaker designs.

The study does not report how many randomized clinical trials used noninferiority designs, which pose special problems for inference and regulatory thresholds. Downing et al leave open the question of whether the FDA exercised reasonable judgment in accepting weaker designs for both regular and accelerated approvals. It would be helpful to know how many of these approvals, particularly those based on surrogate end points, had postmarketing requirements, how many postmarket studies were fulfilled, and what the ultimate clinical assessment was of the drugs approved in these various ways.

The studies by Sacks et al and by Downing et al examined FDA decisions through different lenses, with one focusing on the substrate, the other focusing on decisional criteria, and neither using similar ontologies of evidence nor assessing the full context or outcomes of decision making. These reports are useful complements to detailed FDA case studies or internal memos, which can be enormously informative about exactly how the FDA weighs evidence and makes decisions in context.12,13

In another article in this issue of JAMA, Rome et al4 examine FDA device regulation, evaluating a process that has received relatively little attention. There are 2 different pathways by which devices reach the market. The most rigorous is the premarket approval (PMA) route, which requires some evidence of clinical effectiveness and safety data, although only 14% of high-risk devices have been assessed in even 1 randomized controlled trial, usually unblinded.14 As a result of the 2004 Medical Devices User Fee Act, which directs the FDA to take the “least burdensome route” to approval, less than 1% of medical devices are approved through this most rigorous pathway.15 For moderate- and low-risk devices, the other route is the 510(k) pathway, which allows devices to be marketed if they show “substantial equivalence” to existing devices.16 A 2010 Institute of Medicine committee strongly recommended elimination of this path.17,18

Rome et al describe an underexamined third way for a device to reach the market: the “supplement” process, used for modifications of devices originally approved through a PMA. Focusing on cardiac implantable electronic devices from 1979 to 2012, Rome et al found that there were 5825 supplemental PMA applications for 77 original devices—a median of 50 supplements per device, of which about half were for design changes. It is not surprising that many of the devices are, according to the authors, “much different from the original.”

Rome et al report that supplemental PMA applications are commonly approved without clinical testing, based on reviewer judgments, suggesting that “in some cases, preclinical testing may be superior to clinical testing in assessing changes.” The main role of preclinical testing is to identify devices that demonstrate problems in the laboratory setting and thus avoid clinical testing of that device change. However, the absence of problems in the laboratory setting might not reliably predict the long-term fate of the device in the human body, where environmental and physiologic forces impossible to replicate in the laboratory setting work in combination. The malfunction of implantable cardioverter-defibrillator leads, which resulted in a widespread recall,19 and the hazards posed by particles shed from metal-on-metal hip replacements were not predictable based on engineering insights or in vitro studies. More empirical work is needed to assess the validity of reviewer judgments about whether clinical data are needed prior to certain types of device approval. Moreover, if the approval process depends on subsequent clinical trials, a less obvious consequence of constant design modification is that it could be difficult to know what device versions were used in the trials and whether results are generalizable to other versions of the device.

One important commonality of these 3 articles published in this issue of JAMA is that they were made possible by a degree of increased openness at the FDA.20 Drugs@FDA is becoming more usable and complete, FDA staff might not have published data about unapproved drugs in an earlier era, and the FDA is currently exploring ways to share individual-patient data from clinical trials21 and has cosponsored a recent Institute of Medicine committee to issue guidelines for such sharing.22 Although these reports represent important steps in improving understanding of FDA decision making, further commitment to and progress toward ensuring transparency, including reducing report redactions,23,24 is needed to help the scientific community and other interested parties answer the questions these studies raise, thereby helping the FDA in its mission to find the right balance between allowing innovation and protecting the public’s health.