Maurie Markman, MD
Any objective observer would agree with the general concept that there is a critical need in the oncology arena for robust, efficient, scientifically valid, and low-cost approaches to directly compare the relative clinical effectiveness of an increasingly large variety of therapeutic, diagnostic, screening, supportive/palliative care, and even cancer prevention strategies.
The urgency of this endeavor is amplified by the stunningly rapid introduction of novel and very expensive approaches in multiple arenas within cancer medicine, the aging of the population with its recognized heightened risk for the development of malignant disease, and the fact that cancer in many settings is becoming a manageable but truly chronic disease process. This latter factor substantially heightens the concern for the cost implications of any new effective therapeutic modality that patients may receive systemically or take orally for many years rather than only a few months as the cancer is controlled and the individual is able to maintain an acceptable quality of life.
Yet, there has been precious little movement in the development of rational approaches to fulfill this essential research paradigm. One unfortunate reason for this increasingly unacceptable state of affairs is the continued and often quite rigid insistence by many academic purists that the only legitimate method by which to compare “Strategy A” with “Strategy B” or “Strategy C” is through the initiation, conduct, and ultimate reporting in the peer-reviewed literature of a well-designed phase III randomized trial.
Although this specific clinical research methodology has clearly been absolutely critical in the development of the modern oncology therapeutic armamentarium, it is simply unrealistic and highly counterproductive to the future of cancer care to believe that the only acceptable approach to determining the absolute or relative clinical utility of a specific drug, regimen, device, or procedure is through the conduct of a so-called evidence-based randomized trial. A number of points can be cited in support of this conclusion.
First, the time, effort, and cost of phase III trials designed to directly compare two or more strategies make this approach reasonable in only the most exceptional circumstances. In the current antineoplastic drug development regulatory arena, pharmaceutical and biotech companies have little choice but to conduct such studies. It is not difficult to ascribe a considerable portion of the rapidly escalating costs of such agents to this remarkably inefficient and costly paradigm.
Second, even if the conduct of a phase III randomized trial were feasible in a specific clinical setting, how long would patients, their families, third-party payers, and society have to wait to learn the results of the study? How many such strategies relevant in a given clinical setting could be tested over a rationally acceptable period of time?
And perhaps most importantly, would the results of such a study even be relevant when they finally became publicly available considering the increasingly rapid introduction of novel drugs, devices, and procedures into clinical practice? Determining that “Strategy A” is superior to “Strategy B” in “Condition C” more than 10 years after the phase III trial was initiated is likely of no clinical value if 3 to 5 years after the comparative study was started, a new approach entered the clinical arena with objective data suggesting—or proving—superiority to one or both of the earlier tactics.
Third, who would fund these studies, and how would it be determined which such comparisons are of sufficient scientific or societal priority to justify such funding? Why would patients agree to participate, assuming there were data available suggesting the superiority of one approach compared with another, or at least the belief among a group of specialized clinicians that such superiority existed? And again, noting the tension increasingly evident between the academic purists who require evidence-based trials and individual patients seeking to receive optimal care, how would it be decided which particular academics would receive the credit for the output of this research effort in the form of, for example, grant funding or academic advancement?
Finally, a major limitation of all phase III randomized trials is particularly evident in studies designed to explore the clinical utility of particular strategies within large patient populations, the presumed ultimate goal of comparative effectiveness research. In an effort to appropriately ensure that the measured outcome (eg, survival, progression-free survival, symptom-free survival) reflects the unique contribution of the approaches being tested in the trial, it is essential to minimize clinically relevant heterogeneity within the study population.
As a result, patients with common but serious comorbidities may be excluded or, if included, the study size may need to be substantially expanded to be certain that any favorable or unfavorable impact of the modalities being tested can be observed.1 In addition, older patients, the largest group of individuals experiencing cancer in our society, are well recognized to be terribly underrepresented in such evidence-based clinical trials.2
Ultimately, it is essential that the academic clinical research community accept the objective reality that less definitive, more observational comparative studies are needed to move cancer care forward. Simply concluding, as did a recent commentary in a high-impact medical journal, that “almost every article resulting from comparative effectiveness study using observational data, however well-done, must be circumspect in asserting causal inference” is, quite frankly, most unhelpful.3
It is time for the academic medical community, in general, and the academic oncology community, in particular, to begin to focus their efforts on the development of rational and objectively meaningful approaches to compare the relative effectiveness of both new and older approaches in the management of common or uncommon oncologic conditions.4,5
Maurie Markman, MD, editor-in-chief, is president of Medicine & Science at Cancer Treatment Centers of America, and clinical professor of Medicine, Drexel University College of Medicine. maurie.markman@ctca-hope.com.