Out of the Woodwork

My friend Janiece seems to attract the whackos. This time it is the alternative medicine crowd glomming on to an old post. What is it with these people? Neither they nor Wagner can stand having a piece of criticism out on the Net, even an old one. Do they spend all day vanity Googling? I had completely forgotten about Janiece’s post until the crazies showed up again months later.

One of the crazies showed up with “data” from the Gerson Institute, and being the truth seeker that she is, Janiece responded:

I’m not a doctor, but I do understand the scientific method, and this is not a clinical trial or a well constructed study. What I will concede is that the information was interesting enough to me as a layman that I think further study by qualified professionals wouldn’t be uncalled for.

Janiece is quite kind in her willingness to be open-minded. This is not a character flaw*, because she also wanted to test the hypothesis provided; that is precisely what internalizing and living the scientific method, as an heir of the Enlightenment and a citizen of the modern world, entails. But then, Janiece is my friend for many reasons, and this is one of them.

I do have a little bit of experience with clinical trial design, although (let me be very clear here) I am not an MD. There are methodological flaws in the study that negate even the glimmer of interest Janiece detected, ones that do not require a statistician or an MD to find, though I will concede that the layman will need some specialized bits of information to parse their full impact on the claims made by the alt-med whackos.

There are so many red flags for quackery in that article it is hard to know where to begin.

The first problem is with the study design itself, which was a retrospective analysis with historical controls. The authors claim that:

The genesis of this inquiry occurred during a landmark study by the U.S. Congressional Office of Technology Assessment [Ref 2] to which one of us (G.H.) was an advisor. In its report, OTA put forward a protocol for best-case reviews based on the premise that, no matter how many patients failed, as few as 10 or 12 cases with objective evidence of tumor response would be enough to propel an investigation by the National Cancer Institute (NCI).

The Ns (number of people in the statistical groups of the study) in that Gerson paper certainly seem to meet this test, but is that really what the OTA meant? Well, no.

Fortunately for us Netizens, Quackwatch has the whole report (and it is a report, not a “landmark study”) on its website:

The basic elements of each case in a best case review would be: 1) documented diagnosis by an appropriate licensed professional, including pathology reports and microscope slides of the tumor; 2) history of prior treatments; 3) length of time between the most recent treatment and the treatment under evaluation; 4) x-ray studies from before and after the treatment under evaluation was administered; and 5) a statement from the physician and the patient saying that no other treatments were administered at the same time as the particular treatment under evaluation.

All of those elements are missing from the Gerson paper. There is no information at all in the Gerson paper about any other treatments the patients tried before or after initiation of their brand of nutrition therapy. Without this information, the entire article is garbage.

The real scientists who authored the OTA report understood that their words were going to be twisted:

No doubt this report will be used selectively by individuals wishing to portray various points of view, in support of or in opposition to particular treatments. The reason this is possible is that, almost uniformly, the treatments have not been evaluated using methods appropriate for actually determining whether they are effective. Regrettably, there is no guidance for new patients wanting to know whether these treatments are likely to help them.

The actual design of the study that Janiece was pointed to was particularly singled out by the 1990 OTA report as a problematic design:

For the most part, evidence put forward by individuals identified strongly with particular treatments has been of a type not acceptable to the mainstream medical community, usually because the evidence cannot support the conclusions drawn. A common format is a series of individual case histories, described in narrative. The endpoints are more often than not “longer than expected” survival times, sometimes with claims of tumor regression. In mainstream research, case reports of unexpected outcomes have been useful and do have a place, but they almost never can provide definite evidence of a treatment’s effectiveness.

Why, exactly, is this study design problematic?

Except in rare circumstances, because of the heterogeneity of cancer patients’ clinical courses, it is virtually impossible to predict what would have happened to a particular patient if he or she had had no treatment or a different treatment. Groups of patients who have chosen to take a particular treatment cannot be compared retrospectively with other groups of patients, even those with similar disease, to determine the effects of the treatment. The factors that set apart patients who take unconventional treatments from other cancer patients may be related to prognosis (these may be both physical and psychological factors), and the means do not exist currently to confidently “adjust” for these factors in analyses. Examples of retrospective evaluations that have turned out to be wrong are well documented (see, e.g., (146)) as are problems with attempting to evaluate the efficacy of treatment from registries of cancer patients (145), though the problems are not necessarily widely appreciated.

In Chapter 12 of the OTA report, the point is elaborated on at length:

It is tempting to use the records of patients already taking unconventional treatments to try to derive some type of “response rate” or “survival rate” that could be compared with a “standard” rate, thus providing a quantitative estimate of the comparative “efficacy” of a particular treatment. While this approach has some intuitive appeal, it fails because there are no “standard” rates with which to make the comparison. The reason for this is that there is tremendous heterogeneity among cancer patients, even among those who have nominally the same type of cancer. While for most cancers it is possible to identify several important variables, “prognostic factors” (e.g., age, sex, stage of cancer), that are predictive of the likelihood of survival for a group of patients, the heterogeneity reaches beyond easily identifiable factors.

Even more so than the particular patients who are treated at a given hospital, patients who opt for unconventional treatment are strongly self-selected, and as a group, may have very different characteristics from those of the total cancer patient population, some of which may be related to prognosis.

In other words, we know that mental state can affect outcomes, both because it increases resolve and perhaps innate cancer-fighting ability, and because really sick people get beaten down. People with enough fight in them to actively seek alternative therapies are probably different from the average pool of patients. As the authors of the OTA report put it in a previous paragraph:

Those of us who have worked over the years with cancer patients have come to respect the vagaries of human biology wherein there are cancer patients who for unclear reasons fare better than we would have expected.
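
To make the self-selection problem concrete, here is a toy simulation of my own (nothing in it comes from the OTA report or the Gerson data): give every patient an unmeasured “vigor” factor that improves both survival and the odds of seeking an alternative therapy, let the therapy itself do nothing, and the self-selected group still looks like it lives longer.

```python
# Toy simulation (my own illustration, not data from the OTA report or the
# Gerson paper). Each patient has an unmeasured "vigor" factor that improves
# survival AND makes them more likely to seek an alternative therapy. The
# therapy itself does nothing, yet the self-selected group looks better.
import math
import random

random.seed(1)

def simulate_patient():
    vigor = random.gauss(0.0, 1.0)                      # hidden prognostic factor
    survival = max(1.0, 24.0 + 10.0 * vigor + random.gauss(0.0, 6.0))  # months
    p_seek = 1.0 / (1.0 + math.exp(-vigor))             # fighters seek therapy more often
    return survival, random.random() < p_seek

patients = [simulate_patient() for _ in range(10_000)]
alt = [s for s, seeks in patients if seeks]
rest = [s for s, seeks in patients if not seeks]

print(f"self-selected 'therapy' group: mean survival {sum(alt) / len(alt):.1f} months")
print(f"everyone else:                 mean survival {sum(rest) / len(rest):.1f} months")
# The gap is a pure selection effect; the "treatment" had zero causal effect.
```

That is the whole trouble with comparing self-selected patients against a historical or general “control” population: the comparison measures who chose the treatment, not what the treatment did.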

What we have here is failure to communicate. Proper medical studies have what are known as inclusion and exclusion criteria to ensure that the control and active groups are as closely matched as possible. A retrospective study against a historical control cannot match the criteria of the reported study to those of the historical studies it is being compared against.

I surfed on over to clinicaltrials.gov and searched “Oncology” until I found the first drug trial that popped up.

It was this one.

Let’s look at the inclusion / exclusion criteria for that trial:

Eligibility

Ages Eligible for Study: 18 Years and older
Genders Eligible for Study: Both
Accepts Healthy Volunteers: No
Criteria

Inclusion Criteria:

• Histologic or cytologic diagnosis of breast cancer with evidence of metastatic disease. NOTE: Patients with Her-2 positive (3+ by IHC or gene amplification by FISH) are eligible only if they have had prior trastuzumab therapy.
• Must have measurable or non-measurable lesions as defined by the Response Evaluation Criteria in Solid Tumors (RECIST).
• Two or fewer prior chemotherapy regimens in any disease setting. NOTE: All adjuvant and neoadjuvant chemotherapy will be considered one regimen. NOTE: Prior hormonal therapy for metastatic disease is allowed.

NOTE: Prior radiation therapy is allowed as long as the irradiated area is not the only source of evaluable disease.
• Age > 18 years at the time of consent.
• Written informed consent and HIPAA authorization for release of personal health information.
• Females of childbearing potential and males must be willing to use an effective method of contraception (hormonal or barrier method of birth control; abstinence) from the time consent is signed until 8 weeks after treatment discontinuation.
• Females of childbearing potential must have a negative pregnancy test within 7 days prior to being registered for protocol therapy.
• Ability to comply with study and/or follow-up procedures.

Exclusion Criteria:

• No prior therapy with bevacizumab, sorafenib or any other known VEGF inhibitors.
• No known hypersensitivity to any component of the study drugs.
• No other forms of cancer therapy including radiation, chemotherapy and hormonal therapy within 21 days prior to being registered for protocol therapy.
• No history or radiologic evidence of CNS metastases including previously treated, resected, or asymptomatic brain lesions or leptomeningeal involvement. A head CT or MRI must be obtained within 28 days prior to being registered for protocol therapy.
• No other participation in another clinical drug study within 28 days prior to being registered for protocol therapy.
• No known human immunodeficiency virus (HIV) infection or chronic Hepatitis B or C.
• No major surgical procedure within 28 days prior to being registered for protocol therapy or anticipation of need for major surgical procedure during the course of the study. Placement of a vascular access device and breast biopsy will not be considered major surgery.
• No minor surgical procedure within 7 days prior to being registered for protocol therapy.
• No known history of cerebrovascular disease including TIA, stroke or subarachnoid hemorrhage.
• No known history of ischemic bowel.
• No known history of deep venous thrombosis or pulmonary embolism.
• No history of hypertensive crisis or hypertensive encephalopathy.
• No non-healing wound or fracture.
• No active infection requiring parenteral antibiotics.
• No other hemorrhage/bleeding event ≥ CTCAE grade 3 within 28 days prior to being registered for protocol therapy.

That’s quite a list. That, my friends, is what real science looks like in black and white. Proper studies show entry criteria.

If, for all the reasons I just noted, a retrospective study design is so problematic, and if the OTA recommended a best-case approach, why did the Gerson Institute abandon that technique?

Because we had proposed the original best-case review protocol to OTA, we were eager to construct a best-case review. However, we found OTA’s (and later NCI’s) protocol to have a serious shortcoming when used retrospectively: its focus on only tumor regression. Adequate documentation of tumor regression is unlikely to be collected in most alternative medical practices.

We abandoned the best-case review for the more informative retrospective review. In contrast to the best-case review, the retrospective review describes all patients, including non-responders, giving a more adequate impression of the outcomes of treatment.

Emphasis mine. Because that non-responder language hands the knowledgeable person an industrial-sized clue-bat with which to whack that study.

More informative? Not according to the OTA report Hildenbrand was touting when it served his purposes. I think now the average layman can figure out why this dog of a study was published in Alternative Therapies in Health and Medicine, and not in a serious Oncology journal.

But first, a few more questions are begging to be answered. If Gar Hildenbrand actually was the one who proposed the best-case review protocol to the OTA panel, why did he even propose it, given the arguments presented here? And even if those arguments are valid, if they were indeed “eager” to test the protocol on their own methods, why didn’t they go ahead and conduct the best-case review? The Gerson Institute is located in a Mexican hospital. Are you telling me that they can’t measure tumor progression? Or that they don’t do so as a matter of course? That they don’t measure disease activity as well as survival (another glaring omission in the paper: the disease-free survival statistics)? If so, they’ve got some serious Hippocratic issues with locating a cancer clinic in that setting.

The OTA already addressed the issue of missing data, however:

Clearly, many patients who benefit from cancer treatment — mainstream or unconventional — could not be included in a best case review, because their records would not be sufficient to meet these demands. However, an adequate and convincing review could be based on as few as 10 or 20 successful cases. If a treatment is even moderately successful and has been used for many years, that number meeting the criteria should be available.

So I come to the conclusion that what tumor progression data they have is not very favorable. Why? Well, first of all, they could not come up with the requisite 10 cases with adequate documentation, or they would not have resorted to the song-and-dance routine with the retrospective analysis.

In point of fact, the Gerson Institute actually committed to a best-case review, as documented in the 1990 OTA report:

The Gerson Institute, one of the major unconventional clinics treating U.S. patients in Tijuana, has embarked on such a best case review, however. Results have not been reported, but it could prove to be the first successfully-completed study of its type mounted by an unconventional treatment proponent.

Where is that study? It is not on the Gerson Institute website.

Unfortunately for the Gerson Institute, the MD Anderson Cancer Center has a careful review of their claims.

That best case review was published. In German. In the German journal Current Nutritional Medicine. Hiding much, guys? Why yes:

Lechner P, Kronberger J. Erfahrungen mit dem einsatz der diat-therapie in der chirurgischen onkologie. (Experience with the use of dietary therapy in surgical oncology) Akt.Ernahr-Med. 1990;15:72-78.
Purpose: Survival and disease response

Type of Study: Prospective cohort with matched controls

Methods & Results: Two studies were reported in this article:

Study #1: Patients who had carcinoma of the colon with liver metastasis (n=36) were selected from the General Surgery Department of the authors’ clinic in Austria. Patients were selected for the study if a matched control could be found. Controls were matched on age, sex, localization and stage of tumor. (Duration of diet not stated).

Results: In the diet group the mean survival was 28.6 months. For the control group it was 16.2 months. (Statistical significance not reported.)

Study #2: Breast cancer. (n= 38) Patients were selected from the General Surgery Department of the authors’ clinic in Austria. Patients were selected for the study if a matched control could be found. Controls were matched on age, sex, localization of tumor, receptor status, menopausal status and type of adjuvant treatment (chemotherapy or radiation). (Duration of diet not stated).

Results: No significant differences were seen in terms of metastases and rates of survival between the two groups.

So, in the first study they do exactly what the OTA told them not to do:

This type of study cannot, except possibly in exceptional cases, provide definite proof of efficacy in terms of life extension, nor any estimate of rate of response to the treatment.

The primary endpoint was supposed to be a case-by-case analysis of tumor regression:

The objective of the best case review is to produce evidence of tumor shrinkage (or, in particular cancers, other accepted objective measures of lessening disease) in a group of selected patients (either current or former), with evidence documenting that the patients had the particular unconventional treatment under study and, as far as possible, that they did not have any other treatments during that time period.

This is not to mention that the statistics were not provided (one long-lived individual landing at random in the active group could skew the mean while the rest of the data indicate no difference between the treatment arms).
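
For the numerically inclined, here is a back-of-the-envelope illustration (the numbers are invented, not Lechner and Kronberger’s) of why a bare difference in mean survival between two small groups tells you almost nothing:

```python
# Invented numbers: two small arms with essentially identical survival, except
# that one long-lived patient happened to land in the "active" arm. The mean
# moves dramatically; the median barely notices.
from statistics import mean, median

control = [14, 15, 16, 16, 17, 18, 18, 19, 20, 21]    # survival in months
active  = [14, 15, 16, 16, 17, 18, 18, 19, 20, 132]   # same, plus one outlier

print(f"control: mean {mean(control):.1f}, median {median(control):.1f}")
print(f"active:  mean {mean(active):.1f}, median {median(active):.1f}")
# Without significance testing, or at least a look at the spread, a reported
# difference in means (like the 28.6 vs. 16.2 months above) means very little.
```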

The second study in that paper was a flat-out failure.

No wonder this paper does not wind up on the Gerson Institute’s website, while the methodologically incorrect study (not merely flawed, but an actually wrong-headed analysis according to the OTA document cited by Hildenbrand), with its misleading historical comparisons, is included.

Finally (well, not finally, but I’m done digging through this particular piece of excrement), we have the issue of non-completers. In point of fact, this is the industrial-sized clue-bat I mentioned above.

The FDA requires companies promoting products with explicit health claims to provide a statistical treatment for drop-outs in their clinical studies. Many volunteer subjects drop out of active arms due to inefficacy. If one were to ignore them and only look at completers, one would get a very favorably skewed view of the efficacy of a treatment. The general methodology used to account for non-completers is called “Last Observation Carried Forward” (LOCF), and as noted in the link, it is seriously biased. Biased against the treatment being studied, in general: people who drop out often haven’t given the treatment a full chance to take effect, because they are having side-effect issues early in the trial, yet they are still counted as non-responders**.
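
For anyone who wants to see the mechanics, here is a minimal sketch of LOCF imputation on hypothetical visit data (my own toy example, not anything from these studies):

```python
# Minimal LOCF sketch on hypothetical data (not from any of these studies).
# Each row is one patient's symptom score at successive visits; None marks
# visits missed after dropping out. LOCF fills every gap with the last value
# actually observed, so an early dropout is frozen at a mostly-untreated score.
def locf(visits):
    filled, last = [], None
    for score in visits:
        if score is not None:
            last = score
        filled.append(last)
    return filled

patients = [
    [50, 42, 35, 30, 27],            # completer: steady improvement
    [50, 48, None, None, None],      # dropped out early: 48 carried to the end
    [50, None, None, None, None],    # dropped out at once: baseline carried forward
]

for visits in patients:
    print(locf(visits))
# Early dropouts end up analyzed as non-responders, which is why LOCF generally
# biases a trial against the treatment being studied.
```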

In general, clinical trial practice is to bias the design against the treatment in question, and if anything survives that, it meets the Hippocratic criteria for putting something new into a patient’s body.

One area where LOCF does not fulfill the function of raising a high hurdle for treatment effect is in survival studies, because patients lost to follow-up may have died. At the last observation in a survival study, most drop-outs were still alive, unless the treatment is actively and aggressively killing people.

The Gerson study uses 5-year survival as its primary endpoint.

Back to the horse’s mouth:

Over 15 years, from 1975 through July of 1990, 249 patients presented for treatment of melanoma. 53 (21%) are lost to follow-up.

Removing these patients from the failure statistics greatly biases any study in favor of the treatment being studied. Real scientists, real healthcare companies are (rightly) held to a higher standard.

The conservative approach is to treat these patients as dead within 5 years.
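
To see how much the handling of those 53 patients matters, here is a quick sketch using the enrollment figures quoted above; the survivor count is a placeholder I made up purely for illustration, not the Gerson paper’s number:

```python
# Denominator illustration using the enrollment figures quoted above: 249
# patients presented, 53 lost to follow-up. The survivor count below is a
# made-up placeholder, NOT the Gerson paper's figure; only the arithmetic of
# the denominator choice is the point.
presented = 249
lost_to_followup = 53
survivors_5yr = 60          # hypothetical number of documented 5-year survivors

optimistic = survivors_5yr / (presented - lost_to_followup)   # drop the lost patients
conservative = survivors_5yr / presented                      # count them as failures

print(f"excluding the lost to follow-up: {optimistic:.0%} five-year survival")
print(f"counting them as failures:       {conservative:.0%} five-year survival")
```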

The Gerson Institute could obviate these objections by conducting a prospective double-blind clinical trial. They’ve been conducting trials since at least 1987 (assuming the trial published in 1990 took over two years to complete). MD Anderson’s review of the medical literature found:

A total of seven human studies have been identified in the literature as of January 31, 2006. Two were matched control studies15, one was a prospective cohort study16, two were retrospective reviews with historical controls17,18, one was a best case series19 and one was a set of case reports20.

15. Lechner P, Kronberger J. Erfahrungen mit dem einsatz der diat-therapie in der chirurgischen onkologie. Akt.Ernahr-Med 1990;15:72-8.
16. Austin S, Dale EB, DeKadt S. Long term follow-up of cancer patients using Contreras, Hoxsey and Gerson therapies. Journal of Naturopathic Medicine 1994;5(1):74-6.
17. Hildenbrand G, Hildenbrand L. Five year survival rates of melanoma patients treated by diet therapy after the manner of Gerson: a retrospective review. Alternative Therapies 1995 Sep;1(4).
18. Hildenbrand G, Hildenbrand L. Defining the role of diet therapy in complementary cancer management: prevention of recurrence vs. regression of disease. Proceedings of the 1996 Alternative Therapies Symposium: Creating Integrated Healthcare. January 18-21, 1996, San Diego, CA.
19. Gerson M. Effects of combined dietary regimens on patients with malignant tumors. Exp Med Surg 1949;7:299-317.
20. Gerson M. Dietary considerations in malignant neoplastic disease. Rev Gastroent 1945;12:419-25.

Note that none of Hildenbrand’s studies have been published in even second-tier journals, and two studies in that list date from the 1940s. Now, if you want to base your medical treatment on a study that was state of the art in 1949, go right ahead. As evidenced in the OTA report I (and Hildenbrand) cited, the medical community has bent over backwards to allow a back door into clinical testing for alternative therapies, and the best they can come up with is the crap posted on the Gerson website, which doesn’t even include their best (but still not up to standards) study.

As for me? I conclude that the mainstream medical community isn’t ignoring these studies because of a bias against alternative therapy.

The mainstream medical community is ignoring those studies because they are scientifically useless.

* Having a totally open mind is a character flaw, however, due to the tremendous amount of garbage one’s fellow human beings are willing to toss into the void.

** When proponents of alternative therapies point at low response rates in clinical trials, they forget (or deliberately obfuscate) this treatment of non-responders.

Cross Posted at Refugees From the City with slightly more profanity.

6 thoughts on “Out of the Woodwork”

  1. It’s interesting to read those comments. Nothing convinces those folks. As a surgeon for 40 years, I have seen my share. I used to get referrals from a couple of local quacks. I considered them to be relatively honest, if misguided, and they were competent enough to see when their remedies were failing. They would send the patient to me, I would do what I could, which was often enough to cure the condition, and send them back. I found through experience that there is no point in arguing with these folks. Logic doesn’t do it. One case was a man, fairly well known in the community, who had been treating his cancer of the rectum with celery juice. He eventually sought conventional care but he was still convinced that his failure was in not drinking enough celery juice. Of course, his cancer was well beyond cure but I was able to palliate him for a while. There is nothing sadder than a quack doctor who has developed cancer while treating himself. I have known two of those. I suppose I should be more polite and call them alternative practitioners when speaking of the dead. One had been my intern 10 years before.

    I would add that melanoma is extremely treacherous for researchers as it is prone to inexplicable remissions that may last for years. Don Morton, chief of surgical oncology at UCLA, had to retract an entire body of research on immunotherapy about 20 years ago. I have a number of stories that still raise the hair on the back of my neck. Melanoma is almost malign in its behavior in a psychological sense. It will do things no one can understand. I have a lot of melanoma stories.

  2. Michael Kennedy – I would love to hear those stories. I am a dermatopathologist, and you can imagine I have stories, too. One of my chief frustrations in reading the literature is the confident assertions made based on follow-up of 2-5 years.

  3. In fact, I was joking with one of the residents the other day that I should do a study where I take the main three or four clinical journals in my area and survey the average time for follow-up. “That would make me extremely popular,” I said. (It’s probably already been done, and I’m not sure I want to know the results……)

  4. The amusing thing to me is that both you and Janiece employ the caveat “I am not an MD” in calling for a scientific approach to the question.

    I am a real scientist who has tried to teach aspiring MDs and nurses what we call “baby physics.” The “baby” refers to science taught without requiring knowledge of calculus, which of course means that it is a pedestrian and terminal course for the medical school aspirant who can’t risk taking a real science course for fear of not getting an A, straight-As being practically a pre-requisite for admission to medical school and to the favored oligopoly.

    Consequently, I NEVER assume that a given medical practitioner is capable of doing science; there are, of course, some few physicists, chemists and mathematicians who do practice “medical science.”

    Having done medical editing now for several years, I can’t get over the fact that most medical practitioners cannot even speak or write proper English. For example, you will read articles in papers and on the web written by native-English-speaking MDs who say, more often than not, something like “a person is at risk for …” some disease, complication, etc.

    Idiomatic English requires, of course, “a person is at risk of …” the disease.

    I don’t know about others out there, but I still have not reconciled myself to being overcharged to undergo a butt-hole exam by a practitioner who can’t speak proper English. If we dispensed with licensing in the medical profession, as Milton Friedman had long advocated, a person would have the means to find a real scientist as his physician, just as he can find the best computer. As it is, there is no way a medical consumer can discover the truth about any disease or treatment without practically becoming a scientist AND physician himself.

  5. I found the whole blog interesting until I got to the next page older than the linked entry, and the post titled “Racism and the Tea-Baggers…er…the Tea Party Patriots”

    It contained the line “To me, if you are mentally incapable of recognizing your bigotry for what it is, either because you lack the insight into your own psyche or you can’t control your emotional knee jerk reactions to those who are different, then why the fuck should I give a good goddamn about anything you say?”

    Given the context of the article, I couldn’t work out if the intent was ironic or not.
