Question for you -
What is better, 10+ years free of CLL in exchange for 6 months of chemotherapy or 10 years of a pill taken daily? Go one step further: what if the chemo not only got rid of the disease for 10 years but actually cured some patients?
We are incredibly fortunate that there are new, non-chemotherapy-based therapies approved by the FDA - but we should be careful before abandoning something that can be extremely effective for some patients just because it is called "chemo."
Everyone desperately wants to find a cure for CLL, so we need to be vigilant and avoid excessive optimism. The idea of "curable CLL" is debatable among the researchers who study the disease. For the purposes of this post, I need to be very clear that the difference between very long term remission and cure can become a little blurry. At what point is a patient with no signs of leukemia considered "cured"?
I am always careful to define "remission" when I am in the clinic. To me, it means, "we can't see the cancer, but we know that it is there." That is very different from a cure, where one would assume it isn't there at all. The only thing that really distinguishes the two is the test of time. How much time needs to pass before you say it is no longer a remission but indeed a cure? In CLL we have never really talked about cure before, so I guess the answer is "a long time."
When carefully identified patients are treated with FCR chemo-immunotherapy, a decent fraction of them may have no evidence of their CLL for over a decade. Two studies have now reported outcomes suggesting that some of these patients may be cured (links here and here).
Frequent readers of my blog likely know that I have periodically taken a skeptical view of the FCR regimen. The multitude of new drugs such as ibrutinib and idelalisib have forced us to fundamentally re-think the best ways to treat the disease. After the widespread introduction of bendamustine and rituximab, followed by the newer agents, enthusiasm for FCR has been steadily diminishing. Database analyses indicate that it is used in the front-line management of only about 20-35% of patients with CLL.
I think it is human nature to embrace things that are new and exciting - especially when that means moving away from chemotherapy. Yet as the pendulum swings away from FCR, 14 years after it was first introduced, we may be ignoring some of the most impressive arguments in favor of the regimen - arguments that are only now becoming evident.
The "chemoimmunotherapy regimens" such as FCR and Bendamustine-rituxan have constituted our treatment backbone for a number of years. Please see my prior blog post about choosing between the two. With the new non chemotherapeutic targeted drugs that are coming, there is likely to be quite a "turf war" over what regimens are right in which circumstances. While the new drugs are primarily approved in patients with relapsed disease, there will be considerable interest in moving them to the front line setting. Indeed, I've already had quite a few patients ask me if starting with one of the new drugs up front makes more sense than chemotherapy. I think that in some cases the answer may be yes, in other cases no. In many cases it is too early to tell.
So where is all this going?
CLL is biologically heterogeneous. Two patients who look very similar can have very different outcomes with treatment. Understanding that biologic heterogeneity is essential if you want to make the best choices on behalf of the patient. At the extremes, I think there are some patients where we need to make every effort to give effective chemoimmunotherapy and others where starting with a targeted agent makes more sense. Between those two extremes there is a lot of uncertainty. Over the next several blog posts, I hope to make that spectrum clear.
Let me come back to the question that began this post: if you could get six months of chemotherapy and have ten years free of disease without requiring any treatment, would that be better than taking pills every day for ten years? What if I upped the ante and asked whether that same six months of therapy cured a decent proportion of a molecularly defined subset of patients? Would chemotherapy be preferable to pills in that circumstance? What fraction of patients would need to be cured? Would 20% be enough? What if it was 60%? If 80% could be cured, would that make chemotherapy better than pills that are not thought to cure anyone (yet)?
Let's start by defining "cure."
When we evaluate the performance of a new drug or regimen, we plot the efficacy on a "Kaplan-Meier" curve. On the y-axis (up and down) is a variable such as overall survival or progression-free survival. On the x-axis (left to right) is time. At time point zero the curve starts at 100%, but it steps down every time someone has an "event" such as disease progression or death. If a disease is really bad or a treatment really ineffective, the curve drops quickly toward the x-axis. If a disease is mild, or the treatment very effective, the curve stays very "flat" and doesn't fall much below 100%.
People who look at Kaplan-Meier curves a lot get really excited when they see a "plateau." A plateau happens when a treatment is likely to cure a subset of patients. As time goes on, the patients who are not cured either relapse or die, until you are left with the patients who no longer have the disease and the events stop happening. When this happens, the curve starts at 100%, slowly drops down to the percentage of cured patients (20%? 40%?), and then stays flat - it reaches a "plateau." If the plateau persists with updates of the data, researchers start to ask whether the patients who are no longer relapsing are cured of their disease - particularly if minimal residual disease (MRD) testing remains negative for CLL. Of course, if you follow all patients long enough, the curve will eventually reach zero as patients die of other causes, but few studies follow patients that long, and a prolonged plateau is suggestive of something very important when considering a treatment.
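For readers who want to see the arithmetic behind these curves, here is a minimal sketch of the Kaplan-Meier calculation, written in Python. To be clear, the patient numbers below are entirely invented for illustration - they are not data from either of the FCR studies discussed in this post.

```python
# A minimal sketch of the Kaplan-Meier calculation described above.
# The cohort here is entirely invented for illustration; it is NOT
# from the FCR studies discussed in this post.

def kaplan_meier(times, events):
    """Return (times, survival_probs) for right-censored follow-up data.

    times  : follow-up in years for each patient
    events : 1 if progression/death was observed at that time,
             0 if the patient was censored (event-free at last contact)
    """
    # Sort by time; by convention, events precede censorings at ties.
    data = sorted(zip(times, events), key=lambda p: (p[0], -p[1]))
    n_at_risk = len(data)
    surv = 1.0
    curve_t, curve_s = [0.0], [1.0]   # the curve starts at 100%
    for t, observed in data:
        if observed:
            # Each event multiplies survival by (1 - 1/patients_at_risk).
            surv *= 1.0 - 1.0 / n_at_risk
            curve_t.append(t)
            curve_s.append(surv)
        n_at_risk -= 1                # this patient leaves the at-risk pool
    return curve_t, curve_s

# Hypothetical cohort of 10: six progressions by year six, then nothing.
# The remaining four patients are censored, so the curve flattens out.
times  = [1.0, 1.5, 2.0, 3.0, 4.5, 6.0, 8.0, 9.0, 10.0, 12.0]
events = [1,   1,   1,   1,   1,   1,   0,   0,   0,    0]
for t, s in zip(*kaplan_meier(times, events)):
    print(f"year {t:4.1f}: {s:5.1%} progression-free")
```

With these made-up numbers, the curve steps down to 40% by year six and then goes flat - a plateau at roughly the fraction of patients who remain disease-free, which is exactly the shape described above.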
At ASH 2012, the MD Anderson group put out an abstract entitled, "Is CLL still incurable?" This provocative question was asked in response to an apparent plateau in the long-term follow-up of their original 300-patient cohort treated with FCR. Thirteen years after that group was treated back in 2000-2003, a subset of them still appear to have no evidence of any active CLL. That is a very impressive result for a therapy that only lasts six months.
Many researchers, however, regard data from MD Anderson with a degree of skepticism. It says a lot about a patient if they get on a plane and fly down to Houston for an opinion. It says even more if they do that every month for six months to get treated. Such patients necessarily have a degree of affluence, fitness, and education that makes them different from the average CLL patient, and multiple studies have shown that such variables strongly influence outcome. It ends up being a biased sample. Indeed, the average age of patients in the study was 57, while the average age of a patient requiring treatment for CLL in the United States is roughly 74. That is a massive difference.
I have to admit, I was somewhat dismissive of the 2012 abstract on that basis - until the Germans gave an update of the CLL8 study, which compared FC (fludarabine / Cytoxan without rituximab) against FCR and evaluated outcomes on the basis of molecular risk factors. They showed that after an average of six years of follow-up, several groups of patients start to achieve a plateau. If a patient has not already experienced a progression by that point, very few appear likely to do so over the ensuing two years of follow-up.
Is this a cure? It is still probably too early to tell for sure. All Kaplan-Meier curves are prone to becoming "unstable" the farther out in time you go. Since fewer patients of the original cohort have been followed that long, a single patient's change in status can have a disproportionately larger effect on the curve than it would earlier in the follow-up. Furthermore, bias becomes a larger influence if there is a subset of patients with "better" follow-up data. I am very interested, though, to see if this curve remains flat with subsequent follow-up. It appears to mirror the single-center MD Anderson experience, but in a multicenter population where the data is more reliable.
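Continuing the made-up example from the sketch above, it is easy to see why the tail of a curve is so fragile. Change the last patient - the only one still being followed at year 12 - from censored to progressed:

```python
# Same invented cohort as before, except the one patient still being
# followed at year 12 progresses instead of being censored.
times  = [1.0, 1.5, 2.0, 3.0, 4.5, 6.0, 8.0, 9.0, 10.0, 12.0]
events = [1,   1,   1,   1,   1,   1,   0,   0,   0,    1]
for t, s in zip(*kaplan_meier(times, events)):
    print(f"year {t:4.1f}: {s:5.1%} progression-free")
# Only one patient remains at risk by year 12, so this single event
# drops the curve from 40% all the way to 0% - compared with the
# 10-point steps early on, when all 10 patients were at risk.
```

One person's change in status wipes out the entire plateau, whereas the same event in year one would have moved the curve only ten points. That is what it means for the tail of a Kaplan-Meier curve to be unstable.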
So who are these patients? It is interesting to know after the fact that some patients do very well, but it is far more helpful to know before selecting a therapy whether a cure is within reach. It would likely influence how you think about treating such a patient.
I previously wrote a post about the mutation status of the B-cell receptor, so-called IgVH mutation analysis. The new update from the German CLL8 study did an impressive job looking at the multitude of new prognostic markers in CLL. It showed that patients with unmutated IgVH (bad) had a substantially higher rate of other negative prognostic markers (such as NOTCH1, SF3B1, and TP53 mutations) than patients with mutated IgVH (better) - about 43% vs 24%. We also know from recent publications that newer technologies (next-generation sequencing) can find a much higher frequency of adverse markers because they are far more sensitive to low-level mutations. Those low levels appear to matter, because they seem to confer a similarly poor prognosis. I wouldn't be surprised if "next-gen" sequencing could identify an even larger split between the IgVH mutated and unmutated groups.
It turns out that patients with mutated IgVH did MUCH better long term than those with unmutated IgVH. Indeed, if the plateau in their data holds, it may occur in as many as 60% of patients with mutated IgVH, whereas no clear plateau is seen in patients with unmutated IgVH.
Is a 60% chance of long-term disease control (maybe cure) good enough to take FCR? It is abundantly clear that not all patients are sufficiently "fit" to receive FCR, and NCCN guidelines draw the line for full-dose FCR at age 70. Are there other variables you can look at to remove "bad actors" within this subset of better-risk patients? If you focus on the "good risk" patients with IgVH-mutated BCR and then exclude those with 17p or 11q deletions or adverse molecular markers such as TP53, NOTCH1, or SF3B1 mutations, what is the long-term disease control rate in that group? Certainly a lot higher than 60% - probably closer to an 80-90% chance of long-term disease control, and possibly cure, in this subset of patients.
These findings are very similar to those seen by the MD Anderson group. In their study, just under 40% of patients had not experienced any progression at the ten-year point. If you look at the associated table, however, it was strongly skewed in favor of patients with IgVH-mutated BCR: 49% vs 11%. They did not have access to FISH or molecular markers, so that information is not available. I find it compelling, though, that the numbers were very similar between the two studies. It is also interesting that the change in the shape of the curve occurred right around the 6-7 year mark in both data sets. This implies that if you fit this highly favorable profile and make it that far out, your chance of progression over the next few years appears to be very low.
The point I wish to make is this: the new drugs are very "sexy." It is very appealing to think of taking a non-chemo pill rather than chemotherapy, but if I am ever a patient with CLL and IgVH-mutated BCR lacking 17p/11q/TP53/NOTCH1/SF3B1 abnormalities, I will absolutely take chemoimmunotherapy, because there is a VERY GOOD chance I will not have to think about my CLL for many years - and based upon these two studies, I think the plateau in the survival curve is very provocative.
This blog post was originally intended to also cover what I would do if I had a 17p deletion, IgVH-unmutated BCR, or other high-risk markers, but the post became too long and unwieldy. I will tackle those possibilities in upcoming posts.
Thanks for reading.