Conference Reports for NATAP
 
  4th IAS (Intl AIDS Society) Conference on HIV Pathogenesis, Treatment and Prevention
Sydney, Australia
22-25 July 2007
 
 
 
New Data on Gains From Starting Antiretrovirals Earlier
 
 
  Reported by Jules Levin
4th IAS Conference on HIV Pathogenesis, Treatment, and Prevention
July 22-25, 2007
Sydney, Australia
 
Mark Mascolini
 
That perennial question--when to start antiretroviral therapy--got plenty of play at the 4th IAS Conference on HIV Pathogenesis, Treatment, and Prevention in Sydney. Indeed, the issue arose often enough to inspire some grumbling that the early-start faction had loaded the program. Or could it just be that more researchers want to know if stronger, safer regimens deserve an earlier try in people infected with a virus whose depredations (pillaging attacks) begin on the first day of infection and get worse with time?
 
This article reviews six Sydney studies that address the question of when to begin antiretroviral therapy.
 
Do conservative start guidelines need updating?
Seven years ago the Netherlands retooled its antiretroviral guidelines to advise starting therapy at a CD4 count between 200 and 350 rather than between 350 and 500. At the Sydney meeting Dutch investigators offered evidence suggesting that change may no longer make sense [1].
 
This analysis by Ard van Sighem (HIV Monitoring Foundation, Amsterdam) involved 4142 ATHENA cohort members diagnosed with HIV between 1998 and 2005. All were at least 16 years old, and all either remained untreated or began a potent antiretroviral regimen during that time. No one had AIDS when they joined the cohort, and none got infected by injecting drugs. In one part of the study the Dutch team focused on 1422 people (34.3%) with at least 4 years of follow-up, stratifying them by CD4 count at diagnosis: under 50, 50 to 200, 200 to 350, 350 to 500, and above 500. They figured progression to earliest AIDS diagnosis or death in 3111 people (75.1%) with more than 200 CD4s when diagnosed with HIV.
 
In the whole 4142-person group, 74% were men. While 52.7% got HIV during sex between men, 40.6% got infected heterosexually and the rest by other or unknown routes. Just over half of the cohort was born in the Netherlands, and 22.8% came from sub-Saharan Africa. While 6.3% entered ATHENA with fewer than 50 CD4 cells, 18.5% had 50 to 200, 23.0% had 200 to 350, 21.3% had 350 to 500, and 30.7% had more than 500. Viral load at diagnosis ranged from about 10,000 to 100,000 copies. Almost two thirds of the cohort (65.6%) started antiretrovirals at some point during follow-up.
 
During 1.3 to 4.7 years of follow-up, 81 people (1.9%) died and 255 (6.1%) got an AIDS diagnosis. CD4 counts at diagnosis proved higher in people diagnosed before 2000 than in those diagnosed later. For people with 200 to 350 CD4s, 350 to 500 CD4s, or more than 500 CD4s at diagnosis, counts rose significantly more slowly in those diagnosed after 2000--that is, after the guideline changes--than in those diagnosed before 2000 (Table 1).
 

[Table 1]

In the group with at least 200 CD4 cells at HIV diagnosis, 45 died, 135 wound up with AIDS, and 165 got AIDS or died. In an analysis statistically adjusted for age, entry CDC disease status, gender, and region of origin, every log unit higher viral load at cohort entry raised the progression risk 50% (hazard ratio [HR] 1.5, 95% confidence interval [CI] 1.2 to 1.8, P = 0.0001).
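
A hazard ratio works multiplicatively on the log viral load scale, so that 1.5 per log compounds quickly across clinically common viral loads. The short Python sketch below is purely illustrative: the 1.5 comes from the ATHENA analysis, while the 10,000-copy reference load and the log-linear extrapolation are assumptions of the standard Cox model, not numbers from the study.

```python
import math

# HR per 1-log10 higher viral load at cohort entry
# (ATHENA analysis: HR 1.5, 95% CI 1.2 to 1.8)
HR_PER_LOG = 1.5

def relative_hazard(viral_load, reference=10_000):
    """Hazard relative to a hypothetical 10,000-copy reference load,
    assuming the Cox model's log-linear dose-response."""
    return HR_PER_LOG ** (math.log10(viral_load) - math.log10(reference))

for vl in (10_000, 100_000, 1_000_000):
    print(f"{vl:>9,} copies/mL -> {relative_hazard(vl):.2f}x the hazard")
# 10,000 -> 1.00x, 100,000 -> 1.50x, 1,000,000 -> 2.25x
```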
 
A CD4 count between 200 and 350 at HIV diagnosis compared with more than 500 also boosted progression risk by 50% (HR 1.5), but that correlation fell just short of statistical significance (95% CI 0.9 to 2.5, P = 0.09). Progression risk varied hardly at all between people who entered the cohort with 350 to 500 CD4s and those who entered with more than 500 CD4s. When van Sighem figured progression risk in people diagnosed before versus after 2000, diagnosis with 200 to 350 cells in or before 2000 halved progression risk (HR 0.5, 95% CI 0.3 to 1.0, P = 0.04).
 
NATAP asked van Sighem why the overall worse prognosis with a CD4 count between 200 and 350 turned in the other direction in people diagnosed before 2000. The year-2000 change in Dutch guidelines probably explains this finding, he suggested via e-mail. Because of the guideline change, people with 200 to 350 CD4s were more likely to start combination therapy if diagnosed before 2000 than after 2000.
 
The hazard ratio of 0.5 in people diagnosed before 2000, van Sighem amplified, "shows that there is some disadvantage in being diagnosed after 2000, and we suppose that this is due to the delayed start of combination antiretroviral therapy after 2000. We argue that as antiretroviral drugs are becoming less toxic, combination antiretroviral therapy might be started at higher CD4 cell counts than currently recommended."
 
How long you stay under 350 CD4s matters
A 9858-person CASCADE cohort study found that lower current or nadir (lowest-ever) CD4 count independently raised the risk of death from AIDS or from severe infection, liver disease, or cancer [2]. A longer time spent with a CD4 count under 350 also independently inflated the odds of death from AIDS, non-AIDS infections, or liver disease. And a higher viral load before starting antiretrovirals upped the risk of AIDS death or death from non-AIDS infections or liver disease.
 
The CASCADE Collaboration collates numbers from 23 seroconverter cohorts in Europe, Australia, and Canada (www.ctu.mrc.ac.uk/cascade). At the Sydney meeting the CASCADE group focused on 9858 adults monitored for a median of 8 years since they picked up HIV. In that time two thirds (66.2%) started a potent antiretroviral combination. Median age at HIV seroconversion stood at 30 years (interquartile range [IQR] 25.2 to 36.6), and just over three quarters of cohort members were men. More than half of those in this analysis (54.4%) got infected during sex between men, 26.3% during sex between men and women, 14.3% by injecting drugs, and 5.0% by other routes.
 
During follow-up 597 people died, but only 158 of them (26.5%) from AIDS. The most frequent known non-AIDS causes were non-AIDS infections in 50 people (8.4%), liver disease in 46 (7.7%), a non-AIDS cancer in 46 (7.7%), suicide in 38 (6.4%), and cardiovascular disease in 36 (6.0%).
 
Neither suicide nor death from heart disease correlated with latest or nadir CD4 count. In an analysis adjusted for age, gender, mode of HIV transmission, hepatitis C virus (HCV) serostatus, first-time combination therapy, and viral load, the risk of dying from AIDS proved significantly higher with a lower CD4 count. The same held true for death from non-AIDS infections, liver disease, or non-AIDS cancers.
 
Confirming earlier findings, the CASCADE team found that a viral load above versus below 100,000 copies before starting antiretrovirals independently raised the risk of an AIDS death. A pretreatment load above 100,000 also jacked up the odds of dying from a non-AIDS infection, liver disease, or heart disease, but not from a non-AIDS cancer. This adjusted analysis showed that more time spent with fewer than 350 CD4s boosted the chance of dying from AIDS, liver disease, or a non-AIDS infection (Table 2).
 

[Table 2]

What this kind of analysis can't sort out is how much time people spent with fewer than 350 CD4s while waiting to start antiretrovirals and how much time they spent under 350 after one or more regimens faltered. Still, CASCADE collaborators believe their results "plead for earlier initiation of antiretrovirals to reduce the impact of the most frequent specific causes of death." They call for further study of non-AIDS diseases that don't cause death to get a better handle on the natural history of these diseases in the current treatment era.
 
Lower pretreatment CD4s imperil immune restoration
Where your CD4 count stands when you start antiretrovirals determines how many T cells--and what kind--return during a successful course of therapy [3]. That conclusion emerged from analysis of 978 people enrolled in ACTG 384, the trial comparing efavirenz with nelfinavir, each coupled with AZT/3TC or ddI/d4T. ACTG investigators split this group into subsets who started therapy in different CD4 brackets--fewer than 50, 51 to 200, 201 to 350, 351 to 500, and more than 500--tracking changes over 144 weeks in absolute CD4 count, naive and memory CD4 cells, and CD8 cells.
 
Before treatment and 144 weeks later, the crucial naive-to-memory cell ratio was worse in each lower CD4 bracket (P < 0.01) (Table 3). Although median ratios rose in nearly every group, they remained significantly askew when comparing each lower bracket with the next higher bracket (P < 0.01), except for the comparison between the 351-to-500 CD4 group and the more-than-500 group (Table 3). In a comparison group without HIV infection, the median ratio was 0.87--higher than in all of these ACTG 384 groups after 144 weeks.
 
Similarly, CD4-to-CD8-cell ratios were significantly lower in ACTG 384 participants before antiretroviral therapy than in HIV-negative people, and they stayed low through 144 weeks of therapy (Table 3). These deficits were worse in people with low baseline CD4 counts. The CD4-to-CD8 ratio in people without HIV is about 2.0.
 

[Table 3]

As in earlier studies, total CD4 counts in people who started therapy with lower CD4 sums never "caught up" with those in people who started at higher counts--at least not in the 144 weeks of this study. ACTG 384 enrollees who began their antiretrovirals with more than 350 CD4s usually saw their total CD4 counts, naive and memory CD4 counts, and naive-to-memory CD4 ratio approach or return to the normal levels seen in people without HIV. People who started antiretrovirals with fewer than 350 CD4s generally did not approach normal quotients for any of these T-cell measures.
 
The ACTG team concluded that their findings "support consideration" of a CD4 threshold above 350 cells for starting therapy "to allow restoration of normal T-cell populations."
 
Starting with more than 350 CD4s halves resistance risk
US patients who began antiretrovirals with more than 350 CD4s ran half the risk that major resistance mutations would emerge at regimen failure compared with people who started with fewer than 200 CD4s [4]. Those beginning treatment with fewer than 200 CD4 cells had nearly a five times higher risk of nucleoside or nonnucleoside mutations than people starting with more than 350 CD4s.
 
HIV Outpatient Study (HOPS) investigators reached those conclusions by analyzing 683 people who began therapy after January 1, 1999 (when genotyping became available) and reached a viral load below 1000 copies on their first regimen. Of these 683 people, treatment failed (rebounded above 1000 copies) in 243 (36%), 78 of whom had a resistance test after failure. Everyone had at least 90 days of antiretroviral experience.
 
Risk of virologic failure ran higher in people who began treatment with fewer than 200 CD4s (19.3%) than in those who began with 200 to 349 CD4s (17.8%) or more than 350 CD4s (15.7%). But these differences failed to reach statistical significance, perhaps because of the small number of people analyzed. Among the 78 people genotyped after failure, those who began with more than 350 cells maintained viral control (median 20.8 months) significantly longer than people who started with 200 to 349 cells (10.5 months) or fewer than 200 cells (7.7 months) (P = 0.026).
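
Comparisons like that come from standard time-to-event methods: Kaplan-Meier curves give the median months of viral control per baseline CD4 stratum, and a log-rank test gives the P value. Here is a minimal sketch of the approach using the lifelines library on synthetic durations (the HOPS patient-level data are not public, so the numbers below are invented to land near the reported medians):

```python
# Illustrative only; requires `pip install lifelines numpy`
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import multivariate_logrank_test

rng = np.random.default_rng(7)
# Hypothetical months of viral control per baseline CD4 stratum,
# roughly centered on the reported medians (7.7, 10.5, 20.8 months).
medians = {"<200": 7.7, "200-349": 10.5, ">=350": 20.8}
durations, groups = [], []
for stratum, m in medians.items():
    durations.extend(rng.exponential(scale=m / np.log(2), size=30))
    groups.extend([stratum] * 30)
durations, groups = np.array(durations), np.array(groups)

# Median time to virologic failure within each stratum. A real analysis
# would also pass event_observed= to fit() to handle censored follow-up.
for stratum in medians:
    kmf = KaplanMeierFitter()
    kmf.fit(durations[groups == stratum], label=stratum)
    print(stratum, round(kmf.median_survival_time_, 1), "months")

# Log-rank test across the three strata (HOPS reported P = 0.026)
result = multivariate_logrank_test(durations, groups)
print("log-rank P =", round(result.p_value, 3))
```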
 
Resistance mutations arose more often in people who started antiretrovirals at lower CD4 counts, though this difference reached statistical significance only for nucleoside mutations (Table 4) (note from Jules: small numbers of patients might explain not being significant). Critics of early treatment often argue that starting sooner heightens the risk of resistant virus because adherence wanes over time. But the HOPS team found more resistance in people starting with fewer than 350 CD4s, even though those starting with more CD4s took antiretrovirals twice as long.
 

[Table 4]

Although their findings all lean in the same direction--favoring early antiretroviral therapy--the HOPS team appropriately cautions that the results rest on a small patient sample. And cohort members were not evenly distributed across the three baseline CD4 brackets: 46 started therapy with a count under 200, 14 with 200 to 349 CD4s, and 18 with 350 or more CD4s. As Table 4 shows, when the researchers considered only people who took a nonnucleoside or a protease inhibitor, the sample size shrank even more.
 
NATAP asked the presenting HOPS investigator Jonathan Uy (University of Illinois, Chicago) if he cared to speculate on why a higher baseline CD4 count favors lower resistance rates when a first regimen fails. "Speculation is all we have," he replied.
 
"If the association is real," Uy wrote by e-mail after the conference, "it could be a patient-centric reason (patients who start lower are less adherent and therefore get more resistance, but we did address [that possibility] to some degree by only looking at those who initially suppressed) or a biological reason (earlier treatment is better, though it would be only conjecture on my part on how this actually leads to more resistance)."
 
Lower nucleoside toxicity risk with earlier therapy start
In a separate study HOPS researchers unearthed evidence that starting antiretrovirals with a higher CD4 tally lowers the risk of nucleoside-induced side effects [5]. The HOPS team also found that the risk of three major nucleoside toxicities faded if they did not emerge in the first year of therapy.
 
This analysis tracked three classic nucleoside side effects--peripheral neuropathy in a group of 1969 patients, anemia in 1398, and renal insufficiency in 1152. Again the HOPS collaborators grouped them by pretreatment CD4 count (0 to 49, 50 to 199, 200 to 349, 350 to 499, and 500 or more). Median follow-up stretched to 3.1 years in the neuropathy group, 4.3 years in the anemia group, and 4.5 years in the renal problem group.
 
Multivariate analysis determined that starting antiretrovirals at a CD4 count under 200 independently inflated the risk of peripheral neuropathy 1.54 times (P < 0.001), the risk of anemia 1.58 times (P = 0.030), and the risk of renal insufficiency 2.22 times (P < 0.001). Starting treatment with a CD4 count at or above 350 shaved the risk of neuropathy 12% and the risk of anemia 27%, but these improvements stopped well short of statistical significance (P = 0.437 and P = 0.165).
 
Of course nucleosides weren't the only culprits. Every extra 10 years of age upped the odds of neuropathy 39% (hazard ratio [HR] 1.39, P < 0.001), while d4T more than doubled the risk (HR 2.16, P < 0.001) and ddI raised the risk 34% (HR 1.34, P = 0.023). Men had a 47% lower risk of anemia than women (HR 0.53, P = 0.017), while AZT almost doubled the risk (HR 1.93, P = 0.017). Every 10 additional birthdays boosted the chance of renal insufficiency 50% (HR 1.50, P = 0.003), being male cut the risk 44% (HR 0.56, P = 0.025), and being white almost halved the risk (HR 0.54, P = 0.010). AZT therapy doubled the risk of kidney problems (HR 2.02, P = 0.005), but taking tenofovir had no impact.
 
Separate analysis of 895 people compared rates of neuropathy, anemia, and renal insufficiency in those who started antiretrovirals within a certain CD4 bracket with rates in people who delayed therapy until the next lower CD4 bracket. People who began treatment in each higher CD4 stratum almost always had lower side effect rates per 100 person-years than people who delayed. (The one exception was anemia among people who started with 200 to 349 CD4s versus fewer than 200 CD4s.) But none of these differences reached statistical significance.
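
The rates per 100 person-years behind that comparison reduce to simple arithmetic: events divided by total follow-up time, scaled by 100. A hedged one-function sketch with invented counts (the abstract does not report the denominators):

```python
def rate_per_100py(events: int, person_years: float) -> float:
    """Crude incidence rate per 100 person-years of follow-up."""
    return 100 * events / person_years

# Hypothetical example: 24 neuropathy cases over 800 person-years in an
# earlier-start stratum vs 30 cases over 600 person-years with deferral.
print(rate_per_100py(24, 800))  # 3.0 per 100 person-years
print(rate_per_100py(30, 600))  # 5.0 per 100 person-years
```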
 
More than 80% of the HOPS cohort never developed any of these three side effects. When the toxicities did arise, they almost always appeared within the first 6 to 12 months of therapy. If a side effect did not crop up in the first year of treatment, the chance that it would emerge later dropped with continued nucleoside use.
 
Higher AIDS and non-AIDS risk when starting under 250 CD4s
Besides yielding profuse data on the AIDS and non-AIDS risks of interrupting antiretroviral therapy [6,7], the SMART trial also showed that immediate treatment of naive or off-treatment people slices the risk of opportunistic disease and non-AIDS illness when compared with deferred therapy [8]. These findings led SMART researchers to propose that morbidity and mortality rates are "probably higher" in early HIV infection than previously imagined and that a randomized trial should be mounted to confirm their findings.
 
SMART randomized 5472 treatment-naive and -experienced people to stay on antiretroviral therapy regardless of CD4 count or to suspend or stay off therapy with a count above 350 and resume with a count under 250 [6]. A substudy by Sean Emery (University of New South Wales, Sydney) looked only at the 477 people who entered SMART with no antiretroviral experience (n = 249) or while not taking antiretrovirals (n = 228) [8]. People randomized to uninterrupted therapy began antiretrovirals immediately, while those randomized to drug breaks deferred treatment until their CD4s slipped under 250.
 
During follow-up the deferred-treatment group spent 17.6% of the time on treatment, while the immediate-treatment group spent 89.5% of the time taking their antiretrovirals. Median CD4 count when study participants started therapy measured 243 in the deferred arm and 435 in the immediate arm.
 
After almost 36 months of follow-up, the deferred-treatment group (those in the drug-interruption arm) had significantly higher risks of (1) opportunistic disease and death, (2) fatal and nonfatal opportunistic disease, (3) a serious non-AIDS illness, or (4) a composite endpoint combining endpoints 2 and 3, at the following hazard ratios and 95% CIs:
 
1. Opportunistic disease and death: HR 4.38, 95% CI 1.45 to 13.2, P = 0.009
2. Fatal and nonfatal opportunistic disease: HR 4.40, 95% CI 1.23 to 15.8, P = 0.02
3. Serious non-AIDS illness: HR 7.05, 95% CI 1.58 to 31.5, P = 0.01
4. Endpoints 2 and 3 combined: HR 5.08, 95% CI 1.91 to 13.5, P = 0.001
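
Those P values can be sanity-checked from the hazard ratios and their confidence limits, because a 95% CI spans roughly 1.96 standard errors on either side of log(HR). The check below is generic Wald-test arithmetic, not anything specific to the SMART analysis:

```python
import math

def wald_p(hr: float, lo: float, hi: float) -> float:
    """Two-sided Wald P value recovered from a hazard ratio and its 95% CI."""
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE of log(HR)
    z = abs(math.log(hr)) / se
    return math.erfc(z / math.sqrt(2))               # 2 * (1 - Phi(z))

print(round(wald_p(4.38, 1.45, 13.2), 3))  # ~0.009, endpoint 1 above
print(round(wald_p(5.08, 1.91, 13.5), 3))  # ~0.001, endpoint 4 above
```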
 
Risk of a new opportunistic disease or serious non-AIDS disease (the composite endpoint) rose with lower CD4 counts just before the diagnosis, and this risk always proved substantially higher in the deferred-treatment arm (Table 5). Also, people who deferred therapy spent much more time with fewer than 250 CD4s and with 250 to 349 CD4s than people in the immediately treated group (Table 5). People in the immediate group, on the other hand, spent lots more time with a count above 500.
 

[Table 5]

Emery and colleagues figured that immediate versus deferred antiretroviral therapy in this group trimmed the risk of opportunistic disease or a serious non-AIDS diagnosis by 5.7%. The higher risk in the deferred-treatment group, they concluded, "appears to be determined, at least in part, by time spent with lower CD4 cell counts."
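
Read as an absolute risk difference, that 5.7% implies a number needed to treat of about 18: treat roughly 18 people immediately rather than deferring in order to avert one opportunistic or serious non-AIDS event over the follow-up period. The arithmetic (which assumes the 5.7% is an absolute difference, as the wording suggests but does not state outright):

```python
arr = 0.057        # absolute risk reduction, immediate vs deferred therapy
nnt = 1 / arr      # number needed to treat to avert one event
print(round(nnt))  # ~18
```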
 
What's the next step?
SMART investigators believe their findings on treating HIV infection earlier (see preceding section) must be confirmed in a randomized trial, and they say so in the title of their study [8]. But this assertion may reflect some (author) selection bias, since the SMART crew consists largely of researchers who planned a prodigious randomized trial to get to the bottom of the treatment-break imbroglio. More than one HIV maven scoffed at SMART from the start because they thought earlier work made plain the dangers of drug holidays. Some of those same skeptics think the growing mountain of cohort data supporting earlier antiretroviral therapy makes a long, difficult, and costly trial unnecessary.
 
Everyone knows the inherent weaknesses of cohort data and after-the-fact analyses like the just-described SMART inquest [8]. But there's no doubt that earlier treatment looks smarter and smarter as these findings pile up. The SMART team itself cited several reasons in the rationale for this latest analysis:
 
- Risk of AIDS persists at CD4 counts above 500 [9].
- People taking antiretrovirals run a lower risk of AIDS at any CD4 count [10].
- Risk of AIDS drops as soon as antiretrovirals start [11].
- People with higher CD4 counts have a slimmer risk of serious heart, liver, and kidney disease [6,7,12-14].
 
Mark Mascolini writes about HIV infection (markmascolini@earthlink.net).
 
References
1. van Sighem AI, Gras L, Smit C, et al. A CD4 threshold below 350 cells/mm3 for starting HAART is associated with a higher risk of disease progression. 4th IAS Conference on HIV Pathogenesis, Treatment, and Prevention. July 22-25, 2007. Sydney. Abstract WEPEB016.
2. Marin B, Thiebaut R, Rondeau V, et al. Association between CD4 and HIV RNA with non AIDS-related causes of death in the era of combination antiretroviral therapy (cART) (AIDS and certain non-AIDS mortality higher with lower CD4s at start). 4th IAS Conference on HIV Pathogenesis, Treatment, and Prevention. July 22-25, 2007. Sydney. Abstract WEPEB019.
3. Robbins G, Chan E, Spritzler A, et al. Effect of baseline CD4 cell count on immune reconstitution during combination antiretroviral therapy in ACTG 384. 4th IAS Conference on HIV Pathogenesis, Treatment, and Prevention. July 22-25, 2007. Sydney. Abstract WEPEB080.
4. Uy J, Armon C, Buchacz K, Brooks J. Initiation of HAART at CD4 cell counts ≥350 cells/mm3 is associated with a lower prevalence of antiretroviral resistance mutations at virologic failure. 4th IAS Conference on HIV Pathogenesis, Treatment, and Prevention. July 22-25, 2007. Sydney. Abstract WEPEB017.
5. Lichtenstein K, Armon C, Moorman A, et al. Initiation of antiretroviral therapy at higher CD4+ T cell counts reduces incidence of nucleoside analogue toxicities acutely and risk for later development with continued use of these agents in the HIV outpatient (HOPS) cohort. 4th IAS Conference on HIV Pathogenesis, Treatment, and Prevention. July 22-25, 2007. Sydney. Abstract MOPEB016.
6. The Strategies for Management of Antiretroviral Therapy (SMART) Study Group. CD4+ count-guided interruption of antiretroviral treatment. N Engl J Med. 2006;355:2283-2296.
7. Phillips A, Carr A, Neuhaus J, et al. Interruption of ART and risk of cardiovascular disease: findings from SMART. 14th Conference on Retroviruses and Opportunistic Infections. February 25-28, 2007. Los Angeles. Abstract 41.
8. Emery S, SMART Study Group and INSIGHT. Major clinical outcomes in patients not treated with antiretroviral therapy at baseline in SMART: a rationale for a trial to examine early treatment of HIV disease. 4th IAS Conference on HIV Pathogenesis, Treatment, and Prevention. July 22-25, 2007. Sydney. Abstract WEPEB018.
9. Phillips A, CASCADE Collaboration. Short-term risk of AIDS according to current CD4 cell count and viral load in antiretroviral drug-naive individuals and those treated in the monotherapy era. AIDS. 2004;18:51-58.
10. Podlekareva D, Mocroft A, Dragsted UB, et al. Factors associated with the development of opportunistic infections in HIV-1-infected adults with high CD4+ cell counts: a EuroSIDA study. J Infect Dis. 2006;194:633-641.
11. Egger M, May M, Chene G, et al. Prognosis of HIV-1-infected patients starting highly active antiretroviral therapy: a collaborative analysis of prospective studies. Lancet. 2002;360:119-129. Erratum in: Lancet 2002;360:1178.
12. Lau B, Gange SJ, Moore RD. Risk of non-AIDS-related mortality may exceed risk of AIDS-related mortality among individuals enrolling into care with CD4+ counts greater than 200 cells/mm3. J Acquir Immune Defic Syndr. 2007;44:179-187.
13. Baker J, Peng G, Rapkin J, et al. HIV-related immune suppression after ART predicts risk of non-opportunistic diseases: results from the FIRST study. 14th Conference on Retroviruses and Opportunistic Infections. February 25-28, 2007. Los Angeles. Abstract 37.
14. Smit C, Geskus R, Walker S, et al. Effective therapy has altered the spectrum of cause-specific mortality following HIV seroconversion. AIDS. 2006;20:741-749.