Growing Pains: OCM Results Need Close Look

Oncology Live®, Vol. 19/No. 15

Darcie Hurteau, MBA, and Alyssa Dahl, PhD, MPH, discuss the growing pains of the Oncology Care Model, a complicated payment program that is still establishing administrative processes.


Earlier this year, the Centers for Medicare & Medicaid Services (CMS) released the first round of reconciliation data from the Oncology Care Model (OCM) bundled payment program. As we at DataGen parsed the information, we found that some practices that performed well were surprised to learn they had achieved savings, while other participants were pleased to find that their performance met expectations.

The unexpected nature of these results can be chalked up to the growing pains of a complicated payment program that is still establishing administrative processes. CMS had to release claims data a second time after realizing it hadn't shared the same set of claims information used to calculate practice performance results. The agency wasn't aware of the problem until DataGen (and probably other organizations, too) reported missing claim lines and identified specific instances where we couldn't re-create the expenditures CMS reported for episodes of care. Because this was the first reconciliation, CMS may not have realized what data participants would need to understand the reconciliation and trust the results.
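For illustration, the expenditure check described above can be thought of as a simple claim-line roll-up. The following is a minimal sketch with hypothetical record layouts, field names, and dollar amounts, not the actual OCM claim files or CMS' reconciliation methodology: it sums allowed amounts by episode and flags episodes whose re-created total doesn't match the reported figure, which is how missing claim lines tend to surface.

```python
from collections import defaultdict

# Hypothetical claim-line records: (episode_id, allowed_amount)
claim_lines = [
    ("EP001", 1250.00),
    ("EP001", 4380.50),
    ("EP002", 980.25),
]

# Hypothetical episode expenditures as reported in a reconciliation file
reported_expenditures = {"EP001": 5630.50, "EP002": 2100.00}

def recreate_expenditures(lines):
    """Sum allowed amounts across each episode's claim lines."""
    totals = defaultdict(float)
    for episode_id, allowed in lines:
        totals[episode_id] += allowed
    return totals

def flag_mismatches(recreated, reported, tolerance=0.01):
    """Return episodes whose re-created total differs from the reported figure."""
    mismatches = {}
    for episode_id, reported_total in reported.items():
        diff = recreated.get(episode_id, 0.0) - reported_total
        if abs(diff) > tolerance:
            mismatches[episode_id] = diff
    return mismatches

recreated = recreate_expenditures(claim_lines)
print(flag_mismatches(recreated, reported_expenditures))
# {'EP002': -1119.75}  -> suggests claim lines are missing for EP002
```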

As we checked the reconciliation calculations, we found instances where CMS had failed to provide all of the methodological information required to replicate results. Missing were the criteria for attributing cancer type and the rules for including and excluding claims when identifying the characteristics used to set target prices for care. Typically, CMS' other bundled payment programs provide explicit specifications for how episodes are constructed, the type of detail that has been absent from the OCM.

Distinct Performance Patterns

Despite this lack of clarity, several interesting pieces of information emerged from this initial reconciliation. For instance, this was the first time that participants saw the application of the novel therapy adjustment, which helped increase payments for eligible practices with greater adoption of newly FDA-approved oncology drugs. There is some confusion and a need for further assessment here, because some practices received a smaller payment adjustment than expected, and other practices didn't receive an adjustment at all.

Additionally, we observed some distinct performance patterns. Although it was very difficult for practices to achieve savings on breast cancer episodes, many practices had the opposite experience with intestinal cancer episodes. Discussions with oncologists familiar with practice patterns for intestinal cancer suggested that the most common treatment regimens use drugs of similar cost, which helps the model produce more predictable target prices for intestinal cancer episodes. Spending patterns in breast cancer varied significantly, and the target price model may require further breast cancer-specific risk adjustment to handle predictors of episode spending outside the provider's control. For example, when an oncologist places a patient on a clinically indicated high-cost drug regimen with no comparably priced alternative, the model lacks the sensitivity to reflect that regimen's impact on episode cost within the target price.
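To make that dynamic concrete, here is a minimal sketch using made-up episode counts, target prices, and spending figures, and a simplified savings definition (target price minus actual spending) rather than the OCM reconciliation methodology. It shows how tightly clustered regimen costs keep episodes near their targets, while a single clinically indicated high-cost regimen can wipe out the savings earned on other episodes of the same cancer type when the target price doesn't reflect it.

```python
# Hypothetical episodes: (cancer_type, target_price, actual_spending)
episodes = [
    ("intestinal", 42000.0, 39500.0),
    ("intestinal", 41000.0, 40200.0),
    ("breast",     33000.0, 31000.0),
    ("breast",     33000.0, 58000.0),  # clinically indicated high-cost regimen
]

def episode_savings(target, actual):
    """Positive values are savings relative to the target price; negative are losses."""
    return target - actual

# Group simplified savings results by cancer type
by_type = {}
for cancer_type, target, actual in episodes:
    by_type.setdefault(cancer_type, []).append(episode_savings(target, actual))

for cancer_type, results in by_type.items():
    print(cancer_type, "net:", sum(results))
# intestinal net: 3300.0   -> stable regimen costs keep actuals near target
# breast net: -23000.0     -> one high-cost regimen erases the other episode's savings
```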

Measuring Impact

The surprising results have motivated practices to dig deeply into the first reconciliation and prepare for future ones. During performance period 1, most participants were not focused on improving financial performance; most of their efforts centered on collecting data and implementing care transformation requirements. Additionally, a lack of clarity on patient attribution (for purposes of physician payment) during the first performance period made it challenging to monitor aggregate performance.

As with any new initiative, monitoring of ongoing performance periods has become more manageable with experience. Participants anticipate that they will soon be able to measure the impact of their care transformation efforts using the next round of reconciliation data.

Practices that didn’t achieve savings during the first performance period will need to consider taking on downside risk. However, this may be too high a hill to climb for some practices, and it will be interesting to see who among that group of participants remains in the program and who drops out. The need to make that decision won’t emerge until the fourth reconciliation in mid-2019, but forward-thinking participants should start developing their risk-mitigation strategies now.

Although the first reconciliation was surprising and sometimes frustrating, the outlook for success for OCM participants remains good. When performance period 2 results are released, practices will need to critically analyze these data, taking care standards and implementation activities into account.