The lies data tell: Before passing judgment, validate the metrics

Editor’s Note: Real Talk is a recurring Cardiovascular Business feature, with stories reflecting uncomfortable, look-the-other-way realities. It is told from an anonymous perspective to encourage honesty and objectivity, without sugarcoating. If you have a story, experience or lesson to share, email kbdavid@cardiovascularbusiness.com.

Quality reporting has changed the stakes in all aspects of healthcare. We’ve come to rely on metrics to guide decision-making across the clinical and business enterprises because the data never lie. Or do they?

Metrics have become an important form of currency in healthcare, especially in data-rich specialties like cardiology and vascular surgery. And now, with third-party payments increasingly tied to performance on these metrics, everyone pays attention.

True story

Our most recent mortality report arrived, and our performance had plummeted: our mortality index was well over the expected 1.0. I wasn’t totally surprised; the buzz around the section had prepared me. Still, a sudden slide on a metric as important to our center’s identity as mortality knocks you back.

A full chart audit confirmed the source: Dr. C, a newly hired and highly skilled risk-taker who saw a large volume of patients and was willing to operate on many who had no other options—even on patients his own colleagues had deemed inoperable. His strong will and fortitude made him an effective patient advocate but also created intense metric “liability” for the group. Subtract his procedures and the group’s mortality would be the lowest in not just the region but the state.  

Our medical director and I scheduled a meeting with Dr. C to review the data. Dr. C pushed the reports across the table. “It doesn’t tell the whole story,” he announced. “I’m not killing patients; I’m giving them a chance. Without me, all of these people were headed straight into the ground. Twice as many are walking around because I was willing to take a risk. No one died who wasn’t already on the fast train to that destination.”

He stood up, pushed back his chair and delivered a parting shot. “You are worried about numbers. I’m worried about taking care of patients.” He walked out of the office, closing the door with slightly more force than necessary.

My medical director and I looked at each other, not sure what to do next. The director was in the crosshairs of the health system leadership for the outcomes but still thought Dr. C had a valid point.

I was stuck. Was the medical director sympathetic because early in his career he’d been branded a risk-taker but was eventually validated and rewarded?

How could the data say one thing even as two well-respected (and well-published) experts disagreed?  

If these patients had little expectation of survival, why were we in this predicament? Should I believe the data? How could they be totally wrong?

Armed with the stats, many of Dr. C’s physician colleagues leveraged our weekly team conference for their agenda. Dr. C’s cases were continually the focus because the team was tuned in to mitigating patient risk with a variety of strategies, including pressuring him to avoid complex cases and suggesting the care coordinators should direct risky cases to other physicians. They were trying to “protect him from himself,” they reasoned.

When Dr. C insisted that his patients were sicker so, of course, more of them were going to die, several colleagues rebuffed him. “Everyone with outcomes issues says that,” they argued, “but data don’t lie.” 

Digging deeper

These questions plagued me even as Dr. C’s colleagues were labeling him an outlier. They also made me realize how far outside my expertise the clinical elements of this issue lay.

I joined forces with a partner who, it turned out, had the key to unlocking the mystery.

Nurse A was a seasoned cardiothoracic ICU nurse who had worked bedside for many years and was now the unit manager. He’d seen the difference Dr. C made with his patients and believed Dr. C was getting a bad rap.

Dr. C’s mortality index of greater than 1.0 was a direct result of his observed deaths (O) exceeding his expected ones (E). “How can his O/E be so far off if his patients were expected to die anyway?” I asked Nurse A, who offered to review the charts of the deceased for patterns pointing to nursing, staffing or other care issues. Our medical director enthusiastically supported our efforts, even admitting he hoped there was a nursing, anesthesiology or respiratory therapy issue.
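For readers less familiar with the arithmetic, here is a minimal sketch of how an O/E index is computed, assuming each case carries a risk-adjusted predicted probability of death. The values and the mortality_index helper below are invented for illustration, not drawn from any real registry model.

```python
# Minimal sketch of an observed-to-expected (O/E) mortality index.
# Assumes each case carries a risk-adjusted predicted probability of
# death; all values here are invented for illustration.

def mortality_index(cases: list[dict]) -> float:
    """Observed deaths divided by the sum of expected (predicted) risks."""
    observed = sum(1 for case in cases if case["died"])
    expected = sum(case["expected_risk"] for case in cases)
    return observed / expected

# Hypothetical panel: 3 deaths among 10 patients, each predicted at 15% risk.
cases = [{"died": i < 3, "expected_risk": 0.15} for i in range(10)]
print(f"O/E = {mortality_index(cases):.2f}")  # 3 / 1.5 = 2.00, well over 1.0
```

The point is that the denominator is only as good as the predicted risks feeding it, and those risks come from what is documented in the chart.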

Nurse A soon had a breakthrough. He found nothing unusual in the first two charts, but everything came together with the third: a patient he’d helped manage.

Patient 3’s stay had been long. There was excellent documentation—screen after screen of detail, actually—but a key element was missing: a comprehensive list of comorbidities present on admission. The sheer volume of data about the patient’s care had masked the absence of this information.

When the quality team abstracts data, they report only what is in a chart; they can’t review other visits or encounters to supplement the story. Nurse A, who remembered Patient 3, immediately recognized that the patient’s actual pre-intervention condition was much worse than the documentation suggested. He knew what the chart abstracters couldn’t have known or reported.

Compounding the problem was that Dr. C had not been the admitting physician. Patient 3 had been in the hospital five days before she was referred and transferred to Dr. C’s care, and he had relied on others to supply accurate and complete documentation.

Dr. C didn’t fully understand the metrics or how documentation was affecting his mortality ratings. He accepted a high mortality index as a natural and obvious consequence of treating his very sick patient population. No one had ever explained to him that if the documentation indicates death is the expected outcome, it makes the actual death “OK” from a quality standpoint. Even after we demonstrated how the metric was risk-adjusted, he insisted the data interpretation must be too simplistic to reflect just how sick his patients were.      

“Shouldn’t it be obvious how sick she was based on her clinical condition?” Dr. C demanded. “Her initial labs alone should tell you this patient was likely going to die!”

On the one hand, Dr. C didn’t grasp why we seemed fixated on the chart’s documentation. On the other hand, our data gurus didn’t understand the disease etiology well enough to know these missing elements were not entirely missing—rather, they just weren’t properly or fully represented.

It helped when we walked Dr. C through two of his cases with similar treatment, care and outcomes—both resulting in death. Both patients had advanced heart failure and a host of relevant comorbidities. In the first case, these were clearly documented along with disease progression. In the second, Dr. C and Nurse A understood that death was expected based on lab results, vent management details, the electrocardiogram and other data, but the disease progression wasn’t documented well enough to indicate the expected outcome.
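To make the contrast concrete, here is an illustrative calculation, with invented risk values, of how the same deaths read very differently depending on whether the comorbidities present on admission make it into the chart. Real registries derive each patient’s expected risk from the conditions coded there.

```python
# Illustrative only: invented risks showing how documentation moves the
# expected (E) side of the O/E index while observed deaths (O) stay fixed.

observed_deaths = 3

# Comorbidities missing from the chart: each patient appears low-risk.
e_as_documented = 10 * 0.15                 # E = 1.5
print(observed_deaths / e_as_documented)    # O/E = 2.0, flagged as an "outlier"

# Same patients with comorbidities fully documented on admission.
e_fully_documented = 10 * 0.40              # E = 4.0
print(observed_deaths / e_fully_documented) # O/E = 0.75, better than expected
```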

Dr. C understood the problem better now, but the impasse might have continued without Nurse A’s astute intervention. He stepped in, ensuring that charts included the data the abstracters needed. He was also instrumental as we trained clinical documentation specialists to identify similar challenges in other high-risk, highly innovative areas—and Dr. C became their champion! These specialists taught our physicians how to adequately document conditions and comorbidities in complex patient encounters, and why it was important to do so.

Debrief

Dr. C’s mortality index soon dropped to under 1.0. Despite this accomplishment and our desire to retain him, Dr. C decided to move on about a year later. Family reasons were compelling him to move to another part of the country, he said, but the challenges of the previous year must have been a precipitating factor. Although the metric dilemma had been resolved from a reporting standpoint, the rumors were still whispered in the corridors.

By blindly trusting that the data were accurate, the practice had failed Dr. C, at least initially, and our institution lost a talented physician who is now driving the standard of care in his disease specialty, but in another health system.

Quality reviews are vital, necessary and invaluable for identifying problems so we can address them. But they are not foolproof. Both clinical and administrative leaders have a responsibility to validate data and examine them in context before taking action. Doing otherwise could stifle the innovative, risk-taking physicians who are moving healthcare forward for the benefit of our practices and patients.

""
Kathy Boyd David, Editor, Cardiovascular Business

Kathy joined TriMed in 2015 as the editor of Cardiovascular Business magazine. She has nearly two decades of experience in publishing and public relations, concentrating in cardiovascular care. Before TriMed, Kathy was a senior director at the Society for Cardiovascular Angiography and Interventions (SCAI). She holds a BA in journalism. She lives in Pennsylvania with her husband and two children.
