Is evidence in the eye of the beholder? Robust data go a long way toward removing ambiguity, but several recent reports show that interpretive gray zones still exist, posing challenges for physicians and hospitals.
A study published in the May 15 issue of the Journal of the American Medical Association sent a clear message to electrophysiologists and others involved in the care of patients without a pacing indication who were implanted with implantable cardioverter-defibrillators (ICDs) for primary prevention of sudden cardiac death. Results from the randomized, controlled clinical trial found that patients implanted with dual-chamber ICDs had higher rates of complications at one year than did counterparts who received single-chamber devices.
The research team, led by Pamela N. Peterson, MD, MSPH, of the Denver Health Medical Center, didn’t sugarcoat its conclusion. “Despite the absence of compelling evidence to support these more costly devices, which are also associated with higher complication rates, current practice is highly variable,” they wrote. “Our study does not provide evidence that would support the more costly and more morbid device for patients receiving an ICD for primary prevention.”
Nor did a panel of experts hold back in a review of data on limiting daily sodium intake to 1,500 mg. The panel had been assembled by the Institute of Medicine to assess current evidence on sodium intake and outcomes. It described the evidence on associations between sodium intake below 2,300 mg and cardiovascular benefits or risks in the general population as "insufficient and inconsistent" and took aim at methodologies and data collection.
The panel emphasized that excessive sodium intake poses health risks but found the evidence too murky to support recommendations for limits below 2,300 mg. The American Heart Association, which stands behind the 1,500 mg ceiling, countered that it disagrees with the report's key conclusions and said the panel's assessment went beyond the scope of some of the studies.
In a reality check that nonetheless struck a positive note, a study in Circulation compared two methods and found that physicians who visually interpreted the severity of coronary stenosis tended to estimate diameter stenosis as higher than did assessments via quantitative coronary angiography. But the 8.2 percent difference actually heralds an improvement, editorialists wrote.
Even in an evidence-driven profession, real-world practice includes many shades of gray. How does your practice proceed when data fail to give a clear signal? We’d love to hear from you.
Cardiovascular Business, editor