An interesting story of advanced analytics used not to fight fraud, but potentially to enable it, was highlighted last week when the Justice Department filed a civil fraud complaint against both a health plan and its technology services provider.
At the core of the complaint is the accusation that the technology services provider used advanced medical records analysis to exploit a CMS payment mechanism that pays more for sicker patients. The analysis supposedly identified patients with missed diagnoses, which the health plan then used to submit additional claims. But the results the provider churned up were anything but accurate, resulting in millions of dollars in improperly submitted charges. CMS might never have known had a whistleblower not come forward.
Was this a case of technology not living up to its promise? A health plan duped by the questionable claims of a service provider, or one willing to turn a blind eye in exchange for millions in additional revenue? Something else? That is an answer best left to the courts. But the suit could have a chilling effect on healthcare providers seeking valid approaches to being paid appropriately without the high cost of mass review of medical records. Will other advanced machine learning-based analytics be met with skepticism by providers worried about the quality of the results, or about an increased potential of being flagged for audit by CMS?
Reviewing medical records to identify patient conditions is an expensive and labor-intensive task, so it's not surprising that the idea of automated analysis is appealing. So how does medical record analysis work anyway? The simple answer is that the text of medical records, sometimes even handwritten, is parsed by trained machine learning algorithms to identify words or phrases that imply a condition that may have been missed, unobserved, or miscoded. The more complex answer is that a large body of data is curated, tagged according to specific requirements, and used to train machine learning systems. The systems, armed with real and validated medical diagnoses, learn to identify key attributes of patient charts that reliably point to the presence of specific medical conditions. Once trained, these systems can parse millions of pages of records, flagging data that indicates the presence of a missed diagnosis. They can be used to audit records for quality of care, as is common for HEDIS audits, but they can also be applied to revenue cycle processes, as is the case here.
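To make the idea concrete, here is a deliberately simplified sketch of the flagging step described above. Real systems rely on trained NLP models; this toy version substitutes a hand-built dictionary of trigger phrases mapped to diagnosis codes, and every phrase and code in it is a hypothetical example chosen for illustration, not a real system's vocabulary.

```python
# Hypothetical mapping of chart phrases to ICD-10 codes (illustrative only;
# a production system would use a trained model, not a lookup table).
PHRASE_TO_DIAGNOSIS = {
    "metformin": "E11.9",                # suggests type 2 diabetes
    "a1c of 9": "E11.9",                 # elevated A1C also suggests it
    "albuterol inhaler": "J45.909",      # suggests asthma
    "ejection fraction of 35": "I50.9",  # suggests heart failure
}

def flag_possible_missed_diagnoses(chart_text, coded_diagnoses):
    """Return candidate diagnosis codes implied by the chart text but
    absent from the codes already submitted for the patient."""
    text = chart_text.lower()
    candidates = {
        code for phrase, code in PHRASE_TO_DIAGNOSIS.items()
        if phrase in text
    }
    return sorted(candidates - set(coded_diagnoses))

chart = "Patient refills metformin; A1C of 9.2 noted at last visit."
# The chart implies diabetes, but only hypertension (I10) was coded.
print(flag_possible_missed_diagnoses(chart, ["I10"]))  # ['E11.9']
```

The essential shape is the same as the systems described above: extract signals from unstructured chart text, compare them against the diagnoses already on file, and surface the gap for review.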
It is complex stuff, and ultimately no automated solution can ever be 100% accurate, which can produce erroneous evaluations of a given patient's true health status. Given the complexity of medical records, any organization would be ill-advised to completely offload the work of review without paying close attention to how the systems are trained and measured. And no automated review process should be left unattended.
There are ways to combat the potential for fraud, whether it results from the supposed use of advanced chart analytics or from manual review, and they all come down to making submissions auditable and verifiable. It would not be unreasonable for CMS to require that any new charges based on a revised risk score be submitted along with the corpus of medical record data and the system output used to justify them. That data could then be run through a similar automated process and the results compared. Comparing the plan's output with the audit system's results, a small percentage of discrepancies could be chalked up to differences between the analytic systems used, while large deviations would signal potential fraud.
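The comparison step suggested above can be sketched in a few lines. This is an assumption-laden illustration, not CMS policy: the function names, the disagreement metric (symmetric difference over the union of flagged codes), and the 5% tolerance are all invented here to show the shape of the check.

```python
def discrepancy_rate(plan_flags, audit_flags):
    """Fraction of flagged diagnosis codes the two systems disagree on,
    measured as symmetric difference over the union of both sets."""
    plan, audit = set(plan_flags), set(audit_flags)
    union = plan | audit
    if not union:
        return 0.0
    return len(plan ^ audit) / len(union)

def audit_verdict(plan_flags, audit_flags, tolerance=0.05):
    """Hypothetical decision rule: small disagreement is chalked up to
    differences between analytic systems; large deviation is flagged."""
    rate = discrepancy_rate(plan_flags, audit_flags)
    return "potential fraud" if rate > tolerance else "within tolerance"

# The plan's system flagged four new diagnoses; the audit rerun of the
# same record corpus supports only two of them.
plan_output = ["E11.9", "I50.9", "J45.909", "N18.3"]
audit_output = ["E11.9", "I50.9"]
print(audit_verdict(plan_output, audit_output))  # potential fraud
```

The point is not the particular threshold but the workflow: because the plan must submit both the records and the system output, a regulator can reproduce the analysis independently and quantify how far the submitted charges deviate from it.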
The reality is that healthcare organizations and plans should be able to avail themselves of technology that ensures proper payment without fear of lawsuits, because it's easy to be fooled out there. A sound strategy for using automation is to ensure that the technology and processes employed are designed to be verifiable, both internally and by third parties. Here is what you need.