The Interoperability Rules: Early Implementation Challenges

Jan 15, 2021 | Interoperability, Policy

Overview/Background

In a prior blog, we provided an overview of two important federal regulations that were finalized last year by the Office of the National Coordinator for Health Information Technology (ONC) and the Centers for Medicare & Medicaid Services (CMS).  Together, these are known as the interoperability rules.  Their goal is to ensure that different health information technology (IT) systems and software applications are able to access and exchange data accurately, seamlessly, and in a timely manner, and to use the shared information to optimize the health of individuals and populations.  In this blog, we take a closer look at some of the anticipated challenges in implementing the interoperability rules.

Timing

The first issue that comes to mind for many is timing: when the interoperability requirements are set to take effect.  For many health IT developers, as well as health care providers, health plans, and other health care organizations, compliance with the interoperability rules will require significant changes to existing systems and processes, or the creation of new ones.  The challenge of meeting the timelines specified in the final regulations has been exacerbated by upended priorities during the COVID-19 pandemic.  In recognition of this, many of the initial compliance dates have already been pushed back several months.  For example, ONC will exercise discretion in enforcing almost all new requirements until three months after each initial compliance date or timeline identified in its final rule.  We expect timing issues, including compliance dates, active enforcement timeframes, and the realization of actual operational benefits, to remain at the forefront for the near future due to the continuing uncertainty of COVID-19, among other reasons.

Cost

Next, and inherently related to timing, is the issue of cost.  Meeting the standards outlined in the final rules will require large investments on the part of health plans, providers, and the health IT industry at large.  According to federal government estimates in the ONC final rule, initial implementation and ongoing maintenance of the new requirements will cost billions of dollars across the industry, with potentially thousands of hours in labor required per application.  Costs to health plans, hospitals, and other impacted organizations are expected to be highest in the first year, as organizations create new resources to meet the interoperability requirements.  For example, CMS estimated that health plans will incur about $1.6 million, on average, in initial-year development costs for new application programming interfaces (APIs), with ongoing costs totaling around $200,000 or less annually per organization.  While some organizations see a path toward down-the-road efficiencies and savings once these changes have been made and adopted, others have expressed concern that these costs will be passed on to consumers in the form of higher prices for health care services and higher premiums.  Consequently, we expect that organizations able to fold compliance work into their broader development and process improvement efforts will actually benefit from compliance, consistent with the beneficiary-centric focus the rules intend.
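
For readers less familiar with the APIs behind these estimates: the CMS rule calls for standards-based interfaces built on HL7 FHIR.  As a minimal Python sketch, a client request against such an API might look like the following; the base URL, patient ID, and access token are hypothetical placeholders rather than any actual plan’s implementation.

```python
import requests

# Hypothetical FHIR R4 endpoint exposed by a health plan's Patient Access API.
# The base URL, patient ID, and bearer token below are placeholders.
FHIR_BASE = "https://api.exampleplan.com/fhir/r4"
ACCESS_TOKEN = "example-token"  # in practice, obtained via an OAuth2 flow


def get_explanation_of_benefit(patient_id: str) -> dict:
    """Fetch a patient's adjudicated claims (ExplanationOfBenefit resources)."""
    response = requests.get(
        f"{FHIR_BASE}/ExplanationOfBenefit",
        params={"patient": patient_id},
        headers={
            "Authorization": f"Bearer {ACCESS_TOKEN}",
            "Accept": "application/fhir+json",
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()  # a FHIR Bundle of ExplanationOfBenefit resources


if __name__ == "__main__":
    bundle = get_explanation_of_benefit("12345")
    print(f"Returned {bundle.get('total', 0)} ExplanationOfBenefit resources")
```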

Protecting Sensitive Patient Information While Not Obstructing the Promise of Interoperability

Finally, and possibly the most notable issue, is the intersection with privacy and the protection of patient data.  The interoperability rules offer the potential for the science of risk adjustment, predictive modeling, and other analytic processes to be unleashed on a substantially larger scale and on a near real-time basis.  Historically, health plans, as well as hospitals and other provider organizations, have held information from medical records and other protected health information (PHI) very tightly.  For example, it currently can take days to get access to prior medical records, as the process often requires paper forms, signatures, and manual searches, as well as delivery methods that are not fully automated.  Such processes can render data stale by the time it is shared and make large-scale analyses impractical.  We see these effects clearly in the current risk adjustment data validation (RADV) process conducted by CMS for the Medicare Advantage program.  Because of the lack of connectedness across data systems and the intensity of resources required to obtain and review records, the process can result in millions of dollars in payments being determined by the review of just a few dozen records from each health plan audited.

While interoperability has the potential to revolutionize these processes, it will require health plans and providers to achieve an unprecedented level of openness while still ensuring privacy and security.  Connecting multiple, often disparate, data sources through different portals, run by various users, creates many opportunities for implementation errors and protocol vulnerabilities that can result in unauthorized disclosures of information.  Health plans and providers, as well as the health IT vendors working on their behalf, will need to excel at opening their data, identifying data covered under the interoperability rules, such as lab results, and then releasing it only to authorized parties.  At the same time, they will need to protect sensitive health information, such as a patient’s mental health and substance abuse history, as well as information about non-patients collected as family history in a patient’s electronic medical records.

During the public comment period on the interoperability rules, many commenters suggested that the use of common safeguards (e.g., multi-factor authentication, risk analysis, technical evaluations, auditing, and encryption) be required.  CMS and ONC by and large agreed with these concerns and have put a strong focus on privacy and security in the interoperability rules.  However, the rules stop short of requiring specific privacy and security provisions such as multi-factor authentication and the encryption of authentication credentials.  They also stop short of mandating all of the use cases for which safeguards are applicable.  Rather, IT developers will need to attest to the ability of their systems to incorporate such security and privacy measures, and will have the option not to use them in certain cases.  This approach allows for flexibility in a world of rapidly evolving technology, in which it is unrealistic to enumerate every potential use case a priori.  It also allows flexibility in striking the balance of permission and restriction needed to gain the benefits of interoperability while protecting patients’ information.
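
To make the “releasing it only to authorized parties” idea concrete, the sketch below shows safeguard logic of the kind described above.  The scope string follows the SMART on FHIR convention; the sensitivity tag codes and the filtering rules are simplified assumptions for illustration, not requirements drawn from the rules themselves.

```python
# Simplified sketch of "release only to authorized parties" logic.
# The scope string follows the SMART on FHIR convention; the sensitivity
# tag codes and their handling are illustrative assumptions, not
# requirements specified in the interoperability rules.

SENSITIVE_TAGS = {"mental-health", "substance-abuse"}  # hypothetical tag codes


def authorized_results(granted_scopes: set[str], observations: list[dict]) -> list[dict]:
    """Return the lab results a caller may see, withholding sensitive records."""
    # Require a read scope for Observation resources (lab results).
    if "patient/Observation.read" not in granted_scopes:
        raise PermissionError("caller lacks the scope needed to read lab results")

    released = []
    for obs in observations:
        tags = {t.get("code") for t in obs.get("meta", {}).get("tag", [])}
        if tags & SENSITIVE_TAGS:
            continue  # withhold records flagged as sensitive
        released.append(obs)
    return released
```
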
In particular, key to ensuring large-scale and efficient deployment of advances in risk adjustment and predictive analytics is that the authorization and credentialing processes put in place are not more onerous than needed.  For example, machine learning and other artificial intelligence (AI) systems have the potential to incorporate into risk adjustment auditing processes all medical records, in electronic format, for all health plan enrollees.  As we discussed in a prior blog, such AI systems could produce more accurate health care payments, as well as substantially lower resource use and costs for health plans and providers, if broad data access were permissible and logistically possible.  However, the degree to which these efficiency gains are achieved will be influenced by the level and number of authentications required.  For example, requiring authentication for each health plan member could substantially limit use cases involving population-level analytics, such as those used for risk adjustment or public health efforts.  The interoperability rules support a distributed data approach, which limits the amount of raw data that is viewed or copied across systems.  Under this approach, raw data is not amended and remains under the local control of the data owner.  Queries examine only the minimum data needed, hold it in memory just long enough to evaluate it, and retain only the “answer,” which can then be shared more broadly with a lower permission burden.
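
A minimal sketch of this distributed pattern follows, assuming each data holder exposes only a local aggregation function: raw records never leave the owner’s system, and only the computed “answers” are combined across sites.

```python
# Sketch of a distributed, minimum-necessary query: each data holder runs
# the computation locally and returns only aggregate counts; raw records
# are never copied out of the owner's system. The record format and
# diagnosis codes below are illustrative assumptions.

from typing import Callable


def local_prevalence(records: list[dict], condition_code: str) -> tuple[int, int]:
    """Runs inside the data owner's environment; returns only counts."""
    matches = sum(1 for r in records if condition_code in r.get("diagnoses", []))
    return matches, len(records)


def federated_prevalence(
    sites: list[Callable[[str], tuple[int, int]]], condition_code: str
) -> float:
    """Combine per-site aggregates; no raw records cross system boundaries."""
    total_matches = total_records = 0
    for query_site in sites:
        matches, count = query_site(condition_code)  # only the "answer" is returned
        total_matches += matches
        total_records += count
    return total_matches / total_records if total_records else 0.0


# Example: two hypothetical plan databases, each queried in place.
site_a = lambda code: local_prevalence([{"diagnoses": ["E11.9"]}, {"diagnoses": []}], code)
site_b = lambda code: local_prevalence([{"diagnoses": ["E11.9", "I10"]}], code)
print(federated_prevalence([site_a, site_b], "E11.9"))  # ~0.67
```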

Conclusion

The list above is not meant to be exhaustive of all the challenges ahead.  As implementation is still in its nascent stages, CMS and ONC will be tasked with continuously monitoring the process to help ensure that the promise of interoperability is met.  It is likely that adjustments to standards and requirements will be made as lessons are learned and new challenges appear.  Stay tuned for future posts on the interoperability rules as more experience is gained in implementation.