AI in the Federal Government: RaLytics™ Technology Platform and Policy-Driven AI Tools are Designed to Help Pave the Way Forward

Oct 15, 2020 | Artificial Intelligence

Overview/Background

In this blog, we discuss several implications of the use of artificial intelligence (AI) by the federal government, focusing on the health sector. As in the private sector, the federal government is increasingly turning to AI to innovate existing technology and processes, as well as to make possible what was not before. Moreover, there is a belief that AI can help achieve improved, better-informed decision-making at lower cost and with reduced resource use.

While the use of AI is growing, the field is still nascent, and many important questions need to be addressed. For example, the industry is still working out the scope of AI’s applicability. There are also ethical concerns that the government, in particular, will need to confront, including how to ensure transparency and accountability. And many questions that we do not yet even know to ask are likely to arise as the use of AI spreads.

The fact that AI is already being used by government, as well as private industry, increases the urgency to begin answering those questions. This blog aims to do three things:

  • Provide a high-level overview of various areas in which AI is being used by the federal government, focusing on applications relevant to administrative operations by the Centers for Medicare & Medicaid Services (CMS).
  • Describe some of the specific challenges that have already been identified and will need to be addressed with regard to ensuring efficient and ethically sound use of AI.
  • Describe how the RaLytics™ technology platform is designed to address many of these challenges, while leveraging AI to innovate how health care providers and payers conduct risk adjustment.

AI at the Centers for Medicare & Medicaid Services

The current portfolio of AI used by the federal government is diverse. A recent assessment identified nearly half of the federal agencies studied as having experimented with AI. Examples at CMS include detecting errors and fraud in medical claims and enrollment records. CMS has already engaged contractors to help build and implement a machine-learning-based risk assessment tool that analyzes historical and incoming claims to furnish provider-level leads to the agency’s fraud investigators. RaLytics, in its development of advanced AI technology, is at the forefront of educating federal officials and healthcare stakeholders about the benefits of AI.

CMS is also actively exploring how AI can improve the identification of high-risk patients and prevent adverse medical events, such as unplanned hospital or skilled nursing facility admissions, as well as adverse drug events. Other federal government agencies have been employing AI to develop vaccines, triage patients, and determine diagnoses from radiological imaging. In the not-so-distant future, possibilities for AI in health care include remote patient monitoring, mobile health, and evaluation of the efficacy and value of treatments and services.

With federal budget deficits growing, agencies like CMS are under tremendous pressure to rein in spending, especially considering the growing number of people qualifying for Medicare, Medicaid, and other government-provided health programs. One opportunity to reduce costs is with regard to systemic fraud, waste and abuse (FWA), currently estimated at 10% of CMS’s spending. AI holds the promise to bridge the gap via streamlined processes and reduced personnel needs, all while still improving the quality of care for patients. However, as noted in the recent and seminal report submitted to the Administrative Conference of the United States (ACUS)—Government by Algorithm: Artificial Intelligence in Federal Administrative Agencies—while AI use is pervasive throughout federal administrative agencies, there is still “a long way to go” in unlocking its promise. The ACUS report draws attention to the fact that even the AI that has been explored to date by federal agencies is generally lacking in technical sophistication. This does offer the opportunity for the government to embrace AI in a way that can immediately deliver on AI’s potential to reduce the cost of core governance functions while improving decision-making and outcomes for the people it serves.

Questions that Will Need to be Addressed as AI Moves into the Future

In the upcoming years, we can expect AI to become more integrated across the health care industry, from drug and device development to long-term patient monitoring. For example, machine learning and other AI technology can help the industry reduce development timelines while improving risk analyses, bringing effective treatments to market quickly and safely. AI also has the potential to help government agencies make better regulatory decisions, streamline services, and better use taxpayer money.

However, supporting these emerging technologies will require changes in regulation, research and development protocols, health care provider behaviors, and government agency processes. Agencies that use AI to realize these gains will also confront important questions about the proper design of algorithms and user interfaces, the respective scope of human and machine decision-making, the boundaries between public actions and private contracting, and whether the use of AI should be permitted. These are and will continue to be important issues for public debate and academic inquiry.

As AI technology continues to be developed and operationalized, there are many concerns, both perceived and legitimate, regarding its use in agency administration. Four key issues in which the federal government will play a pivotal role are:

  • Transparency / Accountability. As noted in the report to ACUS, AI algorithms in use today are often inscrutable; even a system’s engineer may not understand how it arrived at a particular output or be able to isolate the data driving the result. On the other hand, AI applications also have the potential to render decision-making more tractable than the dispersed human judgments currently used to make decisions. Transparency and accountability are crucial for government operations.
  • Impact on Humans. As AI utilization grows, there is popular concern that many human jobs will be substantially modified or eliminated. These concerns are largely unfounded, with most experts agreeing that AI will change the nature of human work rather than reduce or eliminate it. With mundane and tedious tasks largely automated, the work performed by humans would be more sophisticated and meaningful, and therefore more fulfilling. However, the concern over human job losses is pervasive enough that the government will need to do all it can to address it.
  • Distributive/Gaming Concerns. Given the complexity of AI and the uncharted territory, firms with more capital and resources may be better positioned to take advantage of its potential. On the other hand, smaller organizations may have the most to gain. The federal government will have to consider how the benefits of AI can be made widely accessible. This includes ensuring that certain users will not be able to manipulate the technology for unintended and inequitable gains. Once again, the aforementioned focus on transparency and accountability will play a role in ensuring a fair and level playing field for all.
  • Regulatory Permissibility. Lawmakers have yet to pass comprehensive legislation regulating AI’s impact on a variety of industries. Such uncertainty creates risk for organizations that invest heavily in AI technology. Existing requirements already hold back some organizations in the health care industry. For example, health care companies have been reluctant to use cloud environments for AI-related programs for fear of not meeting the Health Insurance Portability and Accountability Act (HIPAA) standards for protecting a patient’s sensitive health data. In this case, the development of new technology has helped overcome this barrier, as trusted clouds that are HIPAA compliant are becoming available. A prior blog also discusses how updated interoperability rules can help facilitate the sharing of data through clouds.

In the next section, we describe how a specific AI framework and technology being developed for Payers, Compliance Officers, and Providers can help pave the way to solutions for these issues.

RaLytics™ Solutions are Designed to be a Leading Example of AI in the Future

The RaLytics™ platform offers advanced capabilities that can dramatically reduce the administrative burden of the traditional contract-level RADV audits used by CMS as part of its efforts to ensure accurate payments to Medicare Advantage Organizations (MAOs). The platform avoids the resource-intensive process of manual reviews, which significantly limits the amount of data that can be audited.

Powering the RaLytics™ platform is a robust software engine that utilizes AI—specifically deep learning and natural language processing—to conduct real-time evaluation of medical record documentation. Through automation, the ingestion engine is able to retrieve medical records directly from electronic health records (EHRs). The tool identifies potentially relevant medical records by scoring input records to determine their relevancy in supporting the indicated health status of each beneficiary. The use of such an AI tool for risk adjustment has the potential to better validate the diagnostic relevance of medical records, to lower costs and minimize necessary resources for Payers to conduct audits (freeing up resources to focus on patient health), and to increase data security by limiting the exchange of personal health information.
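To make the scoring idea concrete, the sketch below shows a minimal relevancy scorer. Everything here is a hypothetical stand-in: the function names are invented for illustration, and a simple keyword-overlap heuristic is used in place of the platform’s actual deep-learning and natural-language-processing models.

```python
import re
from dataclasses import dataclass, field

@dataclass
class RelevancyResult:
    """Output of scoring one medical record against a set of diagnosis terms."""
    score: float                          # 0.0 to 1.0; higher = more supporting evidence
    matched_terms: list = field(default_factory=list)  # metadata: terms found in the record

def score_record(record_text: str, diagnosis_terms: list) -> RelevancyResult:
    """Score how well a record's text supports the indicated diagnoses.

    Illustrative heuristic only: the fraction of diagnosis terms that
    appear as words in the record text. A real system would use trained
    NLP models rather than exact keyword matches.
    """
    tokens = set(re.findall(r"[a-z]+", record_text.lower()))
    matched = [t for t in diagnosis_terms if t.lower() in tokens]
    score = len(matched) / len(diagnosis_terms) if diagnosis_terms else 0.0
    return RelevancyResult(score=score, matched_terms=matched)

record = "Patient presents with hypertension and type 2 diabetes; A1c elevated."
result = score_record(record, ["hypertension", "diabetes", "neuropathy"])
# score = 2/3; matched_terms = ["hypertension", "diabetes"]
```

Returning the matched terms alongside the score illustrates the transparency point discussed below: a reviewer can see exactly which evidence drove a given relevancy determination.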

The RaLytics™ solution has been designed to address the four AI concerns laid out above, while strategically equipping Payers with a pathway towards careful and controlled expansion of AI in the future.

  • Transparency / Accountability. The RaLytics™ platform is not an unsupervised or black box decision tool; rather, it uses data inputs and scoring models that mimic (and automate) the data gathering and coding conducted by human coders today – such that questions regarding what data is being considered, how it is evaluated, and what constitutes a determination of “support” would be no different than what is done today.

Furthermore, the tool systemizes and documents these processes, resulting in a simple zero-to-one score, where higher scores are associated with more available supporting data. In addition to the primary output indicating whether risk profiles can be supported by medical records, the tool provides metadata and statistics on what it finds and uses to calculate scores. This straightforward and actionable transparency ensures that the “machine” is not making any unsupervised decisions (minimizing challenges of accountability). Rather, it is generating data for CMS and MA plans to facilitate their successful participation in payment oversight activities.
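The idea of a primary determination accompanied by the statistics that drove it might look, very loosely, like the following. This is an illustrative sketch only; the function name, threshold, and report fields are assumptions, not the platform’s actual output format.

```python
import statistics

def support_summary(scores: list, threshold: float = 0.5) -> dict:
    """Summarize per-record relevancy scores into an auditable report.

    Hypothetical illustration: the determination ("supported" or not)
    is emitted together with the statistics behind it, so a human
    reviewer can trace the output back to the underlying data.
    """
    if not scores:
        return {"supported": False, "records": 0}
    return {
        "supported": max(scores) >= threshold,        # primary output
        "records": len(scores),                        # metadata for reviewers
        "max_score": max(scores),
        "mean_score": round(statistics.mean(scores), 3),
        "above_threshold": sum(s >= threshold for s in scores),
    }

# e.g. three records scored for one beneficiary's risk profile
report = support_summary([0.2, 0.8, 0.55])
```

Because every field in the report is derived directly from the per-record scores, nothing in the determination is hidden from the humans who review it.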

  • Impact on Humans. RaLytics™ solutions are meant to expand, not replace, human capacity. Having humans in the loop—before, during, and after AI projects become operational—ensures better integration into work processes and creates accountability for the performance of AI solutions. When the platform is utilized, no decisions are made without humans, and humans play a large role in the development and maintenance of the approach.

The RaLytics™ platform uses human-reviewed medical records for model training and will continue to require human reviewers for up-to-date data and ongoing (re)training and evaluation. However, because this reduces the need for manual reviews, current staff can review significantly more cases in a fraction of the time. This is particularly important as CMS seeks to begin auditing all MAO plans annually.

  • Distributive/Gaming Concerns. By design, incorporation of the RaLytics™ platform into the current RADV process will address the uneven playing field related to the resources available to different plans for responding to RADV audits. In this way, the tool makes it more feasible for smaller MA plans with fewer resources to conduct comprehensive risk adjustment reviews. That is, the system would reduce their burden and better equip smaller MAOs to participate in oversight activity (a struggle not shared by large plans) so that they, in turn, can focus on caring for patients, especially the older, higher-risk Medicare patients.

Moreover, the platform enables the review of more comprehensive data through automated retrieval of raw data from third-party EHR systems. This reduces reliance on subjective data summaries that may be more susceptible to gaming. In contrast, the traditional RADV process is likely to be more susceptible to fabrication of data, as the process allows for a range of artifacts to be manually (and subjectively) compiled for review. Gaming would also be unlikely during AI-enabled audit processes because users do not enter any data into the system (other than a target beneficiary list) or influence the processing in any way.

  • Regulatory Permissibility. The RaLytics™ solution is specifically designed to work within the current regulatory framework; no regulatory changes are required. Moreover, use of the platform is not mandatory, and the platform is secure and meets Section 508 accessibility requirements.

All things considered, the potential benefits of expanding the role of AI in the federal government far outweigh the risks and concerns. As such, the RaLytics™ solution can serve as an ideal entry point into AI for Payers, oversight agencies, and Providers. It is designed to grow in sophistication and application over time as the algorithms become more refined and more plans participate. Thus, this powerful tool can bring immediate benefits that can grow over time.