Brendan's Blog

9. Ethical Considerations for AI in Health Insurance Distribution

Written by Brendan McLoughlin | Jun 12, 2024 10:07:00 PM

Brendan McLoughlin, President of e123, is participating in an executive education course at the Massachusetts Institute of Technology on Artificial Intelligence (AI) and its implications for business strategy. This is the ninth in a series of blog posts where he shares the insights he is gaining and how they apply to health insurance distribution.

Much has been written about the potential of artificial intelligence (AI) in health insurance, and there is certainly great promise for AI to significantly improve sales, marketing and distribution across the health insurance value chain. But with increased power comes increased responsibility, and nowhere is this more true than when weighing the ethical concerns of using AI in the context of health insurance.

AI is truly a double-edged tool - powerful in its ability to use data to improve business outcomes, but also fraught with ethical and regulatory implications. Business leaders must understand the ways AI can be used, and misused, in health insurance applications.

Privacy, Security and Compliance

No three words strike more fear into health insurance executives than these: privacy, security and compliance. Health insurance is held to a higher standard than other industries, and these concerns should be paramount when considering any AI implementation. Complicating matters, even the institutions charged with regulating health insurance concede the guidance is unclear. A recent publication on AI compliance by the National Institutes of Health flatly stated “It is true that HIPAA does not provide clear guidelines for compliance,” but goes on to suggest that “AI developers and vendors should treat health data in a way that would be most compliant with not just the letter of HIPAA but with its spirit and purpose.” Despite this lack of concrete guidance, insurance carriers must still hold themselves accountable to the highest standards.

Remembering that AI algorithms “learn” appropriate responses from training data, compliance must be a top-of-mind concern starting with the data going into an AI project, not just the data coming out. Setting and enforcing policies regarding privacy, security and compliance is ultimately up to each individual insurance carrier, but at the very least, working only with technology vendors who are “SOC2+HIPAA” compliant will increase the odds of a well-architected, compliant AI implementation.

Bias and Discrimination

It is well documented that AI implementations in healthcare have been beset with issues of bias, where algorithms produce systematically different results for patient groups based on gender, ethnicity or economic status. In these instances, the term “bias” is especially apt, since the cause can once again be traced back to the training data. If, for example, a diagnostic machine learning (ML) algorithm is trained on data heavily sourced from free clinics, it is not difficult to imagine that the algorithm will “learn” that socioeconomic factors have a disproportionate impact on diagnosis.
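To make the example concrete, here is a minimal sketch, in Python, of a pre-training check on data composition. The dataframe and the "care_setting" column are hypothetical placeholders, not references to any particular system:

```python
# A minimal sketch of inspecting where training records come from before
# any model is trained. "care_setting" is a hypothetical column name for
# whatever field identifies the data source.
import pandas as pd

def source_composition(training_df: pd.DataFrame,
                       source_col: str = "care_setting") -> pd.Series:
    """Return the share of training records contributed by each source.

    A heavily skewed mix (e.g., mostly free-clinic records) is an early
    warning that a model may learn socioeconomic shortcuts.
    """
    return training_df[source_col].value_counts(normalize=True)

# Hypothetical usage, assuming a dataframe of historical records:
# print(source_composition(records_df))
```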

The challenge with bias and discrimination is that there is no way to flag it within the AI algorithm itself. As far as the algorithm is concerned, it has learned meaningful cause-and-effect relationships from the training data, and it has no ability to sense that the results it produces are skewed.

To mitigate this bias, an important first step is to thoroughly analyze all training data, looking in advance for issues that could lead to discriminatory results. And while including diverse datasets and perspectives in AI development can help lessen bias, the only reliable safeguard is regular, thorough, ongoing audits of the algorithm's results. This is a continuous commitment that must be baked into the ROI of any AI project. Especially in the context of health insurance, AI will never be “set it and forget it”.
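As an illustration of what such an audit might look like in practice, here is a minimal sketch that compares a model's favorable-outcome rates across demographic groups. The column names and the five-percentage-point threshold are illustrative assumptions, not an industry standard:

```python
# A minimal sketch of an ongoing fairness audit: compare the rate of a
# favorable outcome (e.g., an approval) across groups and flag large gaps.
import pandas as pd

def audit_outcome_rates(results: pd.DataFrame, group_col: str,
                        outcome_col: str, max_gap: float = 0.05) -> pd.Series:
    """Warn if the favorable-outcome rate differs across groups by more
    than max_gap (5 percentage points here, an illustrative threshold)."""
    rates = results.groupby(group_col)[outcome_col].mean()
    gap = rates.max() - rates.min()
    if gap > max_gap:
        print(f"WARNING: {gap:.1%} gap in '{outcome_col}' across "
              f"'{group_col}' groups - review training data and model.")
    return rates

# Hypothetical usage, assuming a log of model decisions:
# audit_outcome_rates(decisions_df, "income_bracket", "approved")
```

Because the algorithm cannot flag its own skew, checks like this have to run outside the model, on a regular schedule, against real production outcomes.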

What Is the Goal, Anyway?

A major source of ethical concern with AI implementations arises long before any data is gathered or code is written. As we have previously discussed, every new technology initiative should start with a strategic goal - what are we trying to accomplish as a business? Setting clear goals is a key step toward ensuring that technology investments have a positive impact on company value, and any discussion of AI ethics should start with these goals.

We advise health insurance distribution organizations to prioritize an overall goal of optimizing the ratio of customer lifetime value to customer acquisition cost (LTV/CAC). If acquisition costs fall while customer retention rises, the impact on the business, its customers and its stakeholders should all be positive. Contrast this with a goal such as maximizing quarterly total shareholder return (TSR), which, while important, could drive results and unintended consequences that hurt customers, employees and ultimately the business in the long run.
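For illustration only, here is a minimal sketch of the LTV/CAC arithmetic, using made-up per-member figures:

```python
# A minimal sketch of the LTV/CAC calculation. All figures below are
# hypothetical, chosen only to show the shape of the metric.

def ltv(avg_monthly_premium: float, gross_margin: float,
        avg_retention_months: float) -> float:
    """Customer lifetime value: margin earned over expected retention."""
    return avg_monthly_premium * gross_margin * avg_retention_months

def ltv_cac_ratio(lifetime_value: float, acquisition_cost: float) -> float:
    """The ratio to optimize: higher means each acquired customer returns
    more relative to what it cost to acquire them."""
    return lifetime_value / acquisition_cost

# Hypothetical example: $400/month premium, 20% margin, 30-month retention,
# $600 to acquire the member.
value = ltv(400.0, 0.20, 30.0)                       # $2,400
print(f"LTV = ${value:,.0f}; LTV/CAC = {ltv_cac_ratio(value, 600.0):.1f}x")
```

Note what this goal rewards: raising retention or lowering acquisition cost improves the ratio, while tactics that churn customers for short-term revenue do not.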

Accountability and transparency are key to ensuring proper goal setting from the very outset of any AI project. Both business and technology leadership must be aligned and held accountable for AI projects, their stated objectives and the results they generate. Projects should be open, visible processes where key players from Sales, Operations, HR, Compliance and Technology have the opportunity to review training data and algorithmic outputs. AI projects managed in silos are far more likely to produce risky outcomes, and coordination and commitment from both business and technology executives are essential to creating long-term enterprise value.

Conclusion

The integration of AI in health insurance distribution offers significant potential for improving sales, marketing, and operational efficiency. However, this promise comes with considerable ethical responsibilities. Ensuring privacy, security, and compliance is paramount, especially given the stringent standards in health insurance. Issues of bias and discrimination in AI are well-documented, stemming from skewed training data, and require continuous audits and diverse datasets to mitigate. Strategic goal-setting is crucial; health insurance firms should focus on optimizing customer lifetime value over short-term financial metrics to ensure long-term benefits for all stakeholders. Accountability from both business and technology leaders is essential to align AI projects with these ethical goals, fostering responsible innovation in the industry.

Want to learn more about the future of AI in insurance distribution? Get in touch here. For prior posts in this series, click here.