This is a great story today from Lauren Clason at Bloomberg on litigation over claim denials in which AI was used to process the claims. There is a lot to be said about this issue, and I wish I had more time to write on it today. However, I have a brief due on Friday and no time for a thousand-word post on the issues raised by the use of AI in this context (hmm – perhaps I shouldn’t have spent five hours yesterday writing an article on LinkedIn, and then I would have had time to write further on this one). I suspect, however, that this will be nowhere near my last opportunity to write about this subject.

For now, though, I wanted to flag one point that the inevitable use of AI to process claims raises. I can see no legitimate argument against the use of AI claims processing – subject to appropriate testing for accuracy, errors, gender or other biases, and the like – when it comes to purely rules-based decisions, in other words, when there is nothing more to the question than whether the submitted claim matches, or instead violates, the clear, written rules of the plan. For instance, where the entire inquiry is whether the plan covers medicine X only after three specified alternative treatments or generics have been tried and certified by the treating physician as ineffective, and the question is simply whether the claim submission satisfies those plain and unambiguous terms.
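A purely rules-based decision of this kind really is mechanical, which is why it lends itself to automation. As a minimal sketch – with the rule, the drug names, and the data shape all hypothetical, standing in for whatever a real plan document would actually say:

```python
from dataclasses import dataclass, field

# Hypothetical plan rule: medicine X is covered only after all three listed
# alternatives have been tried and certified ineffective by the treating physician.
REQUIRED_ALTERNATIVES = {"alt_a", "alt_b", "alt_c"}  # placeholder names

@dataclass
class ClaimSubmission:
    medicine: str
    # alternatives the treating physician has certified as tried and ineffective
    certified_ineffective: set = field(default_factory=set)

def is_covered(claim: ClaimSubmission) -> bool:
    """Purely mechanical check: rule matching, with no judgment call anywhere."""
    if claim.medicine != "medicine_x":
        return False  # this sketch addresses only the medicine X rule
    # Covered only if every required alternative has been certified ineffective.
    return REQUIRED_ALTERNATIVES <= claim.certified_ineffective

# A submission with all three certifications satisfies the plain terms;
# one missing a certification does not.
approved = is_covered(ClaimSubmission("medicine_x", {"alt_a", "alt_b", "alt_c"}))
denied = is_covered(ClaimSubmission("medicine_x", {"alt_a"}))
```

The point of the sketch is that every input is a yes-or-no fact and the outcome follows from the plan's written terms alone – the kind of decision where automation, properly tested, changes nothing about the analysis.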

But many, many types of claims, in all sorts of contexts, require interpretation of plan terms or even insurance policy terms, and even more require some sort of judgment call as to the relationship between a particular fact pattern and the plan or policy terms at issue. Discretion is not only the better part of valor; in many instances it is the better part of claims handling as well. An AI rules-based system that denies claims where judgment and discretion are needed to make the call raises a host of problems. These include whether the denials are accurate; whether a state’s claims handling statutes and regulations are being complied with in that circumstance (in Massachusetts, for instance, a judgment call made on a claim is defensible in court so long as it is reasonable, which is a characterization of human decision making and possibly not – without sufficient human oversight – of any AI-based claims system); and whether a claim denial under an ERISA plan is arbitrary and capricious (it is hard to see how a denial that requires a judgment call on the reading of plan language or the application of unclear facts could escape being arbitrary and capricious if made without human involvement, or at least human intervention after the AI process has rendered its decision).

I am not a Luddite in any manner, and there is certainly a role for AI in aspects of claims processing. But as a be-all, end-all solution? That is going to need more vetting than I suspect has yet occurred.