Trust The AI? You Decide




Earlier this year, I wrote about fatal flaws in algorithms developed to mitigate the COVID-19 pandemic. Researchers found two general types of flaws. First, model makers used small data sets that didn’t represent the universe of patients the models were intended to serve, leading to sample-selection bias. Second, modelers failed to disclose data sources, data-modeling techniques, and the potential for bias in either the input data or the algorithms used to train their models, leading to design-related bias. As a result of these fatal flaws, such algorithms were inarguably less effective than their developers had promised.

Now comes a flurry of articles on an algorithm developed by Epic to provide an early warning tool for sepsis. According to the CDC, “sepsis is the body’s extreme response to an infection. It is a life-threatening medical emergency and happens when an infection you already have triggers a chain reaction throughout your body. Without timely treatment, sepsis can rapidly lead to tissue damage, organ failure, and death. Nearly 270,000 Americans die as a result of sepsis.”

Essentially, Epic’s algorithm is designed to predict sepsis early, while it is still treatable, thereby saving lives. But although sepsis is obvious after the fact, it is difficult to diagnose as it happens, which complicates the algorithm’s task.

The articles I read represented a broad range of perspectives. Some focused on what the model did right: Critical Care Medicine published a study showing that Epic’s algorithm helped patients receive timely antibiotic treatment, in turn leading to shorter hospital stays. The algorithm alerted caregivers, who then took the appropriate action.

On the other hand, a study in JAMA Internal Medicine suggests that the model does a poor job of predicting sepsis. The researchers found that it missed two-thirds of sepsis patients and frequently “cried wolf,” registering false positives. The study goes on to lament that, given such widespread adoption of a faulty algorithm, we should all be concerned about how well we’re managing sepsis at a national level. After all, one in three patients who die in a hospital do so because of sepsis.
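To make the two complaints concrete, the kind of metrics behind such a study are easy to compute from a confusion matrix: sensitivity captures how many real sepsis cases the alert caught, and positive predictive value captures how often an alert was a true alarm. A minimal sketch follows; the counts are hypothetical, chosen only to illustrate a "misses two-thirds, cries wolf often" pattern, and are not the study's actual numbers.

```python
def sensitivity(tp: int, fn: int) -> float:
    """Fraction of true sepsis cases that triggered an alert."""
    return tp / (tp + fn)

def positive_predictive_value(tp: int, fp: int) -> float:
    """Fraction of alerts that corresponded to real sepsis cases."""
    return tp / (tp + fp)

# Hypothetical confusion-matrix counts, for illustration only:
# 100 cases caught, 200 missed, 700 false alarms.
tp, fn, fp = 100, 200, 700

print(f"sensitivity: {sensitivity(tp, fn):.0%}")        # -> 33%
print(f"PPV: {positive_predictive_value(tp, fp):.1%}")  # -> 12.5%
```

With numbers like these, an alert that fires is wrong far more often than it is right, which is exactly the alarm-fatigue concern clinicians raise.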

There are many theories as to why Epic’s algorithm is so ineffective, from the use of incorrect key variables in the modeling exercise to the need to recalibrate such models for differing environments. I also came across several arguments about the model’s intended use. Is it a general-purpose predictor across all patients, or is it meant to be used when other means of diagnosis fail? Is the model standalone, or should it be used in conjunction with human intervention and other external signals? And, stepping back, should there ever be a role for proprietary models in healthcare, especially machine-learning-driven algorithms, which tend to be less transparent and are therefore subject to less external scrutiny?


In my mind, this narrative raises key issues about trust in AI. If you were a clinician, would you trust this AI?

Clearly, sepsis deserves focused attention, and that is what Epic gave it. But in doing so, the company raised several thorny questions. Should the model be recalibrated for each discrete implementation? Are its workings transparent? Should such algorithms publish a confidence level along with each prediction? Are humans sufficiently in the loop to ensure that the algorithm’s outputs are being interpreted and implemented correctly? And if one or more of these conditions aren’t met, should one distrust the algorithm altogether? No algorithm is ever perfect. If it does better than today’s correct-diagnosis rate, is it worth using? How much better does it have to be?

Perhaps Epic could have done more to build trust. How? By ensuring that several independent studies confirmed the results, and by opening the model to greater scrutiny. Trust could be enhanced further still if an independent agency certified such models.

Source: https://www.forbes.com/sites/arunshastri/2021/11/02/trust-the-ai-you-decide/?ss=ai&sh=4708ea3c549e


 

