How Prediction Modeling from Artificial Intelligence / Machine Learning Changes Medical Reasoning
The challenge we face with the tsunami of artificial intelligence is to stay focused on real-world problems and not get distracted by the problems AI itself creates.
The Big Tech companies presumably have large numbers of people focused on the tech, but many care providers and payers struggle to figure out where the value lies when the sales pitches are all about the tech.
So let’s talk about the real-world problems, better medical reasoning, and how machine learning and prediction will help.
Some starting points:
- This is a non-trivial problem, as the source of medical errors by doctors and other clinical decision makers often lies in psychological heuristics (anchoring and availability bias, for example) that hamper decision making.
- Many health care providers prioritise speed over effectiveness and efficacy, and struggle with unplanned emergency readmissions and prematurely discharged patients.
- The essence of diagnosis and treatment is selecting options that anticipate, as closely as possible, where the patient’s trajectory is going. But doctors are not oracles, and apart from their own clinical judgment they lack clinically useful prediction tools.
How do AI predictions change medical reasoning?
Artificial intelligence, machine learning, and prediction machines offer ways to identify often complex patterns in a patient’s vital signs and associated environmental data that could be early evidence of a need to act (what are called red flags or signal events), and to do this more quickly and with a higher probability of being right than medical reasoning without AI (that is, un-augmented reasoning).
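To make that concrete, here is a minimal sketch of the red-flag idea, assuming hourly heart-rate readings; the thresholds, window, and data are all hypothetical, and a real system would learn such patterns from outcomes rather than hard-code them:

```python
# A minimal sketch of the red-flag idea, not a clinical tool: flag an
# event when a vital sign both crosses a threshold and is trending the
# wrong way. All thresholds and readings here are hypothetical.
import numpy as np

def red_flags(heart_rate, window=3, limit=100, slope_limit=2.0):
    """Return indices where heart rate exceeds `limit` while the recent
    trend (mean change over `window` readings) is rising."""
    hr = np.asarray(heart_rate, dtype=float)
    flags = []
    for t in range(window, len(hr)):
        trend = (hr[t] - hr[t - window]) / window  # avg change per reading
        if hr[t] > limit and trend > slope_limit:
            flags.append(t)
    return flags

# Hourly readings drifting upward: the last two readings trip the flag.
readings = [82, 84, 83, 88, 93, 99, 104, 110]
print(red_flags(readings))  # -> [6, 7]
```

A learned model would replace the hand-set `limit` and `slope_limit` with patterns fitted to real outcomes, which is exactly where machine learning earns its keep.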
As a generality, doctors are ‘right’ 75–85% of the time, while AI can take this probability close to 100%, let’s say 99%, which is a substantial improvement in predictive accuracy, roughly a quarter to a third in relative terms depending on the starting point. A doctor with access to that source of additional information (called augmented reasoning) can not only treat a patient better but also, for example, avoid medical errors, as the difference between 75–85% and 100% is where a lot of medical errors lie, and I don’t just mean prescribing errors, but systemic and cognitive errors (see my paper on medical errors and AI here: https://ideas.repec.org/a/for/ijafaa/y2020i58p27-35-.html).
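The back-of-envelope arithmetic behind that claim is worth spelling out, because the framing matters: starting from an illustrative 80% baseline, a move to 99% accuracy looks like a modest ~24% relative gain, but seen as error reduction it removes 19 of every 20 errors (figures illustrative, not measured):

```python
# Back-of-envelope arithmetic for the accuracy claim above, using
# illustrative figures rather than measured results.
doctor_acc, ai_acc = 0.80, 0.99

relative_acc_gain = (ai_acc - doctor_acc) / doctor_acc                   # 0.2375
error_reduction = ((1 - doctor_acc) - (1 - ai_acc)) / (1 - doctor_acc)   # 0.95

print(f"Relative accuracy gain: {relative_acc_gain:.0%}")  # 24%
print(f"Error-rate reduction:   {error_reduction:.0%}")    # 95%
```

Framed as error reduction rather than accuracy gain, the case for augmented reasoning is much starker: errors fall from roughly 20 in 100 cases to 1 in 100.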
What an AI prediction does is augment medical reasoning; it does not replace the doctor. It can also serve to involve a wider pool of clinical talent, as the AI can equally augment the decision making of physiotherapists, pharmacists, and others. These benefits can be generalised to include organisational decisions, priority setting, and resource allocation, for the managers in the audience.
In addition, these probability-based predictions can reveal hidden patterns in the clinical data and alter the ranking of decision points in the clinical reasoning decision tree. In particular, they can elevate highly predictive features above the ones doctors usually use, as well as identify novel factors that the doctor may view as being of lower utility.
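As a hedged illustration of that re-ranking, the sketch below fits a random forest to synthetic data in which an unglamorous feature (the name `nocturnal_hr_variability` is invented for the example) secretly drives the outcome, and the model’s feature importances surface it above the conventional vitals:

```python
# A sketch of model-driven re-ranking of decision points. The data and
# feature names are invented; the point is only that a model can rank a
# 'novel' feature above the ones a clinician conventionally checks first.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 2000

# Hypothetical features: the third one secretly drives the outcome.
temp = rng.normal(37.0, 0.5, n)
sbp = rng.normal(120, 15, n)
noct_v = rng.normal(0, 1, n)
y = (noct_v + 0.2 * rng.normal(size=n) > 1.0).astype(int)

X = np.column_stack([temp, sbp, noct_v])
names = ["temperature", "systolic_bp", "nocturnal_hr_variability"]

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
for name, imp in sorted(zip(names, model.feature_importances_),
                        key=lambda p: -p[1]):
    print(f"{name:26s} {imp:.2f}")
# The 'novel' feature ranks first, above the conventional vitals.
```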
The AI prediction can signal actions for attention, whether the need for a home visit or the trending of vital signs toward a likely future exacerbation of the patient’s condition (yes, there are precursor events that usefully signal these changes in your health, or warning signs if you prefer).
What would it be worth to know today that a patient has a high probability of a critical event tomorrow? I suspect a lot, as much emergency care is really catch-up, which is why ambulances drive fast with flashing lights: they are trying to beat the passage of time toward that critical event.
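One hedged way to put a number on that worth is a toy expected-cost comparison between acting today and waiting for the emergency; every figure below is hypothetical, chosen only to show the shape of the calculation:

```python
# A toy expected-cost comparison: act on today's prediction versus wait
# for tomorrow's emergency. All numbers are hypothetical, not real cost
# estimates.
p_event = 0.30         # predicted probability of a critical event tomorrow
cost_visit = 150       # preventive home visit
visit_success = 0.80   # chance the visit averts the event
cost_emergency = 5000  # ambulance plus emergency admission

wait = p_event * cost_emergency
act = cost_visit + p_event * (1 - visit_success) * cost_emergency

print(f"Expected cost if we wait: {wait:.0f}")  # 1500
print(f"Expected cost if we act:  {act:.0f}")   # 450
```

Even with a success rate well short of certainty, acting early dominates waiting whenever the prevented event is expensive enough.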
Acting on such predictions has the additional advantage of moving clinical decision making from the coalface of urgent or emergency care back toward calmer waters, where decisions can be more certain and less frenetic.
So whither AI in medicine?
No, Virginia, there is no Santa Claus, and no, AI will not make doctors obsolete (despite Maxmen’s prediction in his book, The Post-Physician Era).
Prediction is at the heart of medical reasoning, usually framed as clinical judgment. These predictions draw on mental frameworks and patterns of experience (i.e. judgment) in the doctor’s brain. AI will add a ‘digital second opinion’ or ‘digital clinical colleague’ to the current list of health care professions.
This will elevate the clinician’s reasoning by reducing the risk of medical errors, which are the visible evidence of failed predictions.
By the way, AI should feature in labour force projections of the future workforce in health care, but I doubt the people building these projections know how to do this.