Researchers at OSF HealthCare want to make sure that patients have “important conversations” about their plans for the end of their lives.
Only 22% of Americans write down their end-of-life plans, according to research. A team at OSF HealthCare in Illinois is using artificial intelligence to help doctors identify which patients are at elevated risk of dying after being admitted to the hospital.
A news release from OSF says the team developed an AI model that can predict a patient's risk of death within five to 90 days of hospital admission.
The goal is for doctors to be able to have important end-of-life conversations with these patients.
In an interview with Fox News Digital, lead study author Dr. Jonathan Handler, an OSF HealthCare senior fellow of innovation, said, "It's a goal of our organization that every single patient we serve would have their discussions about advance care planning written down so that we could give them the care they want, especially at a sensitive time like the end of their life when they may not be able to talk to us because of their medical condition."
If a patient is unconscious or on a ventilator, for example, it may be too late for them to tell their doctors what they want.
Handler said that, ideally, the mortality prediction would keep patients from dying before receiving the full benefit of the hospice care they could have had if their goals had been documented sooner.
Since the average hospital stay is four days, the researchers set the model's window to begin at five days and end at 90 days to create a "sense of urgency," as one researcher put it.
The AI model was tested on a dataset of more than 75,000 patients across different races, ethnicities, genders, and socioeconomic backgrounds.
The study, recently published in the Journal of Medical Systems, found that the overall mortality rate across all patients was one in 12.
But among patients the AI model flagged as being at elevated risk of death, the mortality rate rose to one in four, three times the overall average.
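As a quick sanity check on the arithmetic behind those figures, the two reported rates do in fact differ by a factor of three:

```python
from fractions import Fraction

# The study's reported rates: overall mortality of 1 in 12,
# rising to 1 in 4 among patients the model flagged as higher risk.
overall = Fraction(1, 12)
flagged = Fraction(1, 4)

relative_risk = flagged / overall
print(relative_risk)  # 3, i.e. the flagged group's rate is three times the overall rate
```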
The model was tested both before and during the COVID-19 pandemic, and the results were nearly identical, according to the study team.
Handler said the mortality predictor was trained on 13 types of patient information.
“That included clinical trends, like how well a patient’s organs are working, as well as how often and how intensely they’ve had to go to the health care system and other information, like their age,” he said.
Handler said the model gives the clinician a probability, or "confidence level," along with an explanation of why the patient has a higher-than-normal risk of dying.
“At the end of the day, the AI takes a lot of information that would take a clinician a long time to gather, analyze, and summarize on their own, and then presents that information along with the prediction to allow the clinician to make a decision,” he said.
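OSF has not published its model, and the article names only broad input categories (clinical trends, health care utilization, age). Purely as an illustration of the kind of output Handler describes, a probability paired with a human-readable list of contributing factors, here is a minimal logistic-regression-style sketch with invented feature names and weights:

```python
import math

# Hypothetical feature weights for a logistic-regression-style risk score.
# These names and values are invented for illustration only; the article says
# the OSF model uses 13 types of data but does not disclose its internals.
WEIGHTS = {
    "age_over_80": 1.2,
    "organ_function_declining": 1.5,
    "recent_icu_admissions": 0.9,
}
BIAS = -3.0

def predict_mortality_risk(patient: dict) -> tuple[float, list[str]]:
    """Return a 5-to-90-day mortality probability and the factors driving it."""
    score = BIAS
    factors = []
    for feature, weight in WEIGHTS.items():
        if patient.get(feature):
            score += weight
            factors.append(feature)
    probability = 1 / (1 + math.exp(-score))  # logistic link maps score to 0..1
    return probability, factors

risk, why = predict_mortality_risk(
    {"age_over_80": True, "organ_function_declining": True}
)
print(f"risk={risk:.2f}, driven by {why}")
```

The point of the sketch is the output shape, a probability plus the factors behind it, which is what lets a clinician weigh the prediction rather than accept it blindly.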
Handler said a similar AI model developed at NYU Langone gave the OSF researchers a starting point.
"They had made a mortality predictor for the first 60 days, which we tried to replicate," he said.
“We think our population is very different from theirs, so we used a different kind of predictor to get the results we wanted, and we were successful.”
“Then, the AI uses this information to figure out how likely it is that the patient will die in the next five to ninety days.”
The prediction "isn't perfect," Handler said; a higher predicted risk of death doesn't mean it will happen.
“But at the end of the day, the goal is to get the clinician to talk, even if the predictor is wrong,” he said.
“In the end, we want to do what the patient wants and give them the care they need at the end of life,” Handler said.
OSF is already using the AI tool because, as Handler said, the health care system “tried to integrate it as smoothly as possible into the clinicians’ workflow in a way that helps them.”
Handler said, “We are now in the process of optimizing the tool to make sure it has the most impact and helps patients and clinicians have a deep, meaningful, and thoughtful conversation.”
AI expert points out potential limitations
Dr. Harvey Castro, a board-certified emergency medicine doctor in Dallas, Texas, and a national speaker on AI in health care, said that while OSF's model may have benefits, it also carries risks and limitations.
One is the possibility of false positives. "If the AI model wrongly predicts that a patient is at a high risk of dying when they are not, it could cause the patient and their family needless stress," Castro said.
Castro also raised the risk of false negatives.
“If the AI model doesn’t find a patient who is at high risk of dying, important conversations about end-of-life care might be put off or never happen,” he said. “If this happens, the patient might not get the care they would have wanted in their last days.”
Castro said other potential risks include over-reliance on AI, data privacy concerns, and the possibility of bias if the model is trained on a limited dataset, which could lead to unequal care recommendations across patient groups.
The expert said such models should be used alongside human interaction.
“End-of-life conversations are difficult and can have big effects on a patient’s mind,” he said. “People who work in health care should use AI predictions along with a human touch.”
The expert added that such models require continuous monitoring and feedback to ensure they remain accurate and useful in the real world.
“It is very important to study AI’s role in health care from an ethical point of view, especially when making predictions about life and death.”