Researchers who had been using Fitbit data to help predict surgical outcomes now have a more accurate method to gauge how patients may recover from spine surgery.
Using machine-learning techniques developed at the AI for Health Institute at Washington University in St. Louis, Chenyang Lu, the Fullgraf Professor at the university’s McKelvey School of Engineering, collaborated with Jacob Greenberg, MD, an assistant professor of neurosurgery at the School of Medicine, to develop a way to predict recovery from lumbar spine surgery more accurately.
The results, published in the journal Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, show that their model outperforms previous models in predicting spine surgery outcomes. This is important because, in lower back surgery and many other orthopaedic operations, outcomes vary widely depending on the patient's structural disease and on physical and mental health characteristics that differ from patient to patient.
Surgical recovery is influenced by physical and mental health before the operation. Some people may have excessive worry in the face of pain, which can make pain and recovery worse. Others may suffer from physiological problems that worsen pain. If physicians can get a heads-up on a patient's particular pitfalls, they can better tailor treatment plans.
“By predicting the outcomes before the surgery, we can help establish some expectations, help with early interventions, and identify high-risk factors,” said Ziqi Xu, a PhD student in Lu’s lab and the first author.
Previous work predicting surgery outcomes typically used patient questionnaires given once or twice in clinics, capturing a static slice of time.
“It failed to capture the long-term dynamics of physical and psychological patterns of the patients,” Xu said. Prior work training machine-learning algorithms focused on just one aspect of surgery outcome “but ignored the inherent multidimensional nature of surgery recovery,” she added.
Getting a ‘big picture’ view
Researchers have used mobile health data from Fitbit devices to monitor and measure recovery and compare activity levels over time. But Greenberg said this research has shown that combining activity data with longitudinal assessment data predicts how the patient will do after surgery more accurately.
The current work offers a “proof of principle” showing that, with multimodal machine learning, doctors can see a more accurate “big picture” of the interrelated factors that affect recovery. Before beginning this work, the team laid out the statistical methods and protocol to ensure they were feeding the artificial intelligence system a balanced diet of the right data.
Previously, the team had published work in the journal Neurosurgery showing for the first time that patient-reported and objective wearable measurements improve predictions of early recovery compared to traditional patient assessments. In addition to Greenberg and Xu, Madelynn Frumkin, a PhD student studying psychological and brain sciences in Thomas Rodebaugh’s laboratory in Arts & Sciences, was co-first author of that work. Wilson “Zack” Ray, MD, the Henry G. and Edith R. Schwartz Professor of neurosurgery at the School of Medicine, was co-senior author, along with Rodebaugh and Lu. Rodebaugh is now at the University of North Carolina at Chapel Hill.
In that research, they showed that Fitbit data can be correlated with multiple surveys that assess a person’s social and emotional state. They collected that data via “ecological momentary assessments” (EMAs), which use smartphones to prompt patients to report their mood, pain levels and behavior multiple times throughout the day.
“We combine wearables, EMA and clinical records to capture a broad range of information about the patients, from physical activities to subjective reports of pain and mental health, and to clinical characteristics,” Lu said.
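As a rough illustration of what combining these data sources might look like in practice, the sketch below joins hypothetical per-patient summaries of wearable activity, EMA responses and clinical characteristics into a single feature table. The column names and values are invented for this example and are not drawn from the study.

```python
import pandas as pd

# Hypothetical per-patient summaries; all columns and values are illustrative only.
wearable = pd.DataFrame({"patient_id": [1, 2],
                         "mean_daily_steps": [4200, 6800],
                         "mean_sleep_hours": [6.1, 7.4]})
ema = pd.DataFrame({"patient_id": [1, 2],
                    "mean_pain_rating": [6.5, 3.2],
                    "mean_mood_rating": [4.1, 7.0]})
clinical = pd.DataFrame({"patient_id": [1, 2],
                         "age": [58, 64],
                         "baseline_function_score": [32, 45]})

# Join the three modalities into one feature table per patient.
features = wearable.merge(ema, on="patient_id").merge(clinical, on="patient_id")
print(features)
```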
Greenberg added that state-of-the-art statistical tools, such as “Dynamic Structural Equation Modeling,” which Rodebaugh and Frumkin have helped advance, were key in analyzing the complex, longitudinal EMA data.
Aim to improve long-term outcomes
For the most recent study, the team brought all those factors together and developed a new machine-learning technique, “Multi-Modal Multi-Task Learning,” to effectively combine the different types of data and predict multiple recovery outcomes.
In this approach, the AI learns to weigh the relatedness among the outcomes while capturing their differences from the multimodal data, Lu added.
The method identifies information shared across the interrelated tasks of predicting different outcomes and then leverages that shared information to help the model make accurate predictions, according to Xu.
It all comes together in the final package, producing a predicted change in each patient’s post-operative pain interference and physical function scores.
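To make the idea of multi-task learning from multiple modalities concrete, here is a minimal sketch of one common way such a model can be structured: separate encoders for wearable, EMA and clinical features feed a shared layer, and separate heads predict each outcome. This is not the published model; the architecture, layer sizes and feature dimensions are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class MultiModalMultiTaskNet(nn.Module):
    """Illustrative multi-modal multi-task sketch: one encoder per modality,
    a shared fusion layer, and one head per recovery outcome.
    All dimensions are hypothetical, not taken from the study."""

    def __init__(self, wearable_dim=64, ema_dim=16, clinical_dim=32, hidden=128):
        super().__init__()
        # Modality-specific encoders
        self.wearable_enc = nn.Sequential(nn.Linear(wearable_dim, hidden), nn.ReLU())
        self.ema_enc = nn.Sequential(nn.Linear(ema_dim, hidden), nn.ReLU())
        self.clinical_enc = nn.Sequential(nn.Linear(clinical_dim, hidden), nn.ReLU())
        # Shared representation learned jointly across both prediction tasks
        self.shared = nn.Sequential(nn.Linear(3 * hidden, hidden), nn.ReLU())
        # One head per outcome: change in pain interference, change in physical function
        self.pain_head = nn.Linear(hidden, 1)
        self.function_head = nn.Linear(hidden, 1)

    def forward(self, wearable, ema, clinical):
        fused = torch.cat([
            self.wearable_enc(wearable),
            self.ema_enc(ema),
            self.clinical_enc(clinical),
        ], dim=-1)
        z = self.shared(fused)
        return self.pain_head(z), self.function_head(z)

# Joint training lets the two tasks share information through the common layers.
model = MultiModalMultiTaskNet()
wearable = torch.randn(8, 64)   # e.g., summarized Fitbit activity features
ema = torch.randn(8, 16)        # e.g., aggregated mood/pain EMA responses
clinical = torch.randn(8, 32)   # e.g., encoded clinical characteristics
pain_target = torch.randn(8, 1)      # placeholder observed changes in pain interference
function_target = torch.randn(8, 1)  # placeholder observed changes in physical function
pain_pred, function_pred = model(wearable, ema, clinical)
loss = (nn.functional.mse_loss(pain_pred, pain_target)
        + nn.functional.mse_loss(function_pred, function_target))
loss.backward()
```

Because the heads are trained jointly against a single combined loss, patterns useful for predicting one outcome can inform the other, which is the intuition behind the shared-information idea described above.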
Greenberg said the study is ongoing as the researchers continue to fine-tune their models so they can take more detailed assessments, predict outcomes and, most notably, “understand what types of factors can potentially be modified to improve longer-term outcomes.”