PLoS Comput Biol. 2025 Nov 24;21(11):e1013734. doi: 10.1371/journal.pcbi.1013734. Online ahead of print.
ABSTRACT
Liver transplantation is often the only means of saving patients with end-stage liver disease. In particular, living donor liver transplantation (LDLT) has gained importance in recent years owing to its shorter waiting times and better graft quality compared with deceased donor liver transplantation (DDLT). However, some patients experience graft loss due to unexpected infections, sepsis, or immune-mediated rejection of the transplanted organ. An urgent need therefore exists to clarify which patients will experience graft loss. Several predictive models have been proposed, but most analyze classic DDLT, and knowledge about LDLT is lacking. In this study, we retrospectively analyzed clinical data from 748 patients who underwent LDLT. By applying machine learning methods, we predicted early graft loss (within 180 days postoperatively) with better performance than conventional models. The model enabled us to stratify a highly heterogeneous patient sample into five groups. By focusing on survival time, we next categorized the patients into three groups with early, intermediate, and late or no graft loss. Notably, we identified the intermediate-loss group as a distinct population similar to the early-loss population but with a different survival time. Additionally, by proposing a hierarchical prediction method, we developed an approach to distinguish these populations using data up to 30 days postoperatively. Our findings will enable the early identification of individuals at risk of graft loss, particularly those in the early- and intermediate-loss groups. This will allow for appropriate patient care, such as switching to DDLT, identifying other living donors for LDLT, or preparing for re-transplantation, leading to a bottom-up improvement in transplant success rates.
PMID:41284646 | DOI:10.1371/journal.pcbi.1013734
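The "hierarchical prediction method" mentioned in the abstract can be illustrated as a two-stage classifier: a first stage flags patients at risk of any graft loss from early postoperative data, and a second stage, applied only to flagged patients, separates the early-loss from the intermediate-loss group. The sketch below is purely illustrative — the features, labels, and models are synthetic stand-ins, not the study's actual variables or algorithms:

```python
# Hypothetical two-stage ("hierarchical") prediction sketch on synthetic data.
# Stage 1: graft loss vs. no/late loss; Stage 2 (flagged patients only):
# early vs. intermediate loss. Not the paper's actual pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 600
X = rng.normal(size=(n, 5))                 # stand-in for day-30 clinical features
risk = X[:, 0] + 0.5 * X[:, 1]              # synthetic latent risk score
loss = (risk + rng.normal(scale=0.5, size=n)) > 1.0            # any graft loss
early = loss & ((X[:, 2] + rng.normal(scale=0.5, size=n)) > 0)  # early vs. intermediate

# Stage 1: trained on all patients
stage1 = LogisticRegression().fit(X, loss)
# Stage 2: trained only on patients who experienced graft loss
stage2 = LogisticRegression().fit(X[loss], early[loss])

# Hierarchical inference: stage 2 runs only where stage 1 flags risk
flagged = stage1.predict(X).astype(bool)
subtype = np.full(n, "no/late loss", dtype=object)
subtype[flagged] = np.where(stage2.predict(X[flagged]), "early", "intermediate")
print(sorted(set(subtype)))
```

Conditioning the second model on the first keeps each classifier focused on one decision boundary, which is one plausible reading of how a hierarchical scheme could separate populations with similar profiles but different survival times.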