12–17 Jul 2026
University of Graz
Europe/Vienna timezone

Enhancing Patient Privacy and Model Interpretability with Tensor Trains

15 Jul 2026, 08:30
20m
15.06 - HS (University of Graz)

Contributed Talk | Mathematical Oncology Contributed Talks

Speaker

Juliette Sinnott (University of Waterloo)

Description

When applying Machine Learning (ML) to medical problems, privacy and interpretability are of utmost concern. The inner workings of a model need to be well understood to strengthen user trust, and any patient data used in training should be fully anonymized. These two values can work in opposition: transparent models such as Logistic Regression (LR) can unintentionally leak information about who was included in the training procedure. We propose the application of Tensor Trains (TTs), originally developed for use in quantum physics, to remedy this issue \cite{Pareja2025}. Any ML model, including Neural Networks (NNs), can be decomposed into TT format, effectively obscuring training information while maintaining predictive performance and interpretability. We demonstrate this on published LR and NN models designed to predict immunotherapy responses \cite{Chang2024}. We show how employing TTs on these models decreases the accuracy of Membership Inference Attacks \cite{Shokri2017}. Furthermore, we demonstrate how to extract biological insight from these more private models, including computing feature importance, examining the monotonicity of predictions, and even recovering LR coefficients. These insights are not immediately available in most models, suggesting that TTs offer significant interpretability and privacy benefits.
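The abstract does not specify how the TT decomposition is carried out; a standard route is the TT-SVD algorithm, which factorizes a multi-way weight tensor into a chain of small three-way cores via sequential truncated SVDs. The sketch below is purely illustrative (function names and the fixed-rank truncation are my assumptions, not the authors' implementation):

```python
import numpy as np

def tt_decompose(tensor, max_rank):
    """Decompose a d-way tensor into tensor-train (TT) cores via sequential SVDs.

    Illustrative TT-SVD sketch; the truncation strategy (a single max_rank for
    every bond) is an assumption, not taken from the talk.
    """
    shape = tensor.shape
    d = len(shape)
    cores = []
    rank = 1  # bond dimension entering the current core
    mat = tensor.reshape(rank * shape[0], -1)
    for k in range(d - 1):
        U, S, Vt = np.linalg.svd(mat, full_matrices=False)
        r_next = min(max_rank, len(S))  # truncate the bond dimension
        cores.append(U[:, :r_next].reshape(rank, shape[k], r_next))
        # Carry the remaining factor forward and expose the next physical index
        mat = (np.diag(S[:r_next]) @ Vt[:r_next]).reshape(r_next * shape[k + 1], -1)
        rank = r_next
    cores.append(mat.reshape(rank, shape[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the TT cores back into a full tensor (to check the approximation)."""
    res = cores[0]  # shape (1, n0, r1)
    for core in cores[1:]:
        res = np.tensordot(res, core, axes=([-1], [0]))
    return res.reshape([c.shape[1] for c in cores])
```

With the bond rank capped, the cores store far fewer parameters than the original tensor, which is the mechanism by which fine-grained training information is obscured while the overall input-output map is preserved.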

Bibliography

@article{Chang2024, title={LORIS robustly predicts patient outcomes with immune checkpoint blockade therapy using common clinical, pathologic and genomic features}, volume={5}, ISSN={2662-1347}, url={https://www.nature.com/articles/s43018-024-00772-7}, DOI={10.1038/s43018-024-00772-7}, number={8}, journal={Nature Cancer}, author={Chang, Tian-Gen and Cao, Yingying and Sfreddo, Hannah J. and Dhruba, Saugato Rahman and Lee, Se-Hoon and Valero, Cristina and Yoo, Seong-Keun and Chowell, Diego and Morris, Luc G. T. and Ruppin, Eytan}, year={2024}, month=jun, pages={1158–1175}, language={en} }

@article{Shokri2017, title={Membership Inference Attacks against Machine Learning Models}, url={http://arxiv.org/abs/1610.05820}, DOI={10.48550/arXiv.1610.05820}, note={arXiv:1610.05820}, number={arXiv:1610.05820}, publisher={arXiv}, author={Shokri, Reza and Stronati, Marco and Song, Congzheng and Shmatikov, Vitaly}, year={2017}, month=mar }

@article{Pareja2025, title={Tensorization of neural networks for improved privacy and interpretability}, volume={8}, ISSN={2666-9366}, url={https://scipost.org/10.21468/SciPostPhysCore.8.4.095}, DOI={10.21468/SciPostPhysCore.8.4.095}, number={4}, journal={SciPost Physics Core}, author={Pareja Monturiol, José Ramón and Pozas-Kerstjens, Alejandro and Pérez-García, David}, year={2025}, month=dec, pages={095}, language={en} }

Author

Jose Ramon Pareja Monturiol (Universidad Complutense de Madrid)

Co-authors

Juliette Sinnott (University of Waterloo), Roger Melko (Perimeter Institute for Theoretical Physics), Mohammad Kohandel (University of Waterloo, Canada)
