Confidential Deep Learning on End-user Devices
About
Performing deep learning on end-user devices provides fast offline inference results and can help protect the user’s privacy. However, running models on untrusted client devices exposes model information that may be proprietary: the operating system or other applications on the device may be manipulated to copy and redistribute this information, infringing on the model provider’s intellectual property. CAPR-DL leverages ARM TrustZone, a hardware-based security feature present in most phones, to confidentially run a proprietary model on an untrusted end-user device.
Publications
- Providing User-Controlled Privacy and Model Confidentiality with On-Device Deep Learning. Jean-Baptiste Truong, Peter M. VanNostrand, Ioannis Kyriazis, Michelle Cheng, Tian Guo, Robert J. Walls. Under submission.
- Confidential Deep Learning: Executing Proprietary Models on Untrusted Devices. Peter M. VanNostrand, Ioannis Kyriazis, Michelle Cheng, Tian Guo, Robert J. Walls. Great Lakes Security Day. (Oral presentation)
- Confidential Deep Learning: Executing Proprietary Models on Untrusted Devices. Peter M. VanNostrand, Ioannis Kyriazis, Michelle Cheng, Tian Guo, Robert J. Walls. arXiv:1908.10730. (Paper)
Project Personnel
Graduate Students
- Jean-Baptiste Truong jtruong2@wpi.edu
- Alex Kasparek ajkasparek@wpi.edu
Undergraduate Students
- Peter M. VanNostrand pmvannos@buffalo.edu
- Ioannis Kyriazis ikyriazis@wpi.edu
- Michelle Cheng miche2lec@gmail.com
Principal Investigators
- Robert J. Walls rjwalls@wpi.edu
- Tian Guo tian@wpi.edu
Acknowledgements
Undergraduate researchers were supported by NSF REU programs under grants CNS-1852498, CNS-1560229 (WPI Data Science), CNS-1755659, and CNS-1815619.