In this talk, we will describe the challenges that Edge AI presents from an Intellectual Property protection perspective: our models, the result of significant investments in time and resources, are deployed in the field, exposing them to the risk of being stolen by malicious actors.
Our trained AI models run on a set of well-known, popular Deep Learning frameworks, which cannot prevent our models from being run against our will. Consider the following scenario: we have finally deployed our precious AI model in the field. Hundreds of devices performing inference are scattered across a large industrial park. What prevents someone from taking one apart and accessing its filesystem? What prevents someone from taking our model and running it on their own machine, leveraging the same framework we use in production?
During this talk, we will walk through an example scenario in which a simple prosthetic hand is driven by an AI model performing inference on a webcam stream. The model performs hand pose estimation on each frame and provides the resulting data to an Everyware Software Framework (ESF) instance, which drives the prosthetic hand. We will describe how ESF can protect the AI models deployed on the edge device by integrating with the Nvidia Triton Server, the inference server used in the application.