Moving AI models from development to production comes with its own set of challenges and risks when the Edge is involved.
In this laboratory we're going to learn how to overcome these challenges by leveraging the Nvidia Triton™ Inference Server alongside Eclipse Kura and Eclipse Kapua, two Eclipse frameworks for the Edge and the Cloud respectively.
This laboratory will give attendees hands-on experience with the entire lifecycle of an Edge AI application, from data collection through training to model deployment.
We're going to build an anomaly detection model for a field appliance in TensorFlow Keras and learn the steps required to make it available through the Nvidia Triton™ Inference Server, optimize it for improved inference times, and deploy it on the edge using Nvidia devices for GPU-accelerated inference.
During the tutorial we'll focus on the process of creating a deep learning anomaly detector from scratch, leveraging the entire Eclipse ecosystem:
Data collection: Kura Wires for collecting diagnostic data from the appliance and uploading it to the Eclipse Kapua cloud.
Training: how to download the collected data from the Kapua cloud and develop a model in TensorFlow Keras (both steps are sketched after this list).
Inference: how to export the trained model to run on an inference server, the Nvidia Triton™ Inference Server, which will be our target runtime (see the export sketch below).
Deployment: how to deploy the trained model to the edge device through the intuitive Kura Wires interface and run it with Eclipse Kura's inference engine service (a client-side sketch closes this section).
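
To make the training step concrete, here is a minimal sketch of pulling the collected telemetry out of Kapua over its REST API. The base URL, scope id, credentials, client id, and the JSON shape of the stored messages are all assumptions for illustration; verify them against your Kapua deployment and its API documentation.

```python
import pandas as pd
import requests

# Hypothetical endpoint and credentials -- adapt to your deployment.
BASE = "https://kapua.example.com/api"
SCOPE = "AQ"  # placeholder scope id

# Authenticate and obtain an access token.
auth = requests.post(f"{BASE}/v1/authentication/user",
                     json={"username": "demo", "password": "demo-password"})
auth.raise_for_status()
headers = {"Authorization": f"Bearer {auth.json()['tokenId']}"}

# Fetch the stored data messages published by our appliance.
resp = requests.get(f"{BASE}/v1/{SCOPE}/data/messages", headers=headers,
                    params={"clientId": "field-appliance-1", "limit": 500})
resp.raise_for_status()

# Flatten each message's metrics into one row; the exact metrics shape
# depends on the Kapua version, so treat this as illustrative.
rows = []
for msg in resp.json().get("items", []):
    metrics = msg.get("payload", {}).get("metrics", [])
    if isinstance(metrics, dict):  # some versions nest a "metric" list
        metrics = metrics.get("metric", [])
    rows.append({m["name"]: m["value"] for m in metrics})

pd.DataFrame(rows).to_csv("telemetry.csv", index=False)
```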
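
As for the model itself, one plausible shape is a small dense autoencoder trained on normal-only telemetry, with the reconstruction error acting as the anomaly score. The feature count, layer sizes, and the random stand-in data below are placeholders rather than the lab's actual model.

```python
import numpy as np
from tensorflow import keras

num_features = 4  # placeholder feature count
train = np.random.rand(1000, num_features).astype("float32")  # stand-in for telemetry.csv

# A small autoencoder: trained to reconstruct normal telemetry, so
# anomalous readings come back with a large reconstruction error.
autoencoder = keras.Sequential([
    keras.Input(shape=(num_features,)),
    keras.layers.Dense(8, activation="relu"),
    keras.layers.Dense(2, activation="relu"),  # bottleneck
    keras.layers.Dense(8, activation="relu"),
    keras.layers.Dense(num_features, activation="sigmoid"),
])
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(train, train, epochs=20, batch_size=32, validation_split=0.1)

# Choose an anomaly threshold from the error distribution on normal data.
errors = np.mean(np.square(train - autoencoder.predict(train)), axis=1)
threshold = float(np.percentile(errors, 99))
print("reconstruction-error threshold:", threshold)
```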
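
To serve the model, Triton's TensorFlow backend loads SavedModels from a versioned model repository. Here is a sketch of exporting the trained Keras model into that layout, where the repository path and model name are our own choices:

```python
import os
import tensorflow as tf

# Triton expects: <repo>/<model_name>/<version>/model.savedmodel
export_dir = os.path.join("model_repository", "anomaly_detector", "1",
                          "model.savedmodel")
os.makedirs(os.path.dirname(export_dir), exist_ok=True)

# `autoencoder` is the trained Keras model from the previous sketch.
# With TF 2.x this writes a SavedModel; on Keras 3 use model.export() instead.
tf.saved_model.save(autoencoder, export_dir)
```

Depending on the Triton version and startup flags, the server can auto-complete the model configuration for SavedModels; otherwise a hand-written config.pbtxt sits next to the version directory. Optimizations such as reduced precision or a TensorRT conversion can then be applied for improved inference times.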
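
Finally, once the model is deployed through Kura Wires, the gateway can score live telemetry against it. Kura's inference engine service manages a Triton instance that also exposes Triton's standard HTTP API, so a client-side check looks roughly like this; the endpoint, tensor names, and threshold value are all assumptions, so read the real input/output names from the model's metadata.

```python
import numpy as np
import tritonclient.http as httpclient

# Endpoint, model name, tensor names, and threshold are assumptions.
client = httpclient.InferenceServerClient(url="localhost:8000")
if not client.is_model_ready("anomaly_detector"):
    raise RuntimeError("model not loaded on the edge Triton instance")

sample = np.random.rand(1, 4).astype(np.float32)  # one telemetry reading
inp = httpclient.InferInput("input_1", list(sample.shape), "FP32")
inp.set_data_from_numpy(sample)
result = client.infer("anomaly_detector", inputs=[inp])
recon = result.as_numpy("output_1")

# Reconstruction error vs. the threshold chosen during training.
threshold = 0.05  # placeholder value from the training sketch
score = float(np.mean(np.square(sample - recon)))
print("anomaly" if score > threshold else "normal", round(score, 4))
```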