Environment perception is an essential aspect of reliable and safe autonomous driving for automated vehicles. Typical AD software stacks provide localization, object detection, and tracking using an array of sensor data. Sensor data is complemented by other information arriving from V2X connectivity, passenger behavior, person-to-device mapping, and other networked data such as the urban context, traffic degradation, and so on. All these components form a context for automating the driving, and digitizing this context allows for selecting, predicting, and adapting the dynamic set of services and applications that are deployed to the vehicle.
In this talk, we propose and describe models for defining context and services, and show how they can be used to identify and deploy services to an autonomous car using OpenCanvas, an extreme-scale open platform for in-vehicle delivery of context-driven services and applications. The platform also handles on-device, extreme-scale, near real-time stream processing, with the option of using AI-accelerated hardware architectures. The combination of shared context and the specification established at person-device pairing provides robust in-vehicle delivery performance. The talk includes a demonstration of delivering context-related, personalized, and dynamically provisioned capabilities, applications, and services.
OpenCanvas builds on Eclipse technologies such as MQTT and Ditto, and orchestrates edge services with containers and Kubernetes.
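To make the context model concrete, here is a minimal sketch of how a vehicle's digitized context could be represented as an Eclipse Ditto-style digital twin ("thing"). The `thingId`, attribute, and feature names below are illustrative assumptions, not OpenCanvas's actual schema, which is not detailed in this abstract.

```python
import json

# Illustrative sketch: a Ditto-style "thing" document representing a
# vehicle's digital twin. All identifiers and property names here are
# assumptions for illustration only.
vehicle_twin = {
    "thingId": "org.example:vehicle-42",       # hypothetical namespace:name id
    "attributes": {
        "manufacturer": "ExampleCars",
        "model": "AV-1",
    },
    "features": {
        # One feature bundling the dynamic driving context described above:
        # location, traffic conditions, and paired passenger devices.
        "context": {
            "properties": {
                "location": {"lat": 48.137, "lon": 11.575},
                "trafficDegradation": "moderate",
                "pairedDevices": ["device-7f3a"],
            }
        }
    },
}

# Serialize the twin so it could be published, e.g. over MQTT, to a
# twin service that tracks and distributes vehicle context.
payload = json.dumps(vehicle_twin)
print(payload)
```

A service-selection component could then subscribe to changes on the `context` feature and provision or retire in-vehicle applications as the twin's properties evolve.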