Developing high-performance, large-scale stream processing applications is a challenging task. Choosing the right tool(s) is crucial to getting the job done; as developers, we tend to focus on performance, simplicity, and cost. However, the cost grows quickly if we end up with two or more tools for the same task: simply put, development time, deployment effort, and maintenance costs are multiplied by the number of tools. Kafka is great for event streaming architectures, continuous data integration (ETL), and as a messaging system of record (a database). However, Kafka comes with challenges: it has a complex architecture with many moving parts, it cannot be embedded, and it is centralized middleware, just like a database. Moreover, Kafka does not offer batch processing, and every intermediate step is materialized to disk, which leads to enormous disk-space usage. In this talk, we will address these challenges and show how real-time stream processing can enhance Kafka pipelines by simplifying deployment and operations, delivering ultra-low latency with a lightweight architecture that makes it a great fit for edge (resource-constrained) environments. This talk aims to take your Kafka applications to the next level: the combination of real-time storage and compute provides a unique synergy that lets applications address real-time use cases at any scale.
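
For concreteness, below is a minimal Kafka Streams sketch of the kind of pipeline the talk discusses (topic names, application id, and broker address are illustrative assumptions, not taken from the talk). The stateful count is backed by a local state store plus a changelog topic replicated to the brokers, one instance of the disk materialization of intermediate steps mentioned above.

    // Illustrative sketch only: a simple stateful event-count topology.
    // Topic names ("input-events", "event-counts") are hypothetical.
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.StreamsConfig;
    import org.apache.kafka.streams.kstream.KStream;
    import org.apache.kafka.streams.kstream.Produced;

    import java.util.Properties;

    public class EventCountPipeline {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "event-count-pipeline");
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
            props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

            StreamsBuilder builder = new StreamsBuilder();
            KStream<String, String> events = builder.stream("input-events");

            // The stateful count below is materialized in a state store backed
            // by a changelog topic on the brokers -- the disk usage for
            // intermediate steps that the abstract refers to.
            events.groupByKey()
                  .count()
                  .toStream()
                  .to("event-counts", Produced.with(Serdes.String(), Serdes.Long()));

            KafkaStreams streams = new KafkaStreams(builder.build(), props);
            streams.start();
            Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
        }
    }
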
Objective of the presentation:
How to take your Kafka applications to the next level.
Attendee pre-requisites - If none, enter "N/A":