Stream Processing Meetup Munich 25/4

Sign up - Stream Processing Meetup, Tue, Apr 25, 2023, 6:00 PM | Meetup

Join us on April 25th for an evening of talks and networking around stream processing and building real-time data applications. This event is sponsored by Quix, a platform for building real-time applications.

Doors and drinks from 18:00
Talks from 18:30

Interested in speaking? We're always on the lookout for people who can share an interesting story about something they're building in the streaming/real-time space. First-time speakers are welcome! Email me -

Delete Your Database. The Stream is Your Source of Truth - Tun Shwe, VP of Data at Quix

Moving from batch to streaming requires adjustments in both your team's architecture and your thinking. In this talk, Tun will examine the challenges he sees data engineering teams face when moving to streaming.

How to Make Your Data Scientists Love Real-time - Ralph M. Debusmann, Enterprise Kafka Engineer, Migros-Genossenschafts-Bund

Implementing real-time data pipelines is still a challenge - even more so for data scientists, who were often brought up on batch processing and files, and have typically only heard of Kafka but never really used it.

Now, if your team consists only of data scientists and you want them to implement a real-time data pipeline, your fate seems sealed: you need to hire real-time and streaming experts first, and you are guaranteed to lose months before you can even start implementing. But is there really no other way?

At Forecasty.AI, developer of Commodity Desk, a SaaS platform for commodity price forecasting, we thought again and came up with a way to bridge the gap between the batch- and file-based world of most data scientists and the world of real-time streaming.

Our solution is a new open-source library that allows any Python programmer, including the aforementioned data scientists, to access the Kafka API more easily than ever before. In this session, we show you how it can be used to bring the two disparate worlds of files and streaming together - and thus not only save a lot of time and money on hiring real-time and streaming experts, but also make your data scientists start loving real-time.
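The library itself isn't named in the abstract, so purely as an illustration of the idea: the batch-to-streaming bridge the talk describes amounts to turning file-style rows into the keyed, serialized messages a Kafka producer would send. A minimal sketch using only the standard library (the column names and data are hypothetical, and a real pipeline would hand each message to a producer instead of collecting them in a list):

```python
import csv
import io
import json

def rows_to_messages(csv_text, key_column):
    """Convert batch/file-style rows into (key, value) byte pairs -
    the shape a Kafka producer expects for each record.
    All field names here are made up for illustration."""
    messages = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        key = row[key_column].encode("utf-8")      # partitioning key
        value = json.dumps(row).encode("utf-8")    # serialized payload
        messages.append((key, value))
    return messages

# A tiny batch of "file" data, as a data scientist might have it on disk.
batch = "commodity,price\ncopper,8450\nnickel,22100\n"
msgs = rows_to_messages(batch, key_column="commodity")
```

In a streaming setup, the loop body would call something like `producer.send(topic, key=key, value=value)` rather than appending to a list; the point is that the per-record transformation is the same whether the rows come from a file or a live feed.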