Building Banking Systems with Kafka Streams with Mateo Rojas | Ep. 28

Streaming Audio (Kafka / Confluent)

Apr 20, 2026

Why It Matters

Understanding these real‑world struggles helps engineers avoid repeating early‑stage mistakes when designing event‑driven systems, especially in regulated domains like finance. The discussion highlights why choosing the right orchestration tool and security model is critical as streaming platforms mature and become central to business logic.

Key Takeaways

  • Early Kafka Streams joins required years-long window hacks.
  • KTables used for deduplication in the absence of exactly‑once semantics.
  • Encryption at rest unsolved; relied on TLS and limited retention.
  • Workflow engines now preferred over complex stream joins for orchestration.
  • Kafka as source of truth raises consistency and security trade‑offs.

Pulse Analysis

In the early days of Kafka Streams, developers like Mateo Rojas wrestled with windowed joins that demanded absurdly long time windows—sometimes years—to guarantee that disparate events could be correlated. Without mature exactly‑once semantics, teams resorted to oversized windows and KTables for deduplication, storing each incoming record's ID to prevent double processing. These work‑arounds highlighted the steep learning curve of event‑driven microservices and the fragility of early stream orchestration, especially when coordinating KYC, AML, and banking approvals across multiple topics.
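The KTable-based deduplication described above can be modeled outside Kafka as a keyed state store that remembers every event ID already processed. The sketch below is a simplified Python illustration of that pattern, not the Kafka Streams API; the event shape and the in-memory `seen` store are assumptions standing in for a real state store:

```python
# Simplified model of KTable-style deduplication: a keyed store remembers
# each event ID already handled, so redeliveries and replays are dropped
# instead of being processed twice. In Kafka Streams this state would live
# in a persistent state store; here a plain dict stands in.

def process_stream(events, handler):
    seen = {}  # event_id -> True; stand-in for a persistent keyed store
    for event in events:
        event_id = event["id"]
        if event_id in seen:
            continue  # duplicate delivery: skip it
        seen[event_id] = True
        handler(event)

results = []
process_stream(
    [{"id": "a", "amount": 10},
     {"id": "a", "amount": 10},   # redelivered duplicate
     {"id": "b", "amount": 5}],
    lambda e: results.append(e["id"]),
)
# results now holds each event ID exactly once: ["a", "b"]
```

The trade-off the episode highlights is that this store grows without bound unless records are expired, which is exactly why teams paired it with oversized windows and retention limits.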

Security proved another formidable obstacle. The immutable nature of Kafka topics made at‑rest encryption a persistent challenge; rotating keys required re‑processing entire logs, an impractical task for a system treated as the source of truth. Teams settled for TLS in transit and short retention periods, while a dedicated decryption service handled occasional privileged reads. This approach balanced regulatory compliance with operational feasibility, yet underscored the trade‑offs between data confidentiality and the convenience of Kafka‑centric state reconstruction.

Today, the industry favors workflow engines over intricate stream joins for complex banking orchestration. Modern platforms provide built‑in state management, visual modeling, and reliable compensation mechanisms, reducing the need for custom window logic and manual deduplication. The debate over using Kafka as a primary source of truth continues, with practitioners weighing eventual consistency against strong consistency guarantees and security requirements. By learning from early missteps—oversized windows, ad‑hoc deduplication, and incomplete encryption—organizations can design more resilient, auditable, and secure streaming architectures that align with contemporary best practices.
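The compensation mechanisms these workflow engines provide follow the saga pattern: run each step, record its undo action, and on failure roll back completed steps in reverse order. Below is a generic Python sketch of that idea, not any particular engine's API; the step names and return values are illustrative assumptions:

```python
# Saga-style orchestration sketch: each step pairs a forward action with a
# compensating action. If any step fails, the steps that already succeeded
# are compensated in reverse order, restoring consistency without a
# distributed transaction.

def run_saga(steps):
    """steps: list of (action, compensation) callables.
    Returns (completed, compensated) step counts."""
    done = []
    for action, compensation in steps:
        try:
            action()
            done.append(compensation)
        except Exception:
            # Roll back everything that already succeeded, newest first.
            for comp in reversed(done):
                comp()
            return len(done), len(done)
    return len(done), 0

log = []

def kyc_check():
    raise RuntimeError("KYC check failed")  # simulated failure

steps = [
    (lambda: log.append("reserve_funds"), lambda: log.append("release_funds")),
    (kyc_check, lambda: log.append("noop")),
]
completed, compensated = run_saga(steps)
# log == ["reserve_funds", "release_funds"]: the reservation was undone
```

A workflow engine adds durability, retries, and visibility on top of this core loop, which is what makes it preferable to hand-rolled stream joins for multi-step banking approvals.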

Episode Description

Adi Polak talks to Mateo Rojas (LittleHorse) about his career working with Kafka Streams. In his first job, Mateo built a real-money policy management platform on early Kafka Streams. His current challenge: working at LittleHorse with Kafka as the backbone of a workflow engine, and deciding whether it should be the source of truth.

SEASON 2

Hosted by Tim Berglund, Adi Polak and Viktor Gamov

Produced and Edited by Noelle Gallagher, Peter Furia and Nurie Mohamed

Music by Coastal Kites 

Artwork by Phil Vo 

 🎧 Subscribe to Confluent Developer wherever you listen to podcasts. 

▶️ Subscribe on YouTube, and hit the 🔔 to catch new episodes.

👍 If you enjoyed this, please leave us a rating. 

🎧 Confluent also has a podcast for tech leaders: "Life Is But A Stream" hosted by our friend, Joseph Morais.
