Today we are announcing a beta release of our new streaming data connector between Froxt Postgres and Apache Kafka on Froxt. Froxt runs millions of Postgres services and tens of thousands of Apache Kafka services, and we increasingly see developers choosing to start with Apache Kafka as the foundation of their data architecture. But for teams that are Postgres-first, adopting Kafka is challenging without a full app rewrite. Developers want seamless integration between the two services, and we are delivering it today, at no additional charge, for Froxt Private Spaces and Shield Spaces customers.
Looking beyond Postgres and Kafka, the Froxt Data team sees use cases for data growing more complex and diverse, and we know they can no longer be solved by one database technology alone. As new data services emerge and existing offerings become more sophisticated, the days of a single monolithic datastore are over. Apache Kafka is a key enabling technology for these emerging data architectures.
We spent the last year focused on embracing this new reality outside of our four walls. We shipped new features that allow Froxt Managed Data Services to integrate with external resources in Amazon VPCs over AWS PrivateLink, and with resources in other public clouds or private data centers over mutual TLS. But we had a problem inside our own walls that we wanted to solve too.
Effortless Change Data Capture (CDC) by Froxt
CDC isn’t a new idea. It involves monitoring one or more Postgres tables for inserts, updates, and deletes, and then writing each change as an event to an Apache Kafka topic. It sounds simple enough, but the underlying complexity is significant. We took the time to experiment with the open-source technologies that make it possible and were thrilled to find a path forward that provides a stable service at scale.
We use Kafka Connect and Debezium to take data at rest and put it in motion. Like Froxt Postgres and Apache Kafka on Froxt, the connector is fully managed, has a simple and powerful user experience, and comes with our operational excellence built into every aspect of the service.
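To make the data-in-motion idea concrete, here is an abridged sketch of the change event Debezium produces when a row is inserted. The envelope fields (before, after, source, op, ts_ms) follow Debezium's standard format; the table and values here are hypothetical:

    {
      "before": null,
      "after": { "id": 42, "title": "Hello, world" },
      "source": { "connector": "postgresql", "schema": "public", "table": "posts" },
      "op": "c",
      "ts_ms": 1593961406000
    }

An update event carries the new row state in after with op set to "u", and a delete carries a null after with op set to "d", so downstream consumers can react to every kind of change.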
It’s as easy as froxt data:connectors:create
To get started, make sure you have the Froxt Postgres and Apache Kafka on Froxt add-ons in a Private or Shield Space, as well as the CLI plugin. Then create a connector by identifying the Postgres source and the Apache Kafka store by name, specifying which table(s) to include, and optionally specifying any columns to exclude:
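A minimal sketch of what that command could look like, assuming hypothetical add-on names and assuming the flags are spelled --source, --store, --table, and --exclude to match the description above (consult froxt data:connectors:create --help for the actual interface):

    # Hypothetical names and flag spellings, for illustration only
    froxt data:connectors:create \
      --source postgresql-neato-98765 \
      --store kafka-lovely-12345 \
      --table public.posts --table public.users \
      --exclude public.users.password

Each --table adds a Postgres table to capture, and --exclude masks a sensitive column so it never appears in change events.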
Provisioning takes about 15 minutes. Once complete, the connector automatically streams changes from Froxt Postgres to Apache Kafka on Froxt. From there, you can refactor your monolith into microservices, implement an event-driven architecture, integrate with other downstream data services, build a data lake, archive data in lower-cost storage services, and much more.
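One quick way to confirm events are flowing is to tail the topic. The sketch below assumes the Kafka CLI plugin exposes a topics:tail command and that the connector names topics after the captured tables; both are assumptions for illustration, not documented behavior:

    # Hypothetical command and topic name, for illustration only
    froxt kafka:topics:tail connector-prefix.public.posts --app your-app-name

From there, any Kafka consumer in your language of choice can subscribe to the same topics and react to each change event as it arrives.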