We’re happy to announce the latest release of Conduit. While previous releases of Conduit have focused on particular features, for this release our focus is performance. Our goal is to make Conduit the default tool for data movement, and handling workloads that demand high throughput is critical to achieving that goal.
We’re happy to report that we’ve boosted performance by over 2.5x, to almost 70k msg/s through a single Kafka-to-Kafka pipeline. We achieved this increase with various improvements to the core of Conduit itself as well as to our Kafka Connector.
We’ve made great strides in improving Conduit’s performance, but there are still additional improvements we’re eyeing. One of the most promising areas is micro-batching. With micro-batching, N records are combined into a single record for processing and then split back into N records for writing to the destination. With this experimental batching work, we’ve been able to push almost 250k msg/s through a single pipeline. This is really exciting and shows just how much more room the team has to improve performance.
If you’d like to check it out, the experimental work on micro-batching can be found in a branch in the Conduit repo.
We’d love your feedback!
Check out the full release notes on the Conduit Changelog.