Stream files of any size and any content type through Kafka. Our new Rust-based file streaming pipeline for Linux and macOS delivers high-performance file transfers with enterprise reliability.
Our Rust-based File-streaming Pipeline for Linux and macOS brings powerful capabilities to your file transfer needs
Built with Rust for optimal performance, delivering exceptional throughput and reliability for streaming files of any size.
Available as macOS executables, Linux executables, a Docker container, or a Kubernetes Custom Resource for maximum deployment flexibility.
Stream files of any type and any size - documents, audio, video, images - by efficiently chunking them into manageable Kafka messages.
Built-in retry mechanism ensures reliable file transfers even over unstable network connections - "click send and forget" with confidence.
File chunks are sent in parallel across Kafka topic partitions and consumer groups for maximum throughput and efficiency.
Automatic MD5 checksum verification ensures files are reassembled perfectly at the destination with data integrity guaranteed.
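The chunking idea behind these features can be sketched in a few lines of Python (an illustration only; the chunk size and header fields shown here are assumptions, not the pipeline's actual wire format):

```python
CHUNK_SIZE = 512 * 1024  # assumed chunk size; the real pipeline's default may differ


def chunk_file(data: bytes, chunk_size: int = CHUNK_SIZE):
    """Split a byte stream into fixed-size binary chunks with index metadata,
    mirroring how an uploader maps one file onto many Kafka messages."""
    total = (len(data) + chunk_size - 1) // chunk_size
    for index in range(total):
        chunk = data[index * chunk_size:(index + 1) * chunk_size]
        # Each Kafka message would carry the chunk payload plus
        # metadata headers along these lines (names are hypothetical).
        yield {"file": "example.bin", "index": index, "total": total, "payload": chunk}
```

Because chunks are cut at fixed binary offsets rather than at line or record boundaries, the same logic works for PDFs, video, images, or any other content type.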
The Streamsend File-chunk Pipeline works by efficiently chunking and reassembling files
The Uploader splits input files into fixed-size binary chunks and sends them as Kafka messages with metadata headers.
Messages flow through your Kafka cluster with all the benefits of Kafka's reliability, scalability, and security.
The Downloader consumes these chunks, reorders them if needed, and reassembles the original file at the destination.
MD5 checksums are calculated and verified to ensure the reassembled file is identical to the source file.
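The downloader's reorder-and-verify step might look like the following sketch (Python's `hashlib` stands in for the pipeline's MD5 implementation, and the chunk field names are assumptions):

```python
import hashlib


def reassemble(chunks):
    """Reorder chunks by index and verify the MD5 of the rebuilt file
    against the checksum computed from the source file."""
    ordered = sorted(chunks, key=lambda c: c["index"])
    data = b"".join(c["payload"] for c in ordered)
    digest = hashlib.md5(data).hexdigest()
    expected = ordered[0]["md5"]  # assumed: the uploader attaches the source MD5 as a header
    if digest != expected:
        raise ValueError(f"checksum mismatch: {digest} != {expected}")
    return data
```

A mismatch means a chunk was corrupted or lost, so the destination file is never silently written with bad data.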
The Streamsend File-chunk Pipeline excels in these challenging file transfer scenarios
Transfer large files from remote field teams with poor network connections. Our "click send and forget" approach handles service interruptions automatically with infinite retries.
Insurance providers can send updated policy PDFs alongside event messages from microservices. Use your existing Kafka infrastructure for both events and file transfers.
Stream files from one region to another through a central Kafka cluster. Fan-in from many uploaders to one downloader or fan-out from one uploader to many downloaders with ease.
The Streamsend File-chunk Pipeline enables three powerful patterns for enterprise file streaming
Many Uploaders stream to a Kafka topic for one Downloader. Perfect for retail point-of-sale terminals, dealership networks, or field devices that need to funnel data to a centralized system.
Add file streaming capabilities to your existing event streaming network. Use the same Kafka infrastructure for both events and file transfers, simplifying your data architecture.
Leverage Kafka's robust delivery semantics to ensure files eventually stream to completion, even on unreliable networks. Enjoy infinite retries, guaranteed delivery, flexible encryption, compression, and configurable in-flight connections.
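To illustrate the fan-in pattern above: a single downloader must demultiplex interleaved chunks arriving from many uploaders before it can reassemble each file. A minimal sketch, assuming each chunk message carries hypothetical `file`, `index`, and `total` fields:

```python
from collections import defaultdict


def demux(messages):
    """Group interleaved chunk messages from many uploaders by file name,
    returning each file's bytes once all of its chunks have arrived."""
    buffers = defaultdict(dict)  # file name -> {chunk index: payload}
    complete = {}
    for msg in messages:
        buf = buffers[msg["file"]]
        buf[msg["index"]] = msg["payload"]
        if len(buf) == msg["total"]:
            complete[msg["file"]] = b"".join(buf[i] for i in range(msg["total"]))
    return complete
```

Keying chunks by file lets many point-of-sale terminals or field devices share one topic while the central downloader rebuilds each file independently.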
Understanding the fundamentals of the Streamsend File-chunk Pipeline
Files are split along binary chunk boundaries (not based on lines or objects), allowing any file type to be streamed efficiently.
A complete pipeline consists of one (or more) Uploaders that produce chunks and one (or more) Downloaders that consume and reassemble them.
Subdirectory structures are preserved from source to destination, so the directory layout itself can carry organizational metadata about the files.
Choose from macOS executables, Linux executables, a Docker container, or a Kubernetes Custom Resource deployment.
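To show how subdirectory preservation can work in practice, here is a sketch that maps a source file onto the same relative path under the destination root (the function name and directory roots are hypothetical, not the product's API):

```python
from pathlib import PurePosixPath


def destination_path(source_root: str, source_file: str, dest_root: str) -> str:
    """Map a file under the uploader's source directory to the same
    relative subdirectory under the downloader's destination directory."""
    rel = PurePosixPath(source_file).relative_to(source_root)
    return str(PurePosixPath(dest_root) / rel)
```

Under this scheme, a file uploaded from `/in/reports/2024/` lands under `reports/2024/` at the destination, so directory conventions survive the transfer.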