Back in June we launched Compute-Compute Separation for Populates, because big backfills or heavy transformations shouldn't slow down your production load or force you to over-provision your cluster. Tinybird can spin up extra compute just for those demanding jobs — and only when they happen — so you get the speed you need without paying for idle capacity.
Since the launch, we've made this even better:
- Faster start times – Replicas are ready 4x faster, making extra compute viable even for smaller jobs.
- Tuned performance – Settings are optimized per populate type for maximum throughput.
- More control via CLI – You can now choose which populates to run with extra compute directly from the Tinybird CLI in Classic.
- Better observability – See how long the job will take and track progress in real time.
- Now in GCP – Extra compute is available for GCP workloads, not just AWS.
The result: bigger, faster populates without slowing down your primary workloads — and without paying for resources you don't need 24/7.
How can you use it?
Tinybird Classic
Using compute-compute separation for populates in Tinybird Classic is straightforward. You can enable it from the Tinybird CLI when running your populate operations.
When pushing a materialization and triggering the populate:
tb push pipes/my_materialized_view.pipe --populate --on-demand-compute
Or just when triggering the populate:
tb pipe populate pipes/my_materialized_view.pipe --on-demand-compute
Check the docs for more details.
Tinybird Forward
In Forward, populates run automatically during deployments for workspaces where the feature is enabled. If you want to speed up your deployments, ping us at support@tinybird.co and we'll enable it for you.
The possibilities it opens
Compute-compute separation for populates aligns with Tinybird's core mission: enabling developers to ship faster. Schema iterations and the creation of new materialized views will no longer be slow or impact the main instance's workload.
Ephemeral replicas are also the first step toward more separated workloads. Sinks and copy pipes are the most obvious candidates, but other scenarios become possible too: a replica for BI or exploratory workloads that won't affect your production apps, a replica for queries from the MCP server, and more.
If you are interested in these features we'd love to hear from you.
How did we build it?
Building compute-compute separation required solving several complex technical challenges. We needed to create a system that could provision cloud resources on-demand, manage them efficiently, and ensure they integrate seamlessly with existing Tinybird infrastructure.
Key Technical Decisions
Multi-Cloud Support: We built the system to work with both AWS and GCP from the ground up, using a strategy pattern that allows different cloud implementations while maintaining a consistent interface.
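To make the strategy pattern concrete, here is a minimal Python sketch of the idea: each cloud gets its own implementation behind one shared interface. The class and method names (`CloudProvisioner`, `provision_replica`, and so on) are illustrative, not Tinybird's actual internals.

```python
# Strategy pattern sketch: one interface, per-cloud implementations.
# All names here are hypothetical, for illustration only.
from abc import ABC, abstractmethod


class CloudProvisioner(ABC):
    """Common interface every cloud implementation must satisfy."""

    @abstractmethod
    def provision_replica(self, size: str) -> str:
        ...


class AWSProvisioner(CloudProvisioner):
    def provision_replica(self, size: str) -> str:
        # In reality this would create EC2 / Pulumi AWS resources.
        return f"aws-replica-{size}"


class GCPProvisioner(CloudProvisioner):
    def provision_replica(self, size: str) -> str:
        # In reality this would create GCE / Pulumi GCP resources.
        return f"gcp-replica-{size}"


PROVIDERS = {"aws": AWSProvisioner, "gcp": GCPProvisioner}


def provision(cloud: str, size: str) -> str:
    # Callers pick a cloud by name; the calling code stays identical.
    return PROVIDERS[cloud]().provision_replica(size)
```

Adding a third cloud would then mean adding one class and one registry entry, with no changes to callers.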
Infrastructure as Code: We chose Pulumi over Terraform for better programmability and integration with our existing Python-based services. This allows us to manage complex cloud resources programmatically.
Asynchronous Setup: Instance setup uses Kubernetes jobs to handle the ClickHouse configuration process asynchronously, ensuring the API remains responsive while setup operations complete in the background.
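The shape of that asynchronous flow can be sketched in a few lines: the API call records a pending status and returns immediately, while the configuration work completes in the background. In production the background work is a Kubernetes Job, not a thread, and all names below are made up for illustration.

```python
# Illustrative sketch of asynchronous setup: the API responds right
# away; configuration finishes in the background (a thread here stands
# in for a Kubernetes Job).
import threading
import time

status: dict[str, str] = {}


def configure_clickhouse(replica_id: str) -> None:
    # Stand-in for the real ClickHouse configuration steps.
    time.sleep(0.1)
    status[replica_id] = "ready"


def start_setup(replica_id: str) -> str:
    status[replica_id] = "provisioning"
    threading.Thread(target=configure_clickhouse, args=(replica_id,)).start()
    # The API returns immediately; clients poll the status afterwards.
    return status[replica_id]
```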
Performance Optimizations
Faster Provisioning: We optimized the provisioning process to reduce startup time from 20 minutes to ~5 minutes by:
- Pre-warming common AMIs and images
- Optimizing Pulumi stack creation
- Implementing parallel resource creation where possible
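The parallelism point is easy to illustrate: independent resources can be created concurrently instead of one after another, so total setup time approaches the duration of the slowest single step. The resource names and timings below are made up; `concurrent.futures` stands in for whatever orchestration runs the real cloud calls.

```python
# Sketch of parallel resource creation: three independent resources
# finish in roughly the time of one sequential call, not three.
import time
from concurrent.futures import ThreadPoolExecutor


def create_resource(name: str) -> str:
    time.sleep(0.1)  # stand-in for a slow cloud API call
    return f"{name}:created"


resources = ["network", "disk", "instance-profile"]

start = time.monotonic()
with ThreadPoolExecutor() as pool:
    results = list(pool.map(create_resource, resources))
elapsed = time.monotonic() - start

# Concurrent: ~0.1s total instead of ~0.3s sequentially.
assert elapsed < 0.28
```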
Workload-Specific Tuning: Depending on the kind of query (aggregations, transformations, simple selects...), we tune the insert query settings to achieve the maximum possible throughput.
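A simple way to picture this is a lookup table from workload kind to settings. The setting names below (`max_insert_threads`, `max_insert_block_size`) are real ClickHouse settings, but the values and the workload categories are purely illustrative, not Tinybird's actual tuning.

```python
# Hypothetical per-workload tuning table. Setting names are real
# ClickHouse settings; the values are illustrative only.
POPULATE_SETTINGS = {
    "aggregation": {"max_insert_threads": 4, "max_insert_block_size": 1_048_576},
    "transformation": {"max_insert_threads": 8, "max_insert_block_size": 524_288},
    "simple_select": {"max_insert_threads": 16, "max_insert_block_size": 262_144},
}


def settings_for(populate_kind: str) -> dict:
    # Fall back to conservative defaults for unknown workload kinds.
    return POPULATE_SETTINGS.get(populate_kind, {"max_insert_threads": 1})
```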
The result is a robust, scalable system that can handle the most demanding populate operations while maintaining the reliability and performance that Tinybird users expect.
For more details on the technology behind it, stay tuned for a more thorough technical post we will publish soon.