Branches¶
Tinybird branches allow you to develop and test your project in ephemeral environments using production data.
Create a branch¶
tb branch create preview_1
If you want to use production data, you can use the --last-partition flag to bring the last partition of the production data into the branch.
tb branch create preview_1 --last-partition
Start branch¶
tb --branch=preview_1 dev
Keep this terminal running while you are working in the branch.
tb dev will watch for changes in your project and rebuild it automatically.
» Building project...
✓ datasources/user_actions.datasource created
✓ endpoints/user_actions_line_chart.pipe created
✓ Rebuild completed in 0.2s
If you stop the process with Ctrl+C, the branch session stops as well.
» Received shutdown signal, stopping...
✓ Branch 'preview_1' session stopped
Make changes to your project¶
While tb --branch=preview_1 dev is running, you can edit files in your project and see the changes automatically applied in your branch.
Using your editor¶
Open your editor of choice and start editing your project. The branch will automatically detect the change and rebuild your project:
» Building project...
✓ datasources/user_actions.datasource updated
✓ Rebuild completed in 0.3s
Using Tinybird UI¶
You can also use the Tinybird UI to edit your project. Run the following command to open the Tinybird UI pointing at your branch:
tb --branch=preview_1 open
tb dev exposes your project as an API, so you can edit it directly in the browser and see changes applied automatically.
When to use branches¶
Branches are a great way to work against real production data without touching production. Use them in the following scenarios:
- You want to test your changes with real production data.
- You work with preview environments in your CI/CD pipeline before deploying to production.
- You don't use Docker and you want to test your changes without affecting production.
Test with connector data¶
When your project uses connector data sources (Kafka, S3, or GCS), you can test them in branches using the --with-connections flag and dedicated CLI commands. This lets you validate schema changes, test pipelines, and verify data transformations against real production data without affecting your production environment.
Enable connectors in branches¶
Use --with-connections when building or starting a branch:
tb --branch=preview_1 dev --with-connections
Or when building directly:
tb --branch=preview_1 build --with-connections
This creates data linkers for your connector data sources (S3, Kafka, GCS) in the branch.
Kafka: Pause and resume ingestion¶
By default, Kafka ingestion is stopped in branches. Use tb datasource start to begin ingesting and tb datasource stop to pause.
Each time you start ingestion, a new consumer group is created with a unique ID for the branch. This means ingestion starts from the latest offset, not from where it previously left off. Consumer group IDs are unique and don't collide with production or other branches. This is by design — branches are ephemeral testing environments, not production replicas.
Because each start creates a new consumer group, previous consumer groups become orphaned. Depending on your Kafka cluster's consumer group TTL (offsets.retention.minutes), these orphan groups may persist until they expire. Keep this in mind if you start and stop ingestion frequently across multiple branches.
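If orphaned groups accumulate faster than your cluster expires them, you can list and delete inactive groups with Kafka's own tooling. This is a sketch: the `tb_branch_` prefix and the broker address are assumptions, so inspect the actual group IDs your branches create before filtering.

```shell
# List consumer groups on the cluster and filter for branch-created ones.
# (Replace broker:9092 with your bootstrap server; the grep pattern is
# an assumption -- check your actual group IDs first.)
kafka-consumer-groups.sh --bootstrap-server broker:9092 --list \
  | grep '^tb_branch_' \
  | while read -r GROUP; do
      # Delete each orphaned group; this only succeeds for inactive groups.
      kafka-consumer-groups.sh --bootstrap-server broker:9092 \
        --delete --group "$GROUP"
    done
```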
tb --branch=preview_1 datasource start my_kafka_datasource
# ... observe data flowing in, test your pipelines ...
tb --branch=preview_1 datasource stop my_kafka_datasource
S3/GCS: Import sample data¶
Instead of syncing all files from the bucket (as in production), you can import a small sample to validate your schemas and pipelines. The sample import runs as a separate job — it doesn't affect production sync state or offsets.
Use the API to trigger a sample import:
curl -X POST "https://api.tinybird.co/v0/datasources/my_datasource/sample" \
-H "Authorization: Bearer $TB_TOKEN" \
-H "Content-Type: application/json" \
-d '{"max_files": 1}'
The response includes a job_id to track progress via GET /v0/jobs/{job_id}. You can import up to 10 files per request.
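Putting the request and the job check together, a minimal polling loop might look like the following sketch. It assumes `TB_TOKEN` is set, `my_datasource` is your data source name, and `python3` is available for JSON parsing; the `status` values checked are assumptions, so consult the Jobs API response for the exact states.

```shell
#!/bin/sh
# Trigger a sample import (up to 10 files per request).
RESPONSE=$(curl -s -X POST "https://api.tinybird.co/v0/datasources/my_datasource/sample" \
  -H "Authorization: Bearer $TB_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"max_files": 3}')

# Extract the job ID from the response.
JOB_ID=$(echo "$RESPONSE" | python3 -c 'import sys, json; print(json.load(sys.stdin)["job_id"])')

# Poll the Jobs API until the job leaves an in-progress state.
# (The "waiting"/"working" state names are assumptions.)
while true; do
  STATUS=$(curl -s "https://api.tinybird.co/v0/jobs/$JOB_ID" \
    -H "Authorization: Bearer $TB_TOKEN" \
    | python3 -c 'import sys, json; print(json.load(sys.stdin)["status"])')
  echo "job $JOB_ID: $STATUS"
  [ "$STATUS" != "waiting" ] && [ "$STATUS" != "working" ] && break
  sleep 2
done
```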
This is useful for:
- Validating that your schema matches the data format.
- Testing downstream pipes and endpoints.
- Verifying data transformations.
Connectors in branches are meant for testing and validation, not for replicating production workloads.