Creating CI/CD workflows is crucial for automating and streamlining the software development lifecycle. This means:
- Faster and more reliable releases
- Fewer manual errors
- Better team collaboration
- Consistent environments
- Easier rollbacks and improved scalability
In this post, we'll break down the CI/CD setup used to integrate the electricity visualization project into Tinybird Forward. We'll go through the current setup, highlight key steps, and point out what's new compared to Tinybird Classic.
Let's take the electricity data visualization project built with Tinybird Forward as our example. There's already a blog post explaining the code and the project's scope, so now we'll integrate that project into a CI/CD workflow to make things more professional.
What is Tinybird Forward?
Tinybird Forward is a major evolution of the Tinybird platform, designed with developers in mind, especially for building real-time data apps and AI-native workflows.
Key features include:
- Local-first development with `tb local`: Run Tinybird in a local container to build and test with instant feedback, no more 30-second builds.
- Data-as-code: Everything lives as plain text files in Git.
- Simplified CI/CD with single-command deploys, automatic schema migrations, and end-to-end tests.
CI/CD Workflow Structure
We've structured our workflow into three stages:
- Local Testing: Develop and test changes locally with `tb local`.
- Staging Deployment: When a pull request is merged into the `staging` branch, a deployment is created in the cloud and a new pull request is auto-generated to promote those changes to `main`.
- Production Deployment: Once changes in staging are verified, the `main` PR is merged and the staging deployment is promoted to live.
This gives us a safe and automated process with clear checkpoints to prevent accidental mistakes from hitting production.
CI (Continuous Integration)
The CI pipeline runs on pull requests targeting `main` or `staging`. Here's what it does:
```yaml
name: Tinybird - CI Workflow

on:
  workflow_dispatch:
  pull_request:
    branches: [main, staging]
    types: [opened, reopened, labeled, unlabeled, synchronize]

concurrency: ${{ github.workflow }}-${{ github.event.pull_request.number }}

env:
  TINYBIRD_HOST: ${{ secrets.TINYBIRD_HOST }}
  TINYBIRD_TOKEN: ${{ secrets.TINYBIRD_TOKEN }}

jobs:
  ci:
    runs-on: ubuntu-latest
    services:
      tinybird:
        image: tinybirdco/tinybird-local:latest
        ports:
          - 7181:7181
    steps:
      - uses: actions/checkout@v3
      - name: Install Tinybird CLI
        run: curl https://tinybird.co | sh
      - name: Build project
        run: tb build
      - name: Test Tinybird project
        run: tb test run
      - name: Run Python tests
        run: |
          pip install pytest pyyaml
          PYTHONPATH=ree_data_tracker/src pytest ree_data_tracker/tests
      - name: Deployment check
        run: tb --cloud --host ${{ env.TINYBIRD_HOST }} --token ${{ env.TINYBIRD_TOKEN }} deploy --check
```
The steps of this workflow are:
- Install the Tinybird CLI
- Build the project
- Test the Python code in `ree_data_tracker`
- Run a deployment check on the Tinybird code to verify that everything can be promoted to the cloud
You might notice we were missing Tinybird tests at first. What happens when you change an endpoint or a data source? To fix this, we added this step:
```yaml
- name: Run Tinybird test
  run: |
    cd tinybird
    tb test run
```
Simple and powerful.
CD (Continuous Deployment)
Now let's break down the CD part. CD automatically promotes tested changes through staging to production.
deploy-staging
Runs when a PR is merged into the `staging` branch, using the `TINYBIRD_TOKEN` of the staging workspace. It:
- Checks differences between `main` and `staging`
- Installs the Tinybird Forward CLI
- Creates a deployment on Tinybird Cloud
- Creates a PR to promote changes to `main`
This way, changes are deployed to a staging environment while the PR awaits merge.
deploy-production
Triggered when a PR is merged into `main`, using the `TINYBIRD_TOKEN` of the production workspace. It:
- Installs the Tinybird CLI
- Promotes the existing staging deployment to live
Bonus: A separate workflow ensures only PRs from `staging` can be merged into `main`.
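One way to enforce that guard, sketched here under the assumption that the repo's actual workflow may differ, is a small GitHub Actions job on PRs to `main` that fails when the source branch isn't `staging`:

```yaml
name: Guard main

on:
  pull_request:
    branches: [main]

jobs:
  check-source-branch:
    runs-on: ubuntu-latest
    steps:
      # github.head_ref is the source branch of the pull request
      - name: Fail unless the PR comes from staging
        run: |
          if [ "${{ github.head_ref }}" != "staging" ]; then
            echo "PRs to main must come from staging"
            exit 1
          fi
```

Combined with branch protection requiring this check, direct feature-branch PRs into `main` can never merge.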
Staging Dashboard Setup
Once a staging deployment is created, you'll probably want to test it visually. The dashboard uses a `__tb__deployment` parameter to target the correct resources in staging; it's a copy of the production dashboard, pointed at the staging deployment.
To fetch the deployment ID:
```shell
tb --cloud deployment ls
```

Output:

| ID | Status  | Created at          |
|----|---------|---------------------|
| 40 | Staging | 2025-06-17 13:42:28 |
| 39 | Live    | 2025-06-16 13:38:57 |
We then add a step to the staging CD job that substitutes the staging deployment ID into the dashboard JSON:
```yaml
- name: Set deployment ID in Staging dashboard JSON
  run: |
    echo "🔧 Getting Staging deployment ID from Tinybird..."
    DEPLOYMENT_ID=$(tb --cloud --host ${{ env.TINYBIRD_HOST }} --token ${{ env.TINYBIRD_TOKEN }} deployment ls | awk '/Staging/ { print $2 }')
    if [ -z "$DEPLOYMENT_ID" ]; then
      echo "⚠️ No active staging deployment found. Skipping JSON update."
      exit 0
    fi
    cd grafana/dashboards
    jq --arg id "$DEPLOYMENT_ID" '(.panels[].targets[].url_options.params[] | select(.key == "__tb__deployment").value) = $id' \
      stg_electric_analysis.json > tmp && mv tmp stg_electric_analysis.json
    git config --global user.name "GitHub Actions"
    git config --global user.email "actions@github.com"
    git add stg_electric_analysis.json
    git commit -m "Update staging dashboard with deployment ID $DEPLOYMENT_ID"
    git push origin staging
```
Now your staging dashboard will point to the right deployment.
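If `jq` isn't available, the same substitution is easy to express in Python. A sketch assuming the dashboard JSON has the shape the `jq` filter targets (`panels[].targets[].url_options.params` holding key/value pairs):

```python
import json

def set_tb_deployment(dashboard: dict, deployment_id: str) -> dict:
    """Set the value of every __tb__deployment param, mirroring the jq filter."""
    for panel in dashboard.get("panels", []):
        for target in panel.get("targets", []):
            for param in target.get("url_options", {}).get("params", []):
                if param.get("key") == "__tb__deployment":
                    param["value"] = deployment_id
    return dashboard

# Tiny example document with the assumed structure:
doc = {
    "panels": [
        {"targets": [{"url_options": {"params": [
            {"key": "__tb__deployment", "value": "39"},
            {"key": "token", "value": "..."},
        ]}}]}
    ]
}
updated = set_tb_deployment(doc, "40")
print(json.dumps(updated["panels"][0]["targets"][0]["url_options"]["params"][0]))
# → {"key": "__tb__deployment", "value": "40"}
```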
Example: Add a New Graph
Let's create a new graph to show the percentage of electricity generation by technology over time. The data is already in `generation_mv`, so we just need a new endpoint:
`generation_percentage_by_tech_ts.pipe`:

```
NODE generation_by_tech_node
DESCRIPTION >
    Generation timeseries

SQL >
    %
    SELECT
        toTimezone(datetime, 'Europe/Madrid') datetime,
        metric_name,
        value
    FROM generation_mv FINAL
    WHERE 1=1
        {% if defined(start_datetime) %}
        AND toTimezone(datetime, 'Europe/Madrid') >= {{ DateTime(start_datetime) }}
        {% end %}
        {% if defined(end_datetime) %}
        AND toTimezone(datetime, 'Europe/Madrid') <= {{ DateTime(end_datetime) }}
        {% end %}

NODE generation_total_by_time
DESCRIPTION >
    Total generation by time

SQL >
    SELECT
        datetime,
        sum(value) total_generation
    FROM generation_by_tech_node
    WHERE value >= 0
    GROUP BY datetime

NODE percentage_calculation
DESCRIPTION >
    Calculation of the percentages

SQL >
    SELECT
        gtec.datetime,
        gtec.metric_name,
        gtec.value / ls.total_generation value
    FROM generation_by_tech_node gtec
    LEFT JOIN generation_total_by_time ls
        ON gtec.datetime = ls.datetime

TYPE ENDPOINT
```
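The percentage logic is easy to sanity-check outside Tinybird. A Python sketch of the same two steps (sum per timestamp, then each value divided by its timestamp's total), using made-up rows:

```python
from collections import defaultdict

def generation_percentages(rows):
    """rows: (datetime, metric_name, value) tuples. Returns value / total
    per datetime, mirroring the pipe's total and percentage nodes
    (negative values are excluded from totals, as in WHERE value >= 0)."""
    totals = defaultdict(float)
    for dt, _, value in rows:
        if value >= 0:
            totals[dt] += value
    # Divide each row's value by the total generation at its timestamp
    return [(dt, name, value / totals[dt]) for dt, name, value in rows if totals[dt]]

rows = [
    ("2025-06-17 13:00", "wind", 60.0),
    ("2025-06-17 13:00", "solar", 40.0),
]
print(generation_percentages(rows))
# → [('2025-06-17 13:00', 'wind', 0.6), ('2025-06-17 13:00', 'solar', 0.4)]
```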
Push to GitHub, open a PR to `staging`, and let the CI/CD do the rest.
Once the PR is merged:

```shell
git checkout staging
git pull
```
You'll see the new graph in the STG - Electric System dashboard. If all looks good, merge the auto-generated PR to `main` and the deployment goes live.
Bonus: Schema Iteration
Tinybird Forward makes schema changes a breeze.
With Tinybird Classic, changing a schema (e.g., column types or sorting keys) required manually recreating and repopulating the data source. With Forward, schema changes are handled for you: just modify the `.datasource` file and push; the migration is automatic.
Need to change a datatype? Just add a `FORWARD_QUERY` to the `.datasource` file to cast to the new type.
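For illustration only, such a change can look roughly like this (hypothetical column names, and the exact `FORWARD_QUERY` syntax may differ; check the docs):

```
SCHEMA >
    `datetime` DateTime,
    `value` Float64

FORWARD_QUERY >
    SELECT datetime, CAST(value AS Float64) AS value
```

The schema declares the new column type, while the forward query tells Tinybird how to map existing data into it.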
Check the Forward schema evolution docs for examples.
Conclusion
Tinybird Forward is built for developers. It enforces data-as-code and prevents accidental changes via the UI. While the CI/CD approach differs slightly from Classic, the core idea remains: automate deployments safely.
Testing in staging before promoting to production gives you confidence and control. It eliminates surprises, avoids downtime, and makes dev work a lot more fun.