How to ingest JSON data

In this guide you'll learn how to ingest unstructured data, such as JSON, into Tinybird.

A typical scenario involves a document-based database or nested records in your data warehouse. In those cases, it is sometimes hard to flatten the records so they conform to a typical CSV file structure in which each field represents a column.

For cases like this, it's easier to export your ``JSON`` objects as ``String`` values in a CSV file, ingest them into Tinybird, and then use the built-in ``JSON`` functions to prepare the data for real-time analytics as it's being ingested.

Ingesting JSON data

As an example, let's use a ``JSON`` object like this one, which contains arrays, objects, and arrays of objects:
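The object below is a hypothetical example; the attribute names (``tags``, ``user``, ``items``) are placeholders, not fields your data needs to have. A minimal Python sketch that builds and serializes such an object:

```python
import json

# Hypothetical event: contains an array, a nested object, and an array of objects
event = {
    "id": 1,
    "tags": ["clicks", "mobile"],
    "user": {"name": "Ada", "country": "ES"},
    "items": [
        {"sku": "x1", "qty": 2},
        {"sku": "x2", "qty": 1},
    ],
}

print(json.dumps(event))
```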

To ingest it into a Tinybird Data Source, all you have to do is convert it to a ``String``, taking into account that it needs to be double quoted so the Tinybird CSV parser doesn't wrongly detect delimiters:
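As a sketch, this conversion can be done with Python's standard ``csv`` module, which wraps the field in double quotes and doubles every embedded quote by default:

```python
import csv
import io
import json

event = {"id": 1, "tags": ["clicks", "mobile"]}
row_json = json.dumps(event)

# QUOTE_ALL wraps the field in double quotes and doubles every embedded quote
buf = io.StringIO()
csv.writer(buf, quoting=csv.QUOTE_ALL).writerow([row_json])
print(buf.getvalue())  # "{""id"": 1, ""tags"": [""clicks"", ""mobile""]}"
```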

{% tip-box title="Double quotes" %}Note that the double quotes around each attribute and value are duplicated.{% tip-box-end %}

Now you can ingest it as a regular CSV file using the Import API:
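A hedged sketch of that call in Python: the Data Source name and the token are placeholders, and the request is only built here, not sent (uncomment the ``urlopen`` line and attach the CSV body to actually send it):

```python
import urllib.parse
import urllib.request

# Placeholders: replace with your own Data Source name and Auth Token
params = urllib.parse.urlencode({"name": "json_events", "mode": "append"})
url = f"https://api.tinybird.co/v0/datasources?{params}"

req = urllib.request.Request(url, method="POST")
req.add_header("Authorization", "Bearer <YOUR_AUTH_TOKEN>")
# urllib.request.urlopen(req, data=csv_bytes)  # attach the CSV body and send

print(req.full_url)
```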

Instead of duplicating double quotes, you can use an escape character, for instance a backslash:
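A minimal sketch of backslash-escaping the same serialized object in Python:

```python
import json

row_json = json.dumps({"id": 1, "tags": ["clicks"]})

# Escape every embedded double quote with a backslash instead of doubling it
escaped = row_json.replace('"', '\\"')
csv_line = f'"{escaped}"'
print(csv_line)  # "{\"id\": 1, \"tags\": [\"clicks\"]}"
```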

In this case, you have to provide the ``dialect_escapechar`` parameter to the Import API, like this:
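For example, adding the parameter to the request query string (the Data Source name is a placeholder):

```python
import urllib.parse

# dialect_escapechar tells the CSV parser which character escapes quotes
params = urllib.parse.urlencode({
    "name": "json_events",
    "mode": "append",
    "dialect_escapechar": "\\",
})
print(params)  # ...&dialect_escapechar=%5C
```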

Now you have a single-column Data Source containing the ``JSON`` ``String``, which you can process further.

For instance, to flatten the ``JSON``, you can run a SQL query like this one:
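In a Tinybird Pipe you would use ClickHouse-style functions such as ``JSONExtractString``, ``JSONExtractInt``, or ``JSONExtractArrayRaw`` over the JSON column. As an illustration of what that flattening produces, here is the equivalent in Python (the column names are assumptions):

```python
import json

raw = '{"id": 1, "user": {"name": "Ada", "country": "ES"}, "tags": ["clicks"]}'
doc = json.loads(raw)

# Roughly what JSONExtractInt(json, 'id'), JSONExtractString(json, 'user', 'name')
# and JSONExtractArrayRaw(json, 'tags') would yield as flat columns in SQL
flat = {
    "id": doc["id"],
    "user_name": doc["user"]["name"],
    "user_country": doc["user"]["country"],
    "tags": doc["tags"],
}
print(flat)
```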

{% tip-box title="Built-in JSON functions" %}You can see the available built-in JSON functions by typing ``JSON`` in any Pipe in the Tinybird UI, which offers integrated documentation and auto-complete.{% tip-box-end %}

Integrated JSON built-in functions and auto-complete dialog

This and other transformations can be done on the fly, as data is being ingested, using materialized views.